Unnamed: 0 (int64, 0 to 15.9k) | cleaned_code (string, lengths 67 to 124k, ⌀ = null) | cleaned_prompt (string, lengths 168 to 30.3k, ⌀ = null) |
---|---|---|
15,300 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
cml = pycml.io.examples.read_one_cml()
# Remove artifacts and plot data
cml.process.quality_control.set_to_nan_if('tx', '>=', 100)
cml.process.quality_control.set_to_nan_if('rx', '==', -99.9)
cml.plot_data(['tx', 'rx', 'txrx']);
cml.process.wet_dry.std_dev(window_length=30, threshold=0.8)
cml.plot_data(['txrx', 'wet']);
cml.process.baseline.constant()
cml.process.baseline.calc_A()
ax = cml.plot_data(['txrx', 'wet', 'baseline', 'A']);
ax[0].set_xlim('2016-10-25 00:00', '2016-10-25 10:00');
# Save a copy of these results for comparing them to the linear baseline later
baseline_constant = cml.channel_1.data.baseline.copy()
A_constant = cml.channel_1.data.A.copy()
cml.process.baseline.linear()
cml.process.baseline.calc_A()
ax = cml.plot_data(['txrx', 'wet', 'baseline', 'A']);
ax[0].set_xlim('2016-10-25 00:00', '2016-10-25 10:00');
# Save a copy of these results for comparing them to the constant baseline
baseline_linear = cml.channel_1.data.baseline.copy()
A_linear = cml.channel_1.data.A.copy()
fig, ax = plt.subplots(2, 1, figsize=(10, 4), sharex=True)
ax[0].plot(baseline_constant, color='C3', label='constant baseline')
ax[0].plot(baseline_linear, color='C4', label='linear baseline')
ax[1].plot(A_constant, color='C3', label='constant baseline')
ax[1].plot(A_linear, color='C4', label='linear baseline')
ax[0].set_xlim('2016-10-25 00:00', '2016-10-25 10:00');
ax[0].set_ylabel('baseline')
ax[1].set_ylabel('A')
ax[0].legend();
# Exchange the current `wet` pd.Series in `channel_1` with a different series of floats with some `NaN`s
wet_temp = cml.channel_1.data.wet.astype(float)
wet_temp['2016-10-25 04:45': '2016-10-25 05:00'] = np.NaN
cml.channel_1.data.wet = wet_temp
cml.process.baseline.constant()
cml.process.baseline.calc_A()
ax = cml.plot_data(['txrx', 'wet', 'baseline', 'A']);
ax[0].set_xlim('2016-10-25 00:00', '2016-10-25 10:00');
cml.process.baseline.linear()
cml.process.baseline.calc_A()
ax = cml.plot_data(['txrx', 'wet', 'baseline', 'A']);
ax[0].set_xlim('2016-10-25 00:00', '2016-10-25 10:00');
cml.process.baseline.linear(ignore_nan=True)
cml.process.baseline.calc_A()
ax = cml.plot_data(['txrx', 'wet', 'baseline', 'A']);
ax[0].set_xlim('2016-10-25 00:00', '2016-10-25 10:00');
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Do a simple wet/dry classification
Step2: Derive a constant baseline
Step3: Or derive a linear baseline
Step4: Compare the results from constant and linear baseline
Step5: NaN handling
Step6: Constant baseline
Step7: Linear baseline (default)
Step8: Linear baseline (ignoring NaNs)
|
15,301 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-3, 3, 100)
print(x)
rng = np.random.RandomState(42)
y = np.sin(4 * x) + x + rng.uniform(size=len(x))
plt.plot(x, y, 'o');
print('Before: ', x.shape)
X = x[:, np.newaxis]
print('After: ', X.shape)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
print('Weight coefficients: ', regressor.coef_)
print('y-axis intercept: ', regressor.intercept_)
min_pt = X.min() * regressor.coef_[0] + regressor.intercept_
max_pt = X.max() * regressor.coef_[0] + regressor.intercept_
plt.plot([X.min(), X.max()], [min_pt, max_pt])
plt.plot(X_train, y_train, 'o');
y_pred_train = regressor.predict(X_train)
plt.plot(X_train, y_train, 'o', label="data")
plt.plot(X_train, y_pred_train, 'o', label="prediction")
plt.plot([X.min(), X.max()], [min_pt, max_pt], label='fit')
plt.legend(loc='best')
y_pred_test = regressor.predict(X_test)
plt.plot(X_test, y_test, 'o', label="data")
plt.plot(X_test, y_pred_test, 'o', label="prediction")
plt.plot([X.min(), X.max()], [min_pt, max_pt], label='fit')
plt.legend(loc='best');
regressor.score(X_test, y_test)
# %load solutions/06B_lin_with_sine.py
from sklearn.neighbors import KNeighborsRegressor
kneighbor_regression = KNeighborsRegressor(n_neighbors=1)
kneighbor_regression.fit(X_train, y_train)
y_pred_train = kneighbor_regression.predict(X_train)
plt.plot(X_train, y_train, 'o', label="data", markersize=10)
plt.plot(X_train, y_pred_train, 's', label="prediction", markersize=4)
plt.legend(loc='best');
y_pred_test = kneighbor_regression.predict(X_test)
plt.plot(X_test, y_test, 'o', label="data", markersize=8)
plt.plot(X_test, y_pred_test, 's', label="prediction", markersize=4)
plt.legend(loc='best');
kneighbor_regression.score(X_test, y_test)
# %load solutions/06A_knn_vs_linreg.py
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear Regression
Step2: Again, we start by splitting our dataset into a training (75%) and a test set (25%)
Step3: Next, we use the learning algorithm implemented in LinearRegression to fit a regression model to the training data
Step4: After fitting to the training data, we parameterized a linear regression model with the following values.
Step5: Since our regression model is a linear one, the relationship between the target variable (y) and the feature variable (x) is defined as $y = w_1 x + w_0$, where $w_1$ is the weight coefficient and $w_0$ is the y-axis intercept printed above.
Step6: Similar to the estimators for classification in the previous notebook, we use the predict method to predict the target variable. And we expect these predicted values to fall onto the line that we plotted previously
Step7: As we can see in the plot above, the line is able to capture the general slope of the data, but not many details.
Step8: Again, scikit-learn provides an easy way to evaluate the prediction quantitatively using the score method. For regression tasks, this is the R<sup>2</sup> score. Another popular way would be the Mean Squared Error (MSE). As its name implies, the MSE is simply the average squared difference between the predicted and actual target values (a short sketch of both metrics follows this row).
Step9: Exercise (the corresponding solution file is loaded in the code above).
Step10: KNeighborsRegression
Step11: Again, let us look at the behavior on training and test set
Step12: On the training set, we do a perfect job
Step13: On the test set, we also do a better job of capturing the variation, but our estimates look much messier than before.
Step14: Much better than before! Here, the linear model was not a good fit for our problem; it was lacking in complexity and thus under-fit our data.
|
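A short sketch of the two metrics mentioned in Step 8 of the row above, assuming the fitted `regressor`, `X_test` and `y_test` from that code (scikit-learn's mean_squared_error and r2_score; this block is an added illustration, not part of the original notebook):
from sklearn.metrics import mean_squared_error, r2_score
y_pred_test = regressor.predict(X_test)
print('MSE: ', mean_squared_error(y_test, y_pred_test))  # average squared difference between actual and predicted targets
print('R^2: ', r2_score(y_test, y_pred_test))  # the same value that regressor.score(X_test, y_test) reports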
15,302 | <ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
from pyensae.datasource import download_data
data_train = download_data("off_train_all.zip",
url="https://raw.githubusercontent.com/sdpython/data/master/OpenFoodFacts/")
data_test = download_data("off_test_all.zip",
url="https://raw.githubusercontent.com/sdpython/data/master/OpenFoodFacts/")
import pandas
df = pandas.read_csv("off_test_all.txt", sep="\t", encoding="utf8", low_memory=False)
df.head()
df.head(n=2).T[:50]
df.head(n=2).T[50:100]
df.head(n=2).T[100:150]
df.head(n=2).T[150:]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Retrieve the data
|
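The repeated `df.head(n=2).T[...]` calls in the row above inspect a very wide table in slices; here is a small alternative sketch using only standard pandas (an added illustration, nothing beyond the `df` already loaded is assumed):
print(df.shape)  # number of rows and columns
print(df.columns.tolist()[:20])  # first 20 column names
df.info(verbose=False)  # compact summary of dtypes and memory use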
15,303 | <ASSISTANT_TASK:>
Python Code:
#Invoke numPy and matplotlib in one command
%pylab inline
#IMPORTING STOCK DATA USING PANDAS
from pandas.io.data import DataReader
from datetime import datetime
goog = DataReader("GOOG", "yahoo", datetime(2014,4,1), datetime(2015,3,31))
stkp = goog["Adj Close"]
print stkp
goog.head()
goog.index
t = goog.index
plot(t,stkp)
xlabel("Date")
ylabel("Stock Price")
n = len(t)-1
rets = zeros(n)
for j in range(n):
rets[j] = log(stkp[j+1]/stkp[j])
plot(rets)
ylabel("Returns")
hist(rets,25)
goog.describe()
import scipy.stats as ss
print("Skewness = ",ss.skew(rets))
print("Kurtosis = ",ss.kurtosis(rets))
#CHECK IF THIS IS EXCESS KURTOSIS or PLAIN KURTOSIS
x = randn(1000000)
print(ss.kurtosis(x))
#SENDING DATA VARIABLES TO R
%load_ext rpy2.ipython
#THIS ALLOWS US TO USE R INSIDE THE NOTEBOOK
#SENDS DATA FROM PYTHON TO R
%Rpush stkp
#PREFIX NEEDED TO CALL R INSTEAD OF PYTHON
%R plot(stkp,type="l",col="red",lwd=2)
#GETTING DATA BACK FROM R TO PYTHON
%R ret = diff(log(stkp))
#GET DATA BACK FROM R TO PYTHON
ret = %Rget ret
plot(ret)
%R print(summary(ret))
%%R
library(quantmod)
getSymbols(c("C","AAPL","CSCO","YHOO","IBM"))
%%R
citi = as.matrix(C$C.Adjusted)
aapl = as.matrix(AAPL$AAPL.Adjusted)
csco = as.matrix(CSCO$CSCO.Adjusted)
yhoo = as.matrix(YHOO$YHOO.Adjusted)
ibm = as.matrix(IBM$IBM.Adjusted)
%%R
stkdata = data.frame(cbind(citi,aapl,csco,yhoo,ibm))
rn = rownames(stkdata)
head(stkdata)
stkdata = %Rget stkdata
rn = %Rget rn
stkdata
rn
import pandas as pd
stk = pd.DataFrame(stkdata)
stk = stk.T
stk.head()
stk.columns=["C","AAPL","CSCO","YHOO","IBM"]
stk.index = rn
stk.head()
plot(stk["AAPL"])
stk.ix['2007-01-03']
stk.ix['2007-01-03']["AAPL"]
stk["extra"] = 1.0
stk.head()
sort(stk["AAPL"])
stk.head()
stk = stk.drop("extra",axis=1) #IF AXIS=0 (default), THEN ROW IS DROPPED
stk.head()
stk[["AAPL","IBM"]].head()
stk[stk["AAPL"]<11]
stk[stk["AAPL"]<11]["IBM"]
(stk < 50).head()
sum(stk)
#USING FUNCTIONS ON DATA FRAMES
f = lambda x: x.max() - x.min()
stk.apply(f)
def g(x):
return pd.Series([x.mean(),x.std(),x.min(),x.max()], index=['mean','stdev','min','max'])
stk.apply(g)
stk.sort_index(axis=1,ascending=False).head()
stk.sum()
stk.mean()
stk.describe()
stk.diff().head()
stk.pct_change().head()
stk.pct_change().mean()*252.0
stk.pct_change().std()*sqrt(252.0)
rets = stk.pct_change()
rets.corr()
rets.cov()
sqrt(diag(rets.cov())*252.0)
rets.corrwith(rets.AAPL)
import pandas.io.data as pid
panel = pd.Panel(dict((stock, pid.get_data_yahoo(stock,'1/1/2014','2/28/2015')) for stock in ['C','AAPL','CSCO','YHOO','IBM']))
panel
panel = panel.swapaxes('items','minor')
panel
panel['Adj Close'].head()
panel.ix[:,'1/3/2014',:]
import pandas as pd
data = pd.read_table("markowitzdata.txt")
data.head()
gdata = pd.read_csv("goog.csv")
gdata.head()
scatter(data["mktrf"],data["IBM"])
xlabel("Market return")
ylabel("IBM return")
grid(True)
from scipy import stats
y = data["IBM"]
x = data["mktrf"]
b, a, r_value, p_value, std_err = stats.linregress(x,y)
print "Intercept = ",a
print "slope (beta) = ",b
import pandas as pd
import pandas.io.data as web
aapl = web.DataReader('AAPL',data_source='google',start='1/1/2014',end='4/1/2015')
aapl.head()
aapl.tail()
aapl['cont_ret'] = log(aapl['Close']/aapl['Close'].shift(1))
aapl.head()
aapl['Vols'] = pd.rolling_std(aapl['cont_ret'],window=5)*sqrt(252.0)
aapl.tail()
aapl.head(10)
aapl[['Close','Vols']].plot(subplots=True,color='blue',figsize=(8,6))
sd=0.20; cv=0.01; m=100
n = range(1,m+1)
sd_p = zeros(m)
for j in n:
cv_mat = matrix(ones((j,j))*cv)
fill_diagonal(cv_mat,sd**2)
w = matrix(ones(j)*(1.0/j)).T
sd_p[j-1] = sqrt((w.T).dot(cv_mat).dot(w))
plot(n,sd_p)
xlabel('#stocks')
ylabel('stddev of portfolio')
grid()
g_ret = ((1+0.30)*(1-0.20))**0.5-1
print "Geometric mean return = ", g_ret
a_ret = 0.5*(0.30+(-0.20))
print "Arithmetic mean return per year = ",a_ret
ret = zeros(4)
ret[0] = (1+0.3)*(1+0.3)
ret[1] = (1+0.3)*(1-0.2)
ret[2] = (1-0.2)*(1+0.3)
ret[3] = (1-0.2)*(1-0.2)
two_year_return = 0.25*sum(ret)
print "Expected two year return = ", two_year_return
print "Expected two year return (annualized) = ", two_year_return**0.5
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SHARING VARIABLES FROM PYTHON TO R
Step2: GETTING STOCK DATA FROM R INTO PYTHON
Step3: PANDAS IS AWESOME
Step4: STACKING UP DATA IN PANELS
Step5: READING DATA FROM FILES
Step6: BRINGING IT ALL TOGETHER
Step7: DIVERSIFICATION
Step8: Arithmetic vs Geometric mean returns
Step9: Which one is more realistic in predicting future expected returns over the next two years? Note that there are 4 cases to consider for outcomes, all with equal probability $1/4$. A small worked check follows this row.
|
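A small worked check of Step 9 in the row above, using the same +30% / -20% yearly returns as the code (plain arithmetic shown as comments so it is version-agnostic; this is an added illustration):
# geometric mean return: sqrt(1.3 * 0.8) - 1 ~= 0.0198, i.e. about 2% per year
# arithmetic mean return: 0.5 * (0.30 - 0.20) = 0.05, i.e. 5% per year
# expected two-year gross return: 0.25 * (1.3*1.3 + 1.3*0.8 + 0.8*1.3 + 0.8*0.8) = 1.1025
# annualized: sqrt(1.1025) - 1 = 0.05, which matches the arithmetic mean, not the geometric one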
15,304 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_csv('weights_heights.csv', index_col='Index')
data.plot(y='Height', kind='hist',
color='red', title='Height (inch.) distribution')
# Your code here
data.head()
# Your code here
data.plot(y='Weight', kind='hist', color='green',
title='Weight (inch.) distribution')
def make_bmi(height_inch, weight_pound):
METER_TO_INCH, KILO_TO_POUND = 39.37, 2.20462
return (weight_pound / KILO_TO_POUND) / \
(height_inch / METER_TO_INCH) ** 2
data['BMI'] = data.apply(lambda row: make_bmi(row['Height'],
row['Weight']), axis=1)
# Your code here
sns.pairplot(data)
def weight_category(weight):
# Your code here
if weight < 120.0:
return 1
elif weight >= 150.0:
return 3
else:
return 2
data['weight_cat'] = data['Weight'].apply(weight_category)
# Your code here
sns.boxplot(x="weight_cat", y="Height", data=data[['Height', 'weight_cat']])
# Your code here
data.plot(x='Weight', y='Height', kind='scatter',
title='Height & Weight dependence')
# Your code here
def mse(w0, w1):
error = 0.0
for i in range(len(data)):
error += (data.iloc[i]['Height'] - (w0 + w1 * data.iloc[i]['Weight']))**2
return error
data.plot(x='Weight', y='Height', kind='scatter',
title='Two lines')
x = np.arange(min(data['Weight']), max(data['Weight']))
y_1, y_2 = 60.0 + 0.05 * x, 50.0 + 0.16 * x
plt.plot(x, y_1, 'r')
plt.plot(x, y_2, 'g')
# Your code here
w1 = np.arange(-20, 20, 1)
plt.plot(w1, mse(50.0, w1))
plt.xlabel('w1')
plt.ylabel('mse')
plt.title('mse from w1 & w0 = 50.0')
plt.show()
# Your code here
from scipy import optimize
w1_opt = optimize.minimize_scalar(lambda w1: mse(50, w1), bounds=[-5, 5]).x
# Your code here
data.plot(x='Weight', y='Height', kind='scatter',
title='Height & Weight dependence')
x = np.arange(min(data['Weight']), max(data['Weight']))
y = 50.0 + w1_opt * x
plt.plot(x, y, 'g')
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d') # get current axis
# Create NumPy arrays with the point coordinates along the X and Y axes.
# Use the meshgrid method, which builds a coordinate matrix
# from the coordinate vectors. Define the desired function Z(x, y).
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
Z = np.sin(np.sqrt(X**2 + Y**2))
# Finally, use the *plot_surface* method of the
# Axes3DSubplot object. Also label the axes.
surf = ax.plot_surface(X, Y, Z)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
# Your code here
fig = plt.figure()
ax = fig.gca(projection='3d')
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
Z = [mse(X[i], Y[i]) for i in xrange(len(X))]
surf = ax.plot_surface(X, Y, Z)
ax.set_xlabel('Intercept')
ax.set_ylabel('Slope')
ax.set_zlabel('Error')
plt.show()
# Your code here
err = lambda (w0, w1): mse(w0, w1)
res = optimize.minimize(err, x0=[0, 0], method='L-BFGS-B', bounds=((-100, 100), (-5, 5)))
print res
# Your code here
data.plot(x='Weight', y='Height', kind='scatter',
title='Height & Weight dependence')
y = res.x[0] + res.x[1] * x
plt.plot(x, y, 'r')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the height and weight data (weights_heights.csv, attached to the assignment) into a Pandas DataFrame object
Step2: Most often, the first thing to do after reading the data is to look at the first few records. This helps catch data-reading errors (for example, if instead of 10 columns you got a single one whose name contains 9 semicolons). It also lets you get acquainted with the data, or at least look at the features and their nature (quantitative, categorical, etc.).
Step3: Arguments
Step4: One effective method for a first look at the data is plotting pairwise feature relationships. This creates an $m \times m$ grid of plots (m is the number of features), with histograms of the feature distributions on the diagonal and scatter plots of pairs of features off the diagonal. This can be done with the $scatter_matrix$ method of a Pandas DataFrame or with pairplot from the Seaborn library.
Step5: [3]. Build a figure showing the pairwise relationships between the features 'Height', 'Weight' and 'BMI'. Use the pairplot method of the Seaborn library.
Step6: During an initial analysis you often need to study how a quantitative feature depends on a categorical one (say, salary on employee gender). Box-and-whisker plots - boxplots from the Seaborn library - help here. A box plot is a compact way to show the statistics of a real-valued feature (mean and quartiles) across the values of a categorical feature. It also helps track 'outliers' - observations where the value of the feature differs strongly from the rest.
Step7: [5]. Build a scatter plot of height versus weight using the plot method of a Pandas DataFrame with the argument kind='scatter'. Label the figure.
Step8: Task 2. Minimizing the squared error
Step9: So, we are solving the problem
Step10: Minimizing the quadratic error function is a relatively simple task because the function is convex. Many optimization methods exist for such a problem. Let's look at how the error function depends on one parameter (the slope of the line) while the other parameter (the intercept) is held fixed.
Step11: Now use an optimization method to find the 'optimal' slope of the line approximating the height-weight relationship, with the coefficient fixed at $w_0 = 50$.
Step12: When analyzing multidimensional data, one often wants an intuitive picture of the data through visualization. Alas, with more than 3 features such pictures cannot be drawn. In practice, to visualize data in 2D or 3D, 2 or 3 principal components are extracted (we will see exactly how later in the course) and the data are displayed in the plane or in space.
Step13: Create objects of type matplotlib.figure.Figure (figure) and matplotlib.axes._subplots.Axes3DSubplot (axis).
Step14: [10]. Build a 3D plot of the error function computed in item 6 as a function of the parameters $w_0$ and $w_1$. Label the $x$ axis 'Intercept', the $y$ axis 'Slope', and the $z$ axis 'Error'.
Step15: [11]. Using the minimize method from scipy.optimize, find the minimum of the function defined in item 6 for parameter $w_0$ in the range [-100,100] and $w_1$ in the range [-5, 5]. The starting point is ($w_0$, $w_1$) = (0, 0). Use the L-BFGS-B optimization method (the method argument of minimize). On the plot from item 5 of Task 1, draw the line corresponding to the optimal values of $w_0$ and $w_1$ found. Label the axes and the plot. A closed-form cross-check is sketched after this row.
|
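As a quick closed-form cross-check of the L-BFGS-B result from Step 15 above (an added sketch, not part of the original assignment; it assumes the `data` DataFrame and `res` defined earlier):
w1_ls, w0_ls = np.polyfit(data['Weight'], data['Height'], 1)  # least-squares slope and intercept
print(w0_ls, w1_ls)  # should be close to res.x[0] (intercept) and res.x[1] (slope)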
15,305 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.io import wavfile
# Let's start with the first sample corresponding to the lower string E2
rate, myrecording = wavfile.read("samples/guitar_E2_very-long_forte_normal.wav")
print(rate, myrecording.size)
duration = 1 # seconds
fs = rate # samples by second
# Let's restrict our sample to 1 second of the recording, after 0.5 second of sound to avoid the string picking
array = myrecording[int(0.5*fs):int((0.5 + duration)*fs)]
print(array.size)
df = pd.DataFrame(array)
df.loc[25000:35000].plot()
fourier = np.fft.fft(array)
plt.plot(abs(fourier[:len(fourier)//10]))
f_max_index = np.argmax(abs(fourier[:fourier.size//2]))
freqs = np.fft.fftfreq(len(fourier))
freqs[f_max_index]*fs
# Work in progress: coming soon
rec = array
rec = rec[15000:35000]
autocorr = np.correlate(rec, rec, mode='same')
plt.plot(autocorr)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will use scipy from the Anaconda distribution to read the WAV sample files
Step2: We define the length we want to record in seconds and the sampling rate to the source file sample rate (44100 Hz)
Step3: Let's plot a section of this array to look at it first
Step4: First method
Step5: We can visualise a section of the Fourier transform to notice there is a clear fundamental frequency
Step6: We notice already things are not going to be that easy. There are different harmonics picked here, and 2 of the most important ones are comparable in amplitude.
Step7: This method detects a fundamental frequency of 248Hz, which is wrong.
Step8: WIP - a rough autocorrelation-based frequency estimate is sketched right after this row
|
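A rough way to finish the autocorrelation idea from Step 8 above: take the lag of the strongest peak after the initial decline around lag 0. This is an added sketch, not part of the original notebook; it assumes the `autocorr` array and the sample rate `fs` from the code above:
half = autocorr[len(autocorr)//2:]  # keep non-negative lags ('same' mode puts lag 0 in the middle)
d = np.diff(half)
first_rise = np.nonzero(d > 0)[0][0]  # skip the falling part right after lag 0
peak_lag = np.argmax(half[first_rise:]) + first_rise
print(float(fs) / peak_lag)  # rough fundamental frequency estimate in Hz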
15,306 | <ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Stanislas Chambon <stan.chambon@gmail.com>
# Joan Massich <mailsik@gmail.com>
#
# License: BSD Style.
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets.sleep_physionet.age import fetch_data
from mne.time_frequency import psd_welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
ALICE, BOB = 0, 1
[alice_files, bob_files] = fetch_data(subjects=[ALICE, BOB], recording=[1])
mapping = {'EOG horizontal': 'eog',
'Resp oro-nasal': 'misc',
'EMG submental': 'misc',
'Temp rectal': 'misc',
'Event marker': 'misc'}
raw_train = mne.io.read_raw_edf(alice_files[0])
annot_train = mne.read_annotations(alice_files[1])
raw_train.set_annotations(annot_train, emit_warning=False)
raw_train.set_channel_types(mapping)
# plot some data
raw_train.plot(duration=60, scalings='auto')
annotation_desc_2_event_id = {'Sleep stage W': 1,
'Sleep stage 1': 2,
'Sleep stage 2': 3,
'Sleep stage 3': 4,
'Sleep stage 4': 4,
'Sleep stage R': 5}
events_train, _ = mne.events_from_annotations(
raw_train, event_id=annotation_desc_2_event_id, chunk_duration=30.)
# create a new event_id that unifies stages 3 and 4
event_id = {'Sleep stage W': 1,
'Sleep stage 1': 2,
'Sleep stage 2': 3,
'Sleep stage 3/4': 4,
'Sleep stage R': 5}
# plot events
mne.viz.plot_events(events_train, event_id=event_id,
sfreq=raw_train.info['sfreq'])
# keep the color-code for further plotting
stage_colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
tmax = 30. - 1. / raw_train.info['sfreq'] # tmax in included
epochs_train = mne.Epochs(raw=raw_train, events=events_train,
event_id=event_id, tmin=0., tmax=tmax, baseline=None)
print(epochs_train)
raw_test = mne.io.read_raw_edf(bob_files[0])
annot_test = mne.read_annotations(bob_files[1])
raw_test.set_annotations(annot_test, emit_warning=False)
raw_test.set_channel_types(mapping)
events_test, _ = mne.events_from_annotations(
raw_test, event_id=annotation_desc_2_event_id, chunk_duration=30.)
epochs_test = mne.Epochs(raw=raw_test, events=events_test, event_id=event_id,
tmin=0., tmax=tmax, baseline=None)
print(epochs_test)
# visualize Alice vs. Bob PSD by sleep stage.
fig, (ax1, ax2) = plt.subplots(ncols=2)
# iterate over the subjects
stages = sorted(event_id.keys())
for ax, title, epochs in zip([ax1, ax2],
['Alice', 'Bob'],
[epochs_train, epochs_test]):
for stage, color in zip(stages, stage_colors):
epochs[stage].plot_psd(area_mode=None, color=color, ax=ax,
fmin=0.1, fmax=20., show=False,
average=True, spatial_colors=False)
ax.set(title=title, xlabel='Frequency (Hz)')
ax2.set(ylabel='µV^2/Hz (dB)')
ax2.legend(ax2.lines[2::3], stages)
plt.show()
def eeg_power_band(epochs):
EEG relative power band feature extraction.
This function takes an ``mne.Epochs`` object and creates EEG features based
on relative power in specific frequency bands that are compatible with
scikit-learn.
Parameters
----------
epochs : Epochs
The data.
Returns
-------
X : numpy array of shape [n_samples, 5]
Transformed data.
# specific frequency bands
FREQ_BANDS = {"delta": [0.5, 4.5],
"theta": [4.5, 8.5],
"alpha": [8.5, 11.5],
"sigma": [11.5, 15.5],
"beta": [15.5, 30]}
psds, freqs = psd_welch(epochs, picks='eeg', fmin=0.5, fmax=30.)
# Normalize the PSDs
psds /= np.sum(psds, axis=-1, keepdims=True)
X = []
for fmin, fmax in FREQ_BANDS.values():
psds_band = psds[:, :, (freqs >= fmin) & (freqs < fmax)].mean(axis=-1)
X.append(psds_band.reshape(len(psds), -1))
return np.concatenate(X, axis=1)
pipe = make_pipeline(FunctionTransformer(eeg_power_band, validate=False),
RandomForestClassifier(n_estimators=100, random_state=42))
# Train
y_train = epochs_train.events[:, 2]
pipe.fit(epochs_train, y_train)
# Test
y_pred = pipe.predict(epochs_test)
# Assess the results
y_test = epochs_test.events[:, 2]
acc = accuracy_score(y_test, y_pred)
print("Accuracy score: {}".format(acc))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred, target_names=event_id.keys()))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data
Step2: Extract 30s events from annotations
Step3: Create Epochs from the data based on the events found in the annotations
Step4: Applying the same steps to the test data from Bob
Step5: Feature Engineering
Step7: Design a scikit-learn transformer from a Python function (a minimal standalone illustration follows this row)
Step8: Multiclass classification workflow using scikit-learn
Step9: In short, yes. We can predict Bob's sleeping stages based on Alice's data.
|
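A minimal standalone illustration of the FunctionTransformer idea from the 'Design a scikit-learn transformer from a Python function' step above (an added sketch, independent of the sleep data; only NumPy and scikit-learn are assumed):
from sklearn.preprocessing import FunctionTransformer
log_tf = FunctionTransformer(np.log1p, validate=True)  # wraps a plain function as a scikit-learn transformer
print(log_tf.fit_transform(np.array([[0.0, 1.0], [2.0, 3.0]])))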
15,307 | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (1, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
s_sentences = source_text.split('\n')
t_sentences = [sentence + ' <EOS>' for sentence in target_text.split('\n')]
source_ids = [[source_vocab_to_int[word] for word in line.split()] for line in s_sentences]
target_ids = [[target_vocab_to_int[word] for word in line.split()] for line in t_sentences]
return (source_ids, target_ids)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, (None, None), name='input')
targets = tf.placeholder(tf.int32, (None, None), name='target')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
go = target_vocab_to_int['<GO>']
target_data = tf.strided_slice(target_data, [0,0], [batch_size, -1], [1,1])
aux_data = tf.fill([batch_size, 1], go)
target_data = tf.concat([aux_data, target_data],1)
return target_data
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
lstm_cell = tf.contrib.rnn.BasicLSTMCell(num_units = rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm_cell] * num_layers)
state_0, state_1 = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32)
return state_1
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
decoder = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state, name='decoder_train')
decoder_drop = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob)
decoder_outputs,_,__ = tf.contrib.seq2seq.dynamic_rnn_decoder(decoder_drop, decoder,
inputs = dec_embed_input,
sequence_length = sequence_length,
scope = decoding_scope)
return output_fn(decoder_outputs)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: Maximum length of
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn = output_fn,
encoder_state = encoder_state,
embeddings = dec_embeddings,
start_of_sequence_id = start_of_sequence_id,
end_of_sequence_id = end_of_sequence_id,
maximum_length = maximum_length,
num_decoder_symbols = vocab_size,
name = 'inference_decoder')
outputs,_,__ = tf.contrib.seq2seq.dynamic_rnn_decoder(cell = dec_cell,
decoder_fn = decoder_fn,
scope=decoding_scope,
name='inference_decoder_rnn')
return outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
dec_cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)
output = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope = dec_scope)
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
maximum_length = sequence_length
with tf.variable_scope('training') as dec_scope:
tr_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
sequence_length, dec_scope, output, keep_prob)
dec_scope.reuse_variables()
inf_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings,
start_of_sequence_id, end_of_sequence_id, maximum_length,
vocab_size, dec_scope, output, keep_prob)
return tr_logits, inf_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
encoder_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
encoder_layer = encoding_layer(encoder_input, rnn_size, num_layers,keep_prob)
decoder_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
decoder_embed = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size], minval=0))
target_embed = tf.nn.embedding_lookup(decoder_embed, decoder_input)
tr_logits, inf_logits = decoding_layer(target_embed, decoder_embed, encoder_layer, target_vocab_size,
sequence_length, rnn_size, num_layers, target_vocab_to_int,
keep_prob)
return tr_logits, inf_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 7
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 64
# Number of Layers
num_layers = 3
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.005
# Dropout Keep Probability
keep_probability = 0.85
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target_batch,
[(0,0),(0,max_seq - target_batch.shape[1]), (0,0)],
'constant')
if max_seq - batch_train_logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
word_ids = []
for word in sentence.lower().split():
try:
word_ids.append(vocab_to_int[word])
except KeyError:
word_ids.append(vocab_to_int['<UNK>'])
return word_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoding Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Sentence to Sequence
Step48: Translate
|
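A tiny usage sketch (an addition, not part of the original project) for the `sentence_to_seq` helper defined above, assuming `source_vocab_to_int` and `source_int_to_vocab` are loaded as in the checkpoint cell:
example_ids = sentence_to_seq('he saw a quantum truck .', source_vocab_to_int)
print(example_ids)
print([source_int_to_vocab[i] for i in example_ids])  # any word missing from the vocabulary shows up as <UNK>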
15,308 | <ASSISTANT_TASK:>
Python Code:
mystring = 'economics'
# return the string capitalized
mystring.upper()
# count the number of occurunces of the letter 'o'
mystring.count('o')
# tell me if the string ends with the letter 'M'
mystring.endswith('M')
import numpy as np
from geqfarm import *
myeconomy= Economy(20)
myeconomy.GAMMA
myeconomy.GAMMA = 0.9
myeconomy.print_params()
N = 5
E = Economy(N)
E.ALPHA = 0.5
E.GAMMA = 0.90
eqnE = E.smallhold_eq([E.TBAR, E.LBAR], E.s)
eqnE.w
eqnE.X
E.s[0] = 1.2
E.s
eqnE = E.smallhold_eq([E.TBAR, E.LBAR], E.s)
eqnE.w
eqnE.X
%load_ext autoreload
%autoreload 2
import numpy as np
from matplotlib import pyplot as plt
plt.rcParams["figure.figsize"] = (10, 8)
np.set_printoptions(precision=4)
%matplotlib inline
N = 5
E = Economy(N) # an instance takes N length as parameter
s = np.ones(N) # skill distribution in the economy
E.ALPHA = 0.5
E.GAMMA = 0.90
E.TBAR,E.LBAR
Xc = E.smallhold_eq([E.TBAR,E.LBAR],s)
Xc
(Xrc,Xr,wc,wr) = scene_print(E,10, detail=True)
factor_plot(E,Xrc,Xr)
TLratio_plot(E,Xrc,Xr)
E.s[-1]=1.10
(Xrc,Xr,wc,wr) = scene_print(E,10,detail=True)
factor_plot(E,Xrc,Xr)
TLratio_plot(E,Xrc,Xr)
from scipy.stats import lognorm
def intLabor(s,mu,sigma,al,ak,phi,rho,gam):
val1=LaborLandRat(s,al,ak,phi,rho,gam)
val2=lognorm.pdf(s,sigma,mu)
return val1*val2
fig, ax = plt.subplots(1, 1)
s = 0.954
mean, var, skew, kurt = lognorm.stats(s, moments='mvsk')
x = np.linspace(lognorm.ppf(0.01, s), lognorm.ppf(0.99, s), 100)
ax.plot(x, lognorm.pdf(x, s),'r-', lw=5, alpha=0.6, label='lognorm pdf')
s = 0.1 # shape parameter
x = np.linspace(lognorm.ppf(0.01, s), lognorm.ppf(0.99, s), 100)
#x = np.linspace(0.1,5, 100)
loc = 0
scale = 1
fig, ax = plt.subplots(1, 1)
rv = lognorm(s, loc, scale)
ax.plot(x, rv.pdf(x), 'ko', lw=2, label='frozen pdf')
plt.show()
lognorm.ppf(0.99, s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: python treats mystring as an instance of a string object. One then has access to a long list of attributes and methods associated with this object. In a jupyter notebook if you type the variable name mystring followed by a period and then hit the tab key you will see a list of available attributes and methods. Here are a few
Step2: Class statements to create new objects
Step3: If you take a look at the code you will see how I have used class statements to create a new prototype `Economy object. An object of type Economy has attributes such as the number of households in the economy, parameters of the production function, and arrays that summarize the initial distribution of skill, land and labor across households. Once this class of object is defined one can make assignments such as the following
Step4: This creates myeconomy as an instance of an Economy object. Several attributes are set to default values. We can easily find out what these are. For instance this is an economy with a production function with the $\gamma$ paramter which measures the extent of homogeneity or diseconomies of scale. To find out what value it's set to we just type
Step5: And we can easily change it to another value
Step6: I've written a method to get a convenient summary of all important parameters
Step7: For example, the number of households is $N=20$, total land endowment and labor force are both set to 100 and $s$ is an array of skills across each of those 20 households. For the moment every household has the same skill level.
Step8: I've written a simple object method which calculates a competitive equilibrium allocation for the economy. I've set it up to take as required inputs a vector of economywide endownments and an array summarizing the skill distribution. It returns a 'named-tuple' with the vector of equilibrium factor prices, and the optimal land and labor allocations on each farm.
Step9: The competitive equilibrium land rent and labor wage that clears this market are given by
Step10: The optimal allocation of land and labor to each of the five farm skill groups is given by
Step11: Since every one of the 5 farm household has the same skill and there are 100 units of land and labor the optimal allocation is for every farm to operate with 20 units of land and 20 units of labor.
Step12: Let's recalculate the competive equilibrium
Step13: Since there is more skill in this economy we would expect real wages and rents to rise, as indeed they do
Step14: Since farm household zero now has a considerably higher level of skill compare to other households it's optimal scale of operations increases and, since it bids on the market for the same factors driving up the costs to other farms, the optimal scal of operations of the other farms shrinks.
Step15: Equilibria with credit market distortions
Step16: What follows is a simple description of my (2007) 'Latifundia Economics' paper, which I used to try to explain the Latifundia-Minifundia complex of bimodal distribution of farm sizes that existed for long periods of time in many parts of historical Latin America and other parts of the world.
Step17: Now create an economy and change a few parameters from their default. The $\gamma$ parameter which measures the degree of homogeneity in production is purposefully set very high.. We are very close to assuming constant returns to scale (but setting it just below 1 is needed to make sure the size-distribution remains determinate).
Step18: The Economy has default resource endowment
Step19: So as expected the efficient (competitive) resource allocation has every farmer operating a farm of equal unit size. The smallhold_eq method gives the competitive (efficient) allocation for a subeconomy with XE = (TE,LE) endowments and skill distribution s. It returns a named tuple with equilibrium facor prices and X=[T,L] allocations.
Step20: Thus far we've said nothing of the ownership of land or labor. Let's assume every household has one unit of labor but that the 'landlord' class (which WLOG we index to be the last skill group s[-1]) owns fraction $\theta$ of the land. Assuming a uniform distribution of households across skills every skill group has Lbar/N households, and so there are that many landlords who act as a single cartel.
Step21: In the example above the 'landlord' farmer was in every way the same as the other farmers, the only difference being he had more land ownership (fraction $\theta$ of the total). He had the same skill parameter as every other farmer. In an efficient equilibrium his operational farm size should therefore be the same size as every other farmer. The plot above shows how monopoly power (which rises with $\theta$ allows the monopolist to distort the economy -- he withholds land from the lease market to drive up the land rental rate and, since this deprives the 'fringe' of farmers of land, lowers the marginal product of labor on each smallholder farm, increasing the smallholder labor supply to the market which pushes down the labor wage. Hence we see how at higher levels of $\theta$ the landlord expands the size of his estate and establish monopsony power wages.
Step22: Let's recalculate the new equilibria under the different scenarios.
Step23: Given that he is more skilled than before the landlord's efficient scale of production has increased. This lowers the cost of being big. Interestingly at low $\theta$ this leads the landlord to hire less land and labor ...
|
15,309 | <ASSISTANT_TASK:>
Python Code:
def grafica_KMeans(X1,X2,Y,clf):
X1=X[:, 0]
X2=X[:, 1]
# Plot the decision boundary. For that, we will assign a color to each point in the mesh
x_min, x_max = X1.min()-1, X1.max() +1
y_min, y_max = X2.min()-1, X2.max() +1
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 200), np.linspace(y_min, y_max, 200))
# get colors for the model's predictions
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
# points
plt.scatter(X1,X2, c=Y,cmap=plt.cm.Paired)
# centers
mu = clf.cluster_centers_
plt.scatter(mu[:,0], mu[:,1], s=100, c=np.unique(Y),cmap=plt.cm.Paired,lw=2)
# data limits
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
return
def grafica_SVC(X1,X2,clf):
plt.axis('tight')
x_min = X1.min()
x_max = X1.max()
y_min = X2.min()
y_max = X2.max()
XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
Z = clf.decision_function(np.c_[XX.ravel(), YY.ravel()])
# Put the result into a color plot
Z = Z.reshape(XX.shape)
plt.pcolormesh(XX, YY, Z > 0, cmap=plt.cm.Paired)
plt.contour(XX, YY, Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
levels=[-.5, 0, .5])
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
return
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
%matplotlib inline
# Estos son nuevos!
import sklearn as sk
from sklearn import preprocessing
from sklearn import cluster, svm
import sklearn.cross_validation as cv
df=pd.read_csv('files/ejemplo.csv')
df.head()
# Transformar X
X = df[['X','Y']].values
# Transformar Y
encoder = preprocessing.LabelEncoder()
encoder.fit(["A","B"])
Y = encoder.transform(df['Tipo'])
print('Forma de X: ',X.shape)
print('Forma de Y: ',Y.shape)
print(Y)
plt.scatter(X[:,0], X[:,1],c=Y,cmap=plt.cm.Paired)
plt.title('Datos Ejemplo')
plt.xlabel('X')
plt.ylabel('Y')
plt.show()
clf = cluster.KMeans(2)
clf.fit(X,Y)
y_pred = clf.predict(X)
# don't worry about this formula
error= 1 - np.sum(np.abs(y_pred - Y))/float(len(Y))
score = clf.score(X,Y)
print("Precision es ",error)
print("Score es ",score)
X1 = X[:,0]
X2 = X[:,1]
# get the centers and visualize them
mu = clf.cluster_centers_
plt.scatter(mu[:,0], mu[:,1], s=100, c=np.unique(y_pred),cmap=plt.cm.Paired)
# predicted points
plt.scatter(X1,X2, c=y_pred,cmap=plt.cm.Paired)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
grafica_KMeans(X1,X2,Y,clf)
# model
clf = cluster.KMeans(2)
# Split the data
X_train,X_test, Y_train, Y_test= cv.train_test_split(X,Y,test_size=0.90)
# train and predict
clf.fit(X_train,Y_train)
y_pred = clf.predict(X_test)
# accuracy and score
error= 1 - np.sum(np.abs(y_pred - Y_test))/float(len(Y_test))
score = clf.score(X_test,Y_test)
print("Precision es ",error)
print("Score es ",score)
X1 = X[:,0]
X2 = X[:,1]
grafica_KMeans(X1,X2,Y,clf)
# model
clf = cluster.KMeans(2)
# split
X_train,X_test, Y_train, Y_test= cv.train_test_split(X,Y,test_size=0.90)
# train and predict
clf.fit(X_train,Y_train)
y_pred = clf.predict(X_test)
# results
error= 1 - np.sum(np.abs(y_pred - Y_test))/float(len(Y_test))
resultados = cv.cross_val_score(clf,X,Y, cv=10)
print("Precision es ",error)
print("Score es ",score)
X1 = X[:,0]
X2 = X[:,1]
grafica_KMeans(X1,X2,Y,clf)
clf = svm.SVC(kernel='linear')
clf.fit(X,Y)
y_pred = clf.predict(X)
score = clf.score(X,Y)
print("Score es ",score)
ejeX = X[:, 0]
ejeY = X[:, 1]
plt.scatter(ejeX,ejeY, c=Y, zorder=10, cmap=plt.cm.Paired)
grafica_SVC(ejeX,ejeY,clf)
plt.title('Grafica de decision - Lineal')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
clf = svm.SVC(kernel='poly',degree=3)
clf.fit(X,Y)
y_pred = clf.predict(X)
score = clf.score(X,Y)
print("Score es ",score)
ejeX = X[:, 0]
ejeY = X[:, 1]
plt.scatter(ejeX,ejeY, c=Y, zorder=10, cmap=plt.cm.Paired)
grafica_SVC(ejeX,ejeY,clf)
plt.title('Grafica de decision - Poly')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
clf = svm.SVC(kernel='rbf')
clf.fit(X,Y)
y_pred = clf.predict(X)
score = clf.score(X,Y)
print("Score es ",score)
ejeX = X[:, 0]
ejeY = X[:, 1]
plt.scatter(ejeX,ejeY, c=Y, zorder=10, cmap=plt.cm.Paired)
grafica_SVC(ejeX,ejeY,clf)
plt.title('Grafica de decision - RBF')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
ks =[ 2,5,8,10,20,40,60,80,100]
error=[]
for k in ks:
kmeans = cluster.KMeans(k)
kmeans.fit(X)
error.append(kmeans.score(X,Y))
plt.plot(ks,error,'-o')
plt.xlabel('K-centros')
plt.ylabel('Error')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Libraries
Step2: The big picture!
Step3: The data as X and Y
Step4: Let's visualize! (Just to recall what the data look like)
Step5: Algorithm #1
Step6: Now the ML comes in!
Step7: Visualize the centers
Step8: Classification regions
Step9: Why does it drop to 0? What does that mean?
Step10: New classification regions
Step11: One last tool
Step12: Once again
Step13: Polynomial model
Step14: RBF model
Step15: Activity! (a cross-validated kernel comparison is sketched after this row)
|
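A short sketch for the closing activity (Step 15 of the row above): compare the three SVC kernels used earlier with 10-fold cross-validation. This is an added illustration; it re-uses `X`, `Y`, `svm` and `cv` from that row's code:
for kernel in ['linear', 'poly', 'rbf']:
    clf = svm.SVC(kernel=kernel, degree=3)
    scores = cv.cross_val_score(clf, X, Y, cv=10)
    print(kernel, scores.mean(), scores.std())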
15,310 | <ASSISTANT_TASK:>
Python Code:
import graphlab
import numpy as np
def polynomial_sframe(feature, degree):
poly_dataset = graphlab.SFrame()
poly_dataset['power_1'] = feature
if degree > 1:
for power in range(2, degree + 1):
column = 'power_' + str(power)
poly_dataset[column] = feature**power
features = poly_dataset.column_names()
#poly_dataset['constant'] = 1
return (poly_dataset, features)
import matplotlib.pyplot as plt
%matplotlib inline
graphlab.product_key.set_product_key('8F02-101A-D0EE-7D97-76C4-662D-954D-9287')
graphlab.product_key.get_product_key()
sales = graphlab.SFrame('kc_house_data.gl/')
sales = sales.sort(['sqft_living','price'])
l2_small_penalty = 1.5e-5
import graphlab.numpy
poly_data, features = polynomial_sframe(sales['sqft_living'],15)
print(poly_data['power_1'].mean())
poly_data['price'] = sales['price']
model = graphlab.linear_regression.create(poly_data, target='price', features=features,
validation_set=None, l2_penalty=l2_small_penalty)
print(model['coefficients']['value'])
plt.plot(poly_data['power_1'], sales['price'], '.',
poly_data['power_1'], model.predict(poly_data), '-')
(semi_split1, semi_split2) = sales.random_split(.5,seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
power_1_coefs = []
poly_data, features = polynomial_sframe(set_1['sqft_living'],15)
poly_data['price'] = set_1['price']
model = graphlab.linear_regression.create(poly_data, target='price', features=features,
validation_set=None, l2_penalty=l2_small_penalty)
plt.plot(poly_data['power_1'], poly_data['price'], '.',
poly_data['power_1'], model.predict(poly_data), '-')
print(model['coefficients'])
power_1_coefs.append(model['coefficients']['value'][1])
poly_data, features = polynomial_sframe(set_2['sqft_living'],15)
poly_data['price'] = set_2['price']
model = graphlab.linear_regression.create(poly_data, target='price', features=features,
validation_set=None, l2_penalty=l2_small_penalty)
plt.plot(poly_data['power_1'], poly_data['price'], '.',
poly_data['power_1'], model.predict(poly_data), '-')
print(model['coefficients'])
power_1_coefs.append(model['coefficients']['value'][1])
poly_data, features = polynomial_sframe(set_3['sqft_living'],15)
poly_data['price'] = set_3['price']
model = graphlab.linear_regression.create(poly_data, target='price', features=features,
validation_set=None, l2_penalty=l2_small_penalty)
plt.plot(poly_data['power_1'], poly_data['price'], '.',
poly_data['power_1'], model.predict(poly_data), '-')
print(model['coefficients'])
power_1_coefs.append(model['coefficients']['value'][1])
poly_data, features = polynomial_sframe(set_4['sqft_living'],15)
poly_data['price'] = set_4['price']
model = graphlab.linear_regression.create(poly_data, target='price', features=features,
validation_set=None, l2_penalty=l2_small_penalty)
plt.plot(poly_data['power_1'], poly_data['price'], '.',
poly_data['power_1'], model.predict(poly_data), '-')
print(model['coefficients'])
power_1_coefs.append(model['coefficients']['value'][1])
print(power_1_coefs)
print(power_1_coefs.index(min(power_1_coefs)))
print(power_1_coefs.index(max(power_1_coefs)))
power_1_coefs = []
l2_penalty=1e5
poly_data, features = polynomial_sframe(set_1['sqft_living'],15)
poly_data['price'] = set_1['price']
model = graphlab.linear_regression.create(poly_data, target='price', features=features,
validation_set=None, l2_penalty=l2_penalty)
plt.plot(poly_data['power_1'], poly_data['price'], '.',
poly_data['power_1'], model.predict(poly_data), '-')
print(model['coefficients'])
power_1_coefs.append(model['coefficients']['value'][1])
poly_data, features = polynomial_sframe(set_2['sqft_living'],15)
poly_data['price'] = set_2['price']
model = graphlab.linear_regression.create(poly_data, target='price', features=features,
validation_set=None, l2_penalty=l2_penalty)
plt.plot(poly_data['power_1'], poly_data['price'], '.',
poly_data['power_1'], model.predict(poly_data), '-')
print(model['coefficients'])
power_1_coefs.append(model['coefficients']['value'][1])
poly_data, features = polynomial_sframe(set_3['sqft_living'],15)
poly_data['price'] = set_3['price']
model = graphlab.linear_regression.create(poly_data, target='price', features=features,
validation_set=None, l2_penalty=l2_penalty)
plt.plot(poly_data['power_1'], poly_data['price'], '.',
poly_data['power_1'], model.predict(poly_data), '-')
print(model['coefficients'])
power_1_coefs.append(model['coefficients']['value'][1])
poly_data, features = polynomial_sframe(set_4['sqft_living'],15)
poly_data['price'] = set_4['price']
model = graphlab.linear_regression.create(poly_data, target='price', features=features,
validation_set=None, l2_penalty=l2_penalty)
plt.plot(poly_data['power_1'], poly_data['price'], '.',
poly_data['power_1'], model.predict(poly_data), '-')
print(model['coefficients'])
power_1_coefs.append(model['coefficients']['value'][1])
print(power_1_coefs)
print(power_1_coefs.index(min(power_1_coefs)))
print(power_1_coefs.index(max(power_1_coefs)))
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = graphlab.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
print i, (start, end)
start4 = 5818
end4 = 7757
print(end4 - start4)
validation4 = train_valid_shuffled[start4:end4+1]
print int(round(validation4['price'].mean(), 0))
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print first_two.append(last_two)
train4 = train_valid_shuffled[0:start4].append(train_valid_shuffled[end4+1:])
print int(round(train4['price'].mean(), 0))
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
validation_errors_ = []
for i_ in range(k):
n_ = len(data)
start_ = (n_*i_)/k
end_ = (n_*(i_+1))/k-1
        validation_set_ = data[start_:end_ + 1]
training_set_ = data[0:start_].append(data[end_ + 1:n_])
model_ = graphlab.linear_regression.create(training_set_, features=features_list,
target=output_name, l2_penalty=l2_penalty,
verbose=False)
predictons_ = model_.predict(validation_set_[features_list])
errors_ = predictons_ - validation_set_[output_name]
validation_errors_.append(errors_.to_numpy().T.dot(errors_))
return np.array(validation_errors_).mean()
print(np.logspace(1, 7, num=13))
import sys
validation_errors = []
lowest_error = sys.float_info.max
penalty = 0
data_poly, features = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
data_poly['price'] = train_valid_shuffled['price']
for l2_penalty in np.logspace(1, 7, num=13):
average_validation_error = k_fold_cross_validation(10, l2_penalty, data_poly, 'price', features)
print('Penalty: %s, error: %s' % (l2_penalty, average_validation_error))
if average_validation_error < lowest_error:
lowest_error = average_validation_error
penalty = l2_penalty
validation_errors.append(average_validation_error)
print('Lowest error is: %s for penalty: %s' % (lowest_error, penalty))
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
plt.plot(np.logspace(1, 7, num=13), validation_errors, '-')
plt.xscale('log')
print(validation_errors)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Polynomial regression, revisited
Step2: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
Step3: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
Step4: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5
Step5: Note
Step6: QUIZ QUESTION
Step7: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Step8: The four curves should differ from one another a lot, as should the coefficients you learned.
Step9: Ridge regression comes to rescue
Step10: These curves should vary a lot less, now that you applied a high degree of regularization.
Step11: Selecting an L2 penalty via cross-validation
Step12: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
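(Worked check: the hard-coded start4 = 5818 and end4 = 7757 used in the code correspond to segment 3 of a training set with n = 19396 rows, since (19396*3)/10 = 5818 and (19396*4)/10 - 1 = 7757; the row count itself is an assumption, not stated in this record.)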
Step13: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
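The slicing cell referred to in this step is not included in the code column above; presumably it is just the bracket slice below (a sketch, not taken from the record itself).
# Hypothetical reconstruction of the cell this step describes:
# rows 0 through 9 of the shuffled training data (index 10 itself is excluded).
print(train_valid_shuffled[0:10])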
Step14: To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
Step15: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0
Step16: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
Step17: To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
Step18: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
Step19: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following
Step20: QUIZ QUESTIONS
Step21: Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.
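The retraining cell itself is not shown in the code column above; a minimal sketch of what it could look like, assuming the best value found by the loop is still stored in penalty and reusing the train_valid/test split made earlier:
# Sketch (not from the original record): retrain on all of the training data with
# the chosen L2 penalty, then measure RSS on the held-out test split.
poly_train, features = polynomial_sframe(train_valid['sqft_living'], 15)
poly_train['price'] = train_valid['price']
final_model = graphlab.linear_regression.create(poly_train, target='price', features=features,
                                                validation_set=None, l2_penalty=penalty)
poly_test, _ = polynomial_sframe(test['sqft_living'], 15)
test_errors = final_model.predict(poly_test) - test['price']
print('Test RSS: %s' % (test_errors * test_errors).sum())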
|
15,311 | <ASSISTANT_TASK:>
Python Code:
# Import the tardis widgets module
import tardis.widgets as tw
# Create a Simulation object by running tardis
from tardis import run_tardis
sim = run_tardis('tardis_example.yml')
# Now use it to create a shell info widget
shell_info = tw.shell_info_from_simulation(sim)
# Call display method of shell_info
shell_info.display()
# Use a tardis simulation saved as HDF file to create shell info widget
shell_info = tw.shell_info_from_hdf('/tmp/sim_example.hdf')
# Display it
shell_info.display()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Shell Info
Step2: You can interact with the widget produced in output above (which may not be visible) like this
|
15,312 | <ASSISTANT_TASK:>
Python Code:
# import software packages
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
inline_rc = dict(mpl.rcParams)
# enter column labels and raw data (with same # of values)
table1 = pd.DataFrame.from_items([
('column1', [0,1,2,3]),
('column2', [0,2,4,6])
])
# display data table
table1
# Uncomment the next line to make your graphs look like xkcd.com
#plt.xkcd()
# to make normal-looking plots again execute:
#mpl.rcParams.update(inline_rc)
# set variables = data['column label']
x = table1['column1']
y = table1['column2']
# this makes a scatterplot of the data
# plt.scatter(x values, y values)
plt.scatter(x, y)
plt.title("?")
plt.xlabel("?")
plt.ylabel("?")
plt.autoscale(tight=True)
# calculate a trendline equation
# np.polyfit( x values, y values, polynomial order)
trend1 = np.polyfit(x, y, 1)
# plot trendline
# plt.plot(x values, y values, other parameters)
plt.plot(x, np.poly1d(trend1)(x), label='trendline')
plt.legend(loc='upper left')
# display the trendline's coefficients (slope, y-int)
trend1
# create a new empty column
table1['column3'] = ''
table1
# np.diff() calculates the difference between a value and the one after it
z = np.diff(x)
# fill column 3 with values from the formula (z) above:
table1['column3'] = pd.DataFrame.from_items([('', z)])
# display the data table
table1
# NaN and Inf values cause problems with math and plotting.
# Make a new table using only selected rows and columns
table2 = table1.loc[0:2,['column1', 'column2', 'column3']] # this keeps rows 0 through 2
table2
# set new variables to plot
x2 = table2['column1']
y2 = table2['column3']
# code for plotting table2 can go here
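# A possible version of that plot (a sketch mirroring the cells above, not part of
# the original notebook): scatter the new columns and add a first-order trendline.
plt.scatter(x2, y2)
trend2 = np.polyfit(x2, y2, 1)
plt.plot(x2, np.poly1d(trend2)(x2), label='trendline')
plt.title("?")
plt.xlabel("?")
plt.ylabel("?")
plt.legend(loc='upper left')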
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Raw data
Step2: Plotting
Step3: Do calculations with the data
Step4: Here's an example of calculating the difference between the values in column 2
Step5: Now you can copy the code above to plot your new data table.
|
15,313 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(shape=(None,real_dim),dtype=tf.float32,name='input_real')
inputs_z = tf.placeholder(shape=(None,z_dim),dtype=tf.float32,name='input_z')
return inputs_real, inputs_z
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('generator',reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(h1 * alpha, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator',reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x , n_units, activation = None)
# Leaky ReLU
h1 = tf.maximum(h1 * alpha, h1)
logits = tf.layers.dense(h1, 1, activation = None)
out = tf.sigmoid(logits)
return out, logits
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, g_hidden_size, reuse = False, alpha=alpha)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real, d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, d_hidden_size, reuse = True, alpha=alpha)
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake) ))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [variable for variable in t_vars if 'generator' in variable.name]
d_vars = [variable for variable in t_vars if 'discriminator' in variable.name]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Inputs
Step2: Generator network
Step3: Discriminator
Step4: Hyperparameters
Step5: Build network
Step6: Discriminator and Generator Losses
Step7: Optimizers
Step8: Training
Step9: Training loss
Step10: Generator samples from training
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
|
15,314 | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
x = np.arange(-np.pi,np.pi,0.01) # Create an array of x values from -pi to pi with 0.01 interval
y = np.sin(x) # Apply sin function on all x
plt.plot(x,y)
plt.plot(y)
x = np.arange(0,10,1) # x = 1,2,3,4,5...
y = x*x # Squared x
plt.plot(x,y,'bo') # plot x and y using blue circle markers
plt.plot(x,y,'r+') # plot x and y using red plusses
x = np.arange(-np.pi,np.pi,0.001)
plt.plot(x,np.sin(x))
plt.title('y = sin(x)') # title
plt.xlabel('x (radians)') # x-axis label
plt.ylabel('y') # y-axis label
# To plot the axis label in LaTex, we can run
from matplotlib import rc
## For sans-serif font:
rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})
rc('text', usetex=True)
## for Palatino and other serif fonts use:
#rc('font',**{'family':'serif','serif':['Palatino']})
plt.plot(x,np.sin(x))
plt.title(r'T = sin($\theta$)') # title, the `r` in front of the string means raw string
plt.xlabel(r'$\theta$ (radians)') # x-axis label, LaTex synatx should be encoded with $$
plt.ylabel('T') # y-axis label
x1 = np.linspace(0.0, 5.0)
x2 = np.linspace(0.0, 2.0)
y1 = np.cos(2 * np.pi * x1) * np.exp(-x1)
y2 = np.cos(2 * np.pi * x2)
plt.subplot(2, 1, 1)
plt.plot(x1, y1, '.-')
plt.title('Plot 2 graph at the same time')
plt.ylabel('Amplitude (Damped)')
plt.subplot(2, 1, 2)
plt.plot(x2, y2, '.-')
plt.xlabel('time (s)')
plt.ylabel('Amplitude (Undamped)')
plt.plot(x,np.sin(x))
plt.savefig('plot.pdf')
plt.savefig('plot.png')
# To load image into this Jupyter notebook
from IPython.display import Image
Image("plot.png")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: File Reading
Step2: Scatter Plots
Step3: Plot properties
Step4: Multiple plots
Step5: Save figure
|
15,315 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
# Read CSV data into df
df = pd.read_csv('./theAwesome_EnsModel.csv')
# delete id column no need
df.drop('Id',axis=1,inplace=True)
df.head()
# Learn the unique values in diagnosis column
print("Classification labels: ", df.Species.unique() )
# Mapping labels to numerical labels?
df.Species = df.Species.map({'Iris-setosa':0, 'Iris-versicolor':1, 'Iris-virginica':2})
df.info()
df.describe()
df.Species.value_counts().plot(kind='pie')
plt.show()
df.Species.value_counts(df.Species.unique()[0])
# Splitting train and test data
# .7 and .3
import numpy as np
msk = np.random.rand(len(df)) < 0.7
train_df = df[msk]
test_df = df[~msk]
train_df.describe()
test_df.describe()
df.describe()
print(train_df["Species"].value_counts(train_df["Species"].unique()[0]))
print(len(train_df))
train_df.head()
print(test_df["Species"].value_counts(test_df["Species"].unique()[0]))
print(len(test_df))
test_df.head()
# I am going to apply min-max scaling for my data.
from sklearn import preprocessing
# Fitting the minmax scaled version for training data
minmax_scale = preprocessing.MinMaxScaler().fit(train_df.iloc[:, :4])
# Now actually scale train and test data
train_df.iloc[:, :4] = minmax_scale.transform(train_df.iloc[:, :4])
test_df.iloc[:, :4] = minmax_scale.transform(test_df.iloc[:, :4])
train_df.describe()
test_df.describe()
# Input and Output
inp_train = train_df.iloc[:, :4]
out_train = train_df["Species"]
inp_test = test_df.iloc[:, :4]
out_test = test_df["Species"]
from sklearn.cross_validation import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
# Building a RandomForest with 10 estimators
clf = RandomForestClassifier(n_estimators=10)
# clf = clf.fit(inp_train, out_train)
# 10-fold Cross validation
print("Average of 10 cross validation: ", np.mean(cross_val_score(clf, inp_train, out_train, cv=10)))
param_grid = {
'n_estimators': [5, 10, 15, 20],
'max_depth': [2, 5, 7, 9]
}
grid_clf = GridSearchCV(clf, param_grid, cv=10)
grid_clf.fit(inp_train, out_train)
print(grid_clf)
print(grid_clf.best_estimator_)
print(grid_clf.best_params_)
print(grid_clf.best_score_)
# Using optimized parameters to train my data
# Optimized parameters:
clf = RandomForestClassifier(max_depth=5, n_estimators=5, random_state=None)
clf.fit(inp_train, out_train)
print("Average of 5-fold cross validation of the optimized estimator: ",
np.mean(cross_val_score(clf, inp_train, out_train, cv=5)))
# importing libraries for plotting
# Importing library for confusion matrix
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns
import itertools
sns.set(style='darkgrid')
# train prediction for train data
out_train_pred = clf.predict(inp_train)
# Compute confusion matrix for prediction of train
cm = confusion_matrix(out_train, out_train_pred)
print(cm)
sns.heatmap(cm, center=True)
plt.title('Confusion matrix')
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
# test prediction for test data
out_test_pred = clf.predict(inp_test)
# Compute confusion matrix for prediction of train
cm = confusion_matrix(out_test, out_test_pred)
print(cm)
sns.heatmap(cm, center=True)
plt.title('Confusion matrix')
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
# I would like to use ROC
# Area under ROC Curve (or AUC for short) is
# a performance metric for binary classification problems.
from sklearn.metrics import roc_curve
# ROC curve for train data
fpr,tpr,thresholds = roc_curve(out_train, out_train_pred,pos_label=2)
# plot the curve
plt.plot(fpr, tpr, label="Train Data")
# ROC curve for test data
fpr, tpr, thresholds = roc_curve(out_test, out_test_pred, pos_label=2)
# Plotting the curves
plt.plot(fpr, tpr, label="Test Data")
plt.xlim([-0.05,1.05])
plt.ylim([-0.05,1.05])
plt.title('ROC curve for Iris classifier')
plt.xlabel('False positive rate (1-specificity)')
plt.ylabel('True positive rate (sensitivity)')
plt.legend(loc=4,)
plt.show()
# Using default values for RandomForeset Classifier
# Building a RandomForest
clf = RandomForestClassifier()
clf = clf.fit(inp_train, out_train)
# train prediction for train data
out_train_pred = clf.predict(inp_train)
# Compute confusion matrix for prediction of train
cm = confusion_matrix(out_train, out_train_pred)
print(cm)
# test prediction for test data
out_test_pred = clf.predict(inp_test)
# Compute confusion matrix for prediction of train
cm = confusion_matrix(out_test, out_test_pred)
print(cm)
# Model trained with default values
# ROC curve for train data
fpr,tpr,thresholds = roc_curve(out_train, out_train_pred,pos_label=2)
# plot the curve
plt.plot(fpr, tpr, label="Train Data")
# ROC curve for test data
fpr, tpr, thresholds = roc_curve(out_test, out_test_pred, pos_label=2)
# Plotting the curves
plt.plot(fpr, tpr, label="Test Data")
plt.xlim([-0.05,1.05])
plt.ylim([-0.05,1.05])
plt.title('ROC curve for Iris classifier')
plt.xlabel('False positive rate (1-specificity)')
plt.ylabel('True positive rate (sensitivity)')
plt.legend(loc=4,)
plt.show()
# Let's check f1 score on our classification
from sklearn.metrics import f1_score
print("f1_score: ", f1_score(out_test, out_test_pred, average=None))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Step 2
Step3: They are pretty similar to full data when I print the descriptive stats for test and train data.
Step 3
Step4: My target/label column is nominal categorical data. This data will be used for multi-class classification. When splitting the test and train data, I was careful to keep a similar ratio of the labels in each...
Step 4
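One way to actually enforce that similar label ratio (the random mask above only approximates it) is a stratified split; a minimal sketch using scikit-learn, not taken from the original notebook:
# Hypothetical alternative to the random-mask split: stratify on the label so the
# train and test sets keep roughly the same class proportions.
from sklearn.model_selection import train_test_split
train_df, test_df = train_test_split(df, test_size=0.3, stratify=df.Species, random_state=42)
print(train_df.Species.value_counts(normalize=True))
print(test_df.Species.value_counts(normalize=True))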
Step5: Step 5
Step6: Step 6
Step7: Step 7
Step8: Step 8
Step9: The train data performs slightly better than the test data; however, I believe it is overfitting: as you can see in the train-data ROC curve, it starts very high and hits 1.0 sooner.
Step 9
Step10: As you can see from the graph, the train data is clearly overfitted, which means the random forest with default parameters did not work well for this dataset. The test data still does reasonably well, but the model is not really working with default parameters.
Step 10
|
15,316 | <ASSISTANT_TASK:>
Python Code:
sample_input_arr = np.array([5,10,2,4,3,2,1],dtype=np.int32)
f = np.savetxt("sample_input.txt", sample_input_arr, fmt='%i',delimiter="\n")
N_H = 10 # <= 10000
C_max = 5 # <= 1000
c_low = 0
c_high = 10
filename = "sample_input_1.txt"
homes = np.random.randint(low=c_low,high=c_high, size=N_H)
input_arr = np.insert(homes,0,C_max,axis=0)
input_arr = np.insert(input_arr,0,N_H,axis=0)
np.savetxt(filename, input_arr , fmt='%i', delimiter="\n")
N_H = 500 # <= 10000
C_max = 1000 # <= 1000
c_low = 0
c_high = 1000
filename = "sample_input_2.txt"
homes = np.random.randint(low=c_low,high=c_high, size=N_H)
input_arr = np.insert(homes,0,C_max,axis=0)
input_arr = np.insert(input_arr,0,N_H,axis=0)
np.savetxt(filename, input_arr , fmt='%i', delimiter="\n")
N_H = 8000 # <= 10000
C_max = 800 # <= 1000
c_low = 0
c_high = 1000
filename = "sample_input_3.txt"
homes = np.random.randint(low=c_low,high=c_high, size=N_H)
input_arr1 = np.insert(homes,0,C_max,axis=0)
input_arr1 = np.insert(input_arr1,0,N_H,axis=0)
np.savetxt(filename, input_arr1 , fmt='%i', delimiter="\n")
N_H = 8000 # <= 10000
C_max = 800 # <= 1000
c_low = 0
c_high = 100
filename = "sample_input_4.txt"
homes = np.random.randint(low=c_low,high=c_high, size=N_H)
input_arr2 = np.insert(homes,0,C_max,axis=0)
input_arr2 = np.insert(input_arr2,0,N_H,axis=0)
np.savetxt(filename, input_arr2 , fmt='%i', delimiter="\n")
case0_input_arr = np.arange(10,16)
case0_input_arr = np.insert( case0_input_arr,0,case0_input_arr.size-1,axis=0)
np.savetxt("case0_input.txt", case0_input_arr , fmt='%i', delimiter="\n")
def main_loop_draft(input_arr):
N_H = input_arr[0]
C_max = input_arr[1]
homes_arr = input_arr[2:]
result = np.zeros(3,dtype=int)
for h_0 in range(1, N_H +1):
for h_1 in range(h_0, N_H +1):
c_sum = homes_arr[h_0-1:h_1].sum() # be aware of 0-based counting, i.e. counting from 0, of Python and C/C++
if (c_sum > C_max):
break
elif (c_sum == C_max): # obtained (abs.) max. pieces of candy allowed
if (c_sum > result[2]):
result[0] = h_0
result[1] = h_1
result[2] = c_sum
break;
elif (c_sum < C_max):
if (c_sum > result[2]):
result[0] = h_0
result[1] = h_1
result[2] = c_sum
if (result[2] == C_max): # obtained both (abs.) max pieces of candy allowed and lowest numbered 1st home
break
return result
def main_loop(input_arr):
N_H = input_arr[0]
C_max = input_arr[1]
homes_arr = input_arr[2:]
result = np.zeros(3,dtype=int)
for h_0 in range(1, N_H +1):
c_sum = homes_arr[h_0-1] # be aware of 0-based counting, i.e. counting from 0, of Python and C/C++
if (c_sum > C_max):
continue
elif (c_sum == C_max): # obtained (abs.) max. pieces of candy allowed
if (c_sum > result[2]):
result[0] = h_0
result[1] = h_0
result[2] = c_sum
break
elif (c_sum < C_max):
if (c_sum > result[2]):
result[0] = h_0
result[1] = h_0
result[2] = c_sum
for h_1 in range(h_0+1, N_H +1):
c_sum += homes_arr[h_1-1]
if (c_sum > C_max):
break
elif (c_sum == C_max): # obtained (abs.) max. pieces of candy allowed
if (c_sum > result[2]):
result[0] = h_0
result[1] = h_1
result[2] = c_sum
break
elif (c_sum < C_max):
if (c_sum > result[2]):
result[0] = h_0
result[1] = h_1
result[2] = c_sum
if (result[2] == C_max): # obtained both (abs.) max pieces of candy allowed and lowest numbered 1st home
break
return result
result_example = main_loop(input_arr)
print(result_example, input_arr[2:][result_example[0]-1:result_example[1]] )
%time result_example1 = main_loop(input_arr1)
print(result_example1, input_arr1[2:][result_example1[0]-1:result_example1[1]] )
%time result_example2 = main_loop(input_arr2)
print(result_example2, input_arr2[2:][result_example2[0]-1:result_example2[1]] )
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make an artificial case where we expect not to go to any of the houses (too much candy from each home, as each home gives more than the maximum allotted pieces of candy)
Step2: Mathematical explanation and Python version as sanity check
Step3: We really shouldn't need to do the summation each time. Indeed, notice the relationship
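Presumably the relationship meant here is the running-sum recurrence sum(c[h_0..h_1]) = sum(c[h_0..h_1-1]) + c[h_1]: this is exactly why main_loop can update c_sum += homes_arr[h_1-1] inside the inner loop instead of re-summing the whole slice the way main_loop_draft does.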
|
15,317 | <ASSISTANT_TASK:>
Python Code:
import SimpleITK as sitk
# Utility method that either downloads data from the network or
# if already downloaded returns the file name for reading from disk (cached data).
from downloaddata import fetch_data as fdata
# Always write output to a separate directory, we don't want to pollute the source directory.
OUTPUT_DIR = 'Output'
import registration_callbacks as rc
import registration_utilities as ru
%matplotlib inline
fixed_image = sitk.ReadImage(fdata("training_001_ct.mha"), sitk.sitkFloat32)
moving_image = sitk.ReadImage(fdata("training_001_mr_T1.mha"), sitk.sitkFloat32)
fixed_fiducial_points, moving_fiducial_points = ru.load_RIRE_ground_truth(fdata("ct_T1.standard"))
# Estimate the reference_transform defined by the RIRE fiducials and check that the FRE makes sense (low)
R, t = ru.absolute_orientation_m(fixed_fiducial_points, moving_fiducial_points)
reference_transform = sitk.Euler3DTransform()
reference_transform.SetMatrix(R.flatten())
reference_transform.SetTranslation(t)
reference_errors_mean, reference_errors_std, _, reference_errors_max,_ = ru.registration_errors(reference_transform, fixed_fiducial_points, moving_fiducial_points)
print('Reference data errors (FRE) in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(reference_errors_mean, reference_errors_std, reference_errors_max))
# Generate a reference dataset from the reference transformation
# (corresponding points in the fixed and moving images).
fixed_points = ru.generate_random_pointset(image=fixed_image, num_points=100)
moving_points = [reference_transform.TransformPoint(p) for p in fixed_points]
# Compute the TRE prior to registration.
pre_errors_mean, pre_errors_std, pre_errors_min, pre_errors_max, _ = ru.registration_errors(sitk.Euler3DTransform(), fixed_points, moving_points, display_errors = True)
print('Before registration, errors (TRE) in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(pre_errors_mean, pre_errors_std, pre_errors_max))
initial_transform = sitk.CenteredTransformInitializer(sitk.Cast(fixed_image,moving_image.GetPixelIDValue()),
moving_image,
sitk.Euler3DTransform(),
sitk.CenteredTransformInitializerFilter.GEOMETRY)
initial_errors_mean, initial_errors_std, initial_errors_min, initial_errors_max, _ = ru.registration_errors(initial_transform, fixed_points, moving_points, min_err=pre_errors_min, max_err=pre_errors_max, display_errors=True)
print('After initialization, errors (TRE) in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(initial_errors_mean, initial_errors_std, initial_errors_max))
#%%timeit -r1 -n1
# to time this cell uncomment the line above
#the arguments to the timeit magic specify that this cell should only be run once. running it multiple
#times to get performance statistics is also possible, but takes time. if you want to analyze the accuracy
#results from multiple runs you will have to modify the code to save them instead of just printing them out.
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkNearestNeighbor) #2. Replace with sitkLinear
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100) #1. Increase to 1000
registration_method.SetOptimizerScalesFromPhysicalShift()
# Don't optimize in-place, we would like to run this cell multiple times
registration_method.SetInitialTransform(initial_transform, inPlace=False)
# Add callbacks which will display the similarity measure value and the reference data during the registration process
registration_method.AddCommand(sitk.sitkStartEvent, rc.metric_and_reference_start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, rc.metric_and_reference_end_plot)
registration_method.AddCommand(sitk.sitkIterationEvent, lambda: rc.metric_and_reference_plot_values(registration_method, fixed_points, moving_points))
final_transform_single_scale = registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32),
sitk.Cast(moving_image, sitk.sitkFloat32))
print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
final_errors_mean, final_errors_std, _, final_errors_max,_ = ru.registration_errors(final_transform_single_scale, fixed_points, moving_points, min_err=initial_errors_min, max_err=initial_errors_max, display_errors=True)
print('After registration, errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(final_errors_mean, final_errors_std, final_errors_max))
final_errors_mean, final_errors_std, _, final_errors_max,_ = ru.registration_errors(final_transform_single_scale, fixed_points, moving_points, display_errors=True)
%%timeit -r1 -n1
#the arguments to the timeit magic specify that this cell should only be run once. running it multiple
#times to get performance statistics is also possible, but takes time. if you want to analyze the accuracy
#results from multiple runs you will have to modify the code to save them instead of just printing them out.
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.1)
registration_method.SetInterpolator(sitk.sitkLinear) #2. Replace with sitkLinear
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
registration_method.SetOptimizerScalesFromPhysicalShift()
# Don't optimize in-place, we would like to run this cell multiple times
registration_method.SetInitialTransform(initial_transform, inPlace=False)
# Add callbacks which will display the similarity measure value and the reference data during the registration process
registration_method.AddCommand(sitk.sitkStartEvent, rc.metric_and_reference_start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, rc.metric_and_reference_end_plot)
registration_method.AddCommand(sitk.sitkIterationEvent, lambda: rc.metric_and_reference_plot_values(registration_method, fixed_points, moving_points))
registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2,1,0])
registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()
final_transform = registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32),
sitk.Cast(moving_image, sitk.sitkFloat32))
print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
final_errors_mean, final_errors_std, _, final_errors_max,_ = ru.registration_errors(final_transform, fixed_points, moving_points, True)
print('After registration, errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(final_errors_mean, final_errors_std, final_errors_max))
# Threshold the original fixed, CT, image at 0HU (water), resulting in a binary labeled [0,1] image.
roi = fixed_image> 0
# Our ROI consists of all voxels with a value of 1, now get the bounding box surrounding the head.
label_shape_analysis = sitk.LabelShapeStatisticsImageFilter()
label_shape_analysis.SetBackgroundValue(0)
label_shape_analysis.Execute(roi)
bounding_box = label_shape_analysis.GetBoundingBox(1)
# Bounding box in physical space.
sub_image_min = fixed_image.TransformIndexToPhysicalPoint((bounding_box[0],bounding_box[1], bounding_box[2]))
sub_image_max = fixed_image.TransformIndexToPhysicalPoint((bounding_box[0]+bounding_box[3]-1,
bounding_box[1]+bounding_box[4]-1,
bounding_box[2]+bounding_box[5]-1))
# Only look at the points inside our bounding box.
sub_fixed_points = []
sub_moving_points = []
for fixed_pnt, moving_pnt in zip(fixed_points, moving_points):
if sub_image_min[0]<=fixed_pnt[0]<=sub_image_max[0] and \
sub_image_min[1]<=fixed_pnt[1]<=sub_image_max[1] and \
sub_image_min[2]<=fixed_pnt[2]<=sub_image_max[2] :
sub_fixed_points.append(fixed_pnt)
sub_moving_points.append(moving_pnt)
final_errors_mean, final_errors_std, _, final_errors_max,_ = ru.registration_errors(final_transform, sub_fixed_points, sub_moving_points, True)
print('After registration, errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(final_errors_mean, final_errors_std, final_errors_max))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the RIRE data and generate a larger point set as a reference
Step2: Initial Alignment
Step3: Registration
Step4: In some cases visual comparison of the registration errors using the same scale is not informative, as seen above [all points are grey/black]. We therefore set the color scale to the min-max error range found in the current data and not the range from the previous stage.
Step5: Now using the built-in multi-resolution framework
Step6: Sufficient accuracy <u>inside</u> the ROI
|
15,318 | <ASSISTANT_TASK:>
Python Code:
##### Import all the necessary Libraries
from yellowbrick.text import TSNEVisualizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from yellowbrick.text import UMAPVisualizer
from yellowbrick.datasets import load_hobbies
corpus = load_hobbies()
def visualize(dim_reduction,encoding,corpus,labels = True,alpha=0.7,metric=None):
if 'tfidf' in encoding.lower():
encode = TfidfVectorizer()
if 'count' in encoding.lower():
encode = CountVectorizer()
docs = encode.fit_transform(corpus.data)
if labels is True:
labels = corpus.target
else:
labels = None
if 'umap' in dim_reduction.lower():
if metric is None:
viz = UMAPVisualizer()
else:
viz = UMAPVisualizer(metric=metric)
if 't-sne' in dim_reduction.lower():
viz = TSNEVisualizer(alpha = alpha)
viz.fit(docs,labels)
viz.show()
visualize('t-sne','tfidf',corpus)
visualize('t-sne','count',corpus,alpha = 0.5)
visualize('t-sne','tfidf',corpus,labels =False)
visualize('umap','tfidf',corpus)
visualize('umap','tfidf',corpus,labels = False)
visualize('umap','count',corpus,metric= 'cosine')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: UMAP vs T-SNE
Step2: Writing a Function to quickly Visualize Corpus
Step3: Quickly Comparing Plots by Controlling
|
15,319 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-2', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
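# Illustrative sketch only -- each property id above is answered by calling
# DOC.set_value() right after its DOC.set_id() line, using one of the listed
# valid choices. The concrete values below are hypothetical examples, not an
# actual model description:
#
#     DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
#     DOC.set_value("irradiance")
#     DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
#     DOC.set_value("Solar forcing applied following the CMIP6 protocol.")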
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
15,320 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
from IPython.html import widgets
def print_sum(a, b):
c = a + b
print c
interact(print_sum, a = (-10., 10., 0.1), b = (-8, 8, 2))
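# Note: interact() builds widgets from the abbreviations above -- a float tuple such as
# (-10., 10., 0.1) is read as (min, max, step) for a float slider, and an int tuple such
# as (-8, 8, 2) as (min, max, step) for an integer slider. An assumed, roughly equivalent
# explicit form (widget class names may differ by IPython version) would be:
# interact(print_sum,
#          a=widgets.FloatSlider(min=-10., max=10., step=0.1),
#          b=widgets.IntSlider(min=-8, max=8, step=2))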
assert True # leave this for grading the print_sum exercise
def print_string(s, length=False):
    print s
if length == True:
print len(s)
interact(print_string, s = 'Hello World!', length = True)
assert True # leave this for grading the print_string exercise
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Interact basics
Step2: Use the interact function to interact with the print_sum function.
Step3: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.
Step4: Use the interact function to interact with the print_string function.
|
15,321 | <ASSISTANT_TASK:>
Python Code:
import io, os, sys, types
import nbformat
from IPython import get_ipython
from IPython.core.interactiveshell import InteractiveShell
def find_notebook(fullname, path=None):
find a notebook, given its fully qualified name and an optional path
This turns "foo.bar" into "foo/bar.ipynb"
and tries turning "Foo_Bar" into "Foo Bar" if Foo_Bar
does not exist.
name = fullname.rsplit('.', 1)[-1]
if not path:
path = ['']
for d in path:
nb_path = os.path.join(d, name + ".ipynb")
if os.path.isfile(nb_path):
return nb_path
# let import Notebook_Name find "Notebook Name.ipynb"
nb_path = nb_path.replace("_", " ")
if os.path.isfile(nb_path):
return nb_path
class NotebookLoader(object):
Module Loader for IPython Notebooks
def __init__(self, path=None):
self.shell = InteractiveShell.instance()
self.path = path
def load_module(self, fullname):
import a notebook as a module
path = find_notebook(fullname, self.path)
print ("importing notebook from %s" % path)
# load the notebook object
nb = nbformat.read(path, as_version=4)
# create the module and add it to sys.modules
# if name in sys.modules:
# return sys.modules[name]
mod = types.ModuleType(fullname)
mod.__file__ = path
mod.__loader__ = self
mod.__dict__['get_ipython'] = get_ipython
sys.modules[fullname] = mod
# extra work to ensure that magics that would affect the user_ns
# actually affect the notebook module's ns
save_user_ns = self.shell.user_ns
self.shell.user_ns = mod.__dict__
try:
for cell in nb.cells:
if cell.cell_type == 'code':
# transform the input to executable Python
code = self.shell.input_transformer_manager.transform_cell(cell.source)
                    # run the code in the module
exec(code, mod.__dict__)
finally:
self.shell.user_ns = save_user_ns
return mod
class NotebookFinder(object):
Module finder that locates IPython Notebooks
def __init__(self):
self.loaders = {}
def find_module(self, fullname, path=None):
nb_path = find_notebook(fullname, path)
if not nb_path:
return
key = path
if path:
# lists aren't hashable
key = os.path.sep.join(path)
if key not in self.loaders:
self.loaders[key] = NotebookLoader(path)
return self.loaders[key]
sys.meta_path.append(NotebookFinder())
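# With the finder registered, every subsequent `import name` will also look for a
# matching `name.ipynb` on disk, because Python consults each finder in sys.meta_path
# in order. A quick sanity check that the hook is in place:
# print([type(f).__name__ for f in sys.meta_path])  # expect 'NotebookFinder' in the list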
ls nbpackage
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter
from IPython.display import display, HTML
formatter = HtmlFormatter()
lexer = PythonLexer()
# publish the CSS for pygments highlighting
display(HTML("""
<style type='text/css'>
%s
</style>
""" % formatter.get_style_defs()
))
def show_notebook(fname):
display a short summary of the cells of a notebook
nb = nbformat.read(fname, as_version=4)
html = []
for cell in nb.cells:
html.append("<h4>%s cell</h4>" % cell.cell_type)
if cell.cell_type == 'code':
html.append(highlight(cell.source, lexer, formatter))
else:
html.append("<pre>%s</pre>" % cell.source)
display(HTML('\n'.join(html)))
show_notebook(os.path.join("nbpackage", "mynotebook.ipynb"))
from nbpackage import mynotebook
mynotebook.foo()
mynotebook.has_ip_syntax()
ls nbpackage/nbs
show_notebook(os.path.join("nbpackage", "nbs", "other.ipynb"))
from nbpackage.nbs import other
other.bar(5)
import shutil
from IPython.paths import get_ipython_package_dir
utils = os.path.join(get_ipython_package_dir(), 'utils')
shutil.copy(os.path.join("nbpackage", "mynotebook.ipynb"),
os.path.join(utils, "inside_ipython.ipynb")
)
from IPython.utils import inside_ipython
inside_ipython.whatsmyname()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Import hooks typically take the form of two objects
Step5: Notebook Loader
Step7: The Module Finder
Step8: Register the hook
Step9: After this point, my notebooks should be importable.
Step12: So I should be able to import nbimp.mynotebook.
Step13: So my notebook has a heading cell and some code cells,
Step14: Hooray, it imported! Does it work?
Step15: Hooray again!
Step16: Notebooks in packages
Step17: Note that the __init__.py is necessary for nb to be considered a package,
Step18: So now we have importable notebooks, from both the local directory and inside packages.
Step19: and import the notebook from IPython.utils
|
15,322 | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text = [[source_vocab_to_int.get(wordItor,source_vocab_to_int['<UNK>'])
for wordItor in lineItor.split(' ')]
for lineItor in source_text.split('\n')]
target_id_text = [[target_vocab_to_int.get(wordItor,target_vocab_to_int['<UNK>'])
for wordItor in lineItor.split(' ')]+[target_vocab_to_int['<EOS>'] ]
for lineItor in target_text.split('\n')]
return (source_id_text, target_id_text)
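# Illustrative example with hypothetical vocabularies (not the project data): given
# source_vocab_to_int = {'<UNK>': 0, 'new': 1, 'jersey': 2} and
# target_vocab_to_int = {'<UNK>': 0, '<EOS>': 1, 'new': 2, 'jersey': 3},
# text_to_ids('new jersey', 'new jersey', ...) returns ([[1, 2]], [[2, 3, 1]]) --
# the <EOS> id is appended to every target sentence but not to the source.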
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32,[None,None],name='input')
targets = tf.placeholder(tf.int32,[None,None])
learning_rate = tf.placeholder(tf.float32)
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
target_sequence_length = tf.placeholder(tf.int32,(None,),name='target_sequence_length')
max_target_sequence_length = tf.reduce_max(target_sequence_length,name='max_target_len')
source_sequence_length = tf.placeholder(tf.int32,(None,),name='source_sequence_length')
return inputs, targets, learning_rate, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
tail_cut_target_data = tf.strided_slice(target_data,[0,0],[batch_size,-1],[1,1])
decoder_input = tf.concat([tf.fill([batch_size,1],target_vocab_to_int['<GO>']), tail_cut_target_data],1)
return decoder_input
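# Illustrative example with hypothetical ids: for a target batch row [12, 7, 9, 1]
# (1 being <EOS>) and a <GO> id of 4, the strided slice drops the last column and the
# concat prepends <GO>, so the decoder is fed [4, 12, 7, 9] -- the targets shifted
# right by one time step.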
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
embed_inputs = tf.contrib.layers.embed_sequence(rnn_inputs,source_vocab_size,encoding_embedding_size)
def get_rnncell_with_drop():
rnncell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
return tf.contrib.rnn.DropoutWrapper(rnncell,output_keep_prob=keep_prob)
rnn_net = tf.contrib.rnn.MultiRNNCell([get_rnncell_with_drop() for _ in range(num_layers)])
outputs, states = tf.nn.dynamic_rnn(rnn_net,embed_inputs, sequence_length=source_sequence_length,dtype=tf.float32)
return outputs, states
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# TODO: Implement Function
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper,
encoder_state, output_layer)
training_decoder_output,_ = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True,
maximum_iterations=max_summary_length)
return training_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# TODO: Implement Function
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size])
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,start_tokens,
end_of_sequence_id)
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,inference_helper,
encoder_state,output_layer)
inference_decoder_output,_ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,impute_finished=True,
maximum_iterations=max_target_sequence_length)
return inference_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
def getdec_cell():
rnncell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
return tf.contrib.rnn.DropoutWrapper(rnncell,output_keep_prob=keep_prob)
dec_cell = tf.contrib.rnn.MultiRNNCell([getdec_cell() for _ in range(num_layers)])
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
with tf.variable_scope("decode"):
training_decoder_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length,
max_target_sequence_length, output_layer, keep_prob)
with tf.variable_scope("decode", reuse=True):
inference_decoder_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'],max_target_sequence_length,
target_vocab_size, output_layer, batch_size, keep_prob)
return training_decoder_output, inference_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
_, enc_state = encoding_layer(input_data,rnn_size, num_layers,keep_prob,
source_sequence_length, source_vocab_size, enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
training_decoder_output, inference_decoder_output = decoding_layer(dec_input,enc_state,target_sequence_length,
max_target_sentence_length, rnn_size,num_layers,target_vocab_to_int,
target_vocab_size, batch_size, keep_prob,dec_embedding_size)
return training_decoder_output, inference_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 200
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 3
# Embedding Size
encoding_embedding_size = 300
decoding_embedding_size = 300
# Learning Rate
learning_rate = 0.002
# Dropout Keep Probability
keep_probability = 0.8
display_step = 100
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
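# Note: both arrays are zero-padded to a common width before comparison, so this is a
# per-token accuracy over word ids in which padded positions also count toward the mean.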
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
sentence_lowcase = sentence.lower()
return [vocab_to_int.get(word,vocab_to_int['<UNK>']) for word in sentence_lowcase.split(' ')]
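# Illustrative example with a hypothetical vocabulary: given
# vocab_to_int = {'<UNK>': 2, 'he': 10, 'saw': 11},
# sentence_to_seq('He saw Paris', vocab_to_int) lowercases the input and returns
# [10, 11, 2] -- words outside the vocabulary map to the <UNK> id.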
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoder Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step40: Batch and pad the source and target sequences
Step43: Train
Step45: Save Parameters
Step47: Checkpoint
Step50: Sentence to Sequence
Step52: Translate
|
15,323 | <ASSISTANT_TASK:>
Python Code:
%%html
<div >
<iframe type="text/html" width="336" height="550" frameborder="0" allowfullscreen style="max-width:100%;float: left" src="https://lesen.amazon.de/kp/card?asin=B003UV8TC2&preview=inline&linkCode=kpe&ref_=cm_sw_r_kb_dp_MamPyb1NWT7A8" ></iframe>
</div>
<div >
<iframe type="text/html" width="336" height="550" frameborder="0" allowfullscreen style="max-width:100%;float: right" src="https://lesen.amazon.de/kp/card?asin=B00J97FFRI&preview=inline&linkCode=kpe&ref_=cm_sw_r_kb_dp_PfmPyb5ZV4AP8" ></iframe>
</div>
fig = plt.figure()
plt.close(fig)
def oneCell(r,d,max_x):
clear_output(wait=True)
t_f = 10
dt = 0.1
def int_(t,x):
dev = x*(r-d)
if max_x != None:
dev *= (1-x/max_x)
#print("dev",dev,x)
return dev
integ = integrate.ode(int_)
y = np.zeros(int(t_f/dt)+1)
x = np.zeros(int(t_f/dt)+1)
xdot = np.zeros(int(t_f/dt)+1)
integ.set_integrator("dopri5").set_initial_value(0.01)
i = 0
while integ.successful() and integ.t<t_f:
y[i] = integ.y
x[i] = integ.t
xdot[i] = int_(integ.t,y[i])
integ.integrate(integ.t+dt)
i=i+1
fig.clf()
ax = fig.gca()
ax.plot(x,y,label="population size")
ax.set_ylim(-0.6,3.0)
ax.set_xlabel("time")
ax.set_ylabel("population size")
ax2 = ax.twinx()
with sns.color_palette("PuBuGn_d",n_colors=1):
ax2.plot(x, xdot, label="derivative",linestyle='--')
ax2.set_ylabel('$\dot{x}$', rotation=0)
ax2.grid('off')
ax.legend(loc=2)
ax2.legend()
ax2.set_ylim(0.,0.25)
display(fig)
return
items = [
widgets.FloatSlider(
value=1.5,
min=0,
max=2.0,
step=0.01,
description="r",layout=widgets.Layout(width='100%', height='80px'))
,widgets.FloatSlider(
value=.0,
min=0,
max=2.0,
step=0.01,
description="d",layout=widgets.Layout(width='100%', height='80px'))]
max_k = [widgets.FloatSlider(
value=1.5,
min=1,
max=2.0,
step=0.01,
description="K",layout=widgets.Layout(width='100%', height='80px')),
widgets.Checkbox(
value=False,
description="enforce K",layout=widgets.Layout(width='100%', height='80px'))]
def call_back_r(v):
if max_k[1].value is False:
return oneCell(items[0].value,items[1].value,None)
else:
return oneCell(items[0].value,items[1].value,max_k[0].value)
box_h = widgets.VBox(items,layout=widgets.Layout(width='100%', height='80px'))
box_h_max = widgets.VBox(items,layout=widgets.Layout(width='100%', height='80px'))
box = widgets.VBox([box_h]+[widgets.HBox(max_k)])
items[0].observe(call_back_r,names='value')
items[1].observe(call_back_r,names='value')
max_k[0].observe(call_back_r,names='value')
max_k[1].observe(call_back_r,names='value')
display(box)
fig = plt.figure()
plt.close(fig)
def twoCell(init_,rate):
clear_output(wait=True)
t_f = 10
dt = 0.1
update_rate = np.asarray(rate)
def int_(t,x):
dev = x.T.dot(update_rate)-x
return dev
integ = integrate.ode(int_)
y = np.zeros((int(t_f/dt)+1,update_rate.shape[0]))
x = np.zeros((int(t_f/dt)+1,update_rate.shape[0]))
xdot = np.zeros((int(t_f/dt)+1,update_rate.shape[0]))
integ.set_integrator("dopri5").set_initial_value(np.asarray(init_))
i = 0
while integ.successful() and integ.t<t_f:
y[i,:] = integ.y
x[i,:] = integ.t
xdot[i,:] = int_(integ.t,y[i,:])
integ.integrate(integ.t+dt)
i=i+1
fig.clf()
ax = fig.gca()
with sns.color_palette("PuBuGn_d",n_colors=x.shape[1]):
for ind_ in range(x.shape[1]):
ax.plot(x[:,ind_], y[:,ind_], label="type "+str(ind_ +1))
ax.set_ylim(-0.1,1.1)
ax.set_xlabel("time")
ax.set_ylabel("population ratio")
ax2 = ax.twinx()
with sns.color_palette("PuBuGn_d",n_colors=x.shape[1]):
for ind_ in range(x.shape[1]):
ax2.plot(x[:,ind_], xdot[:,ind_], label="d type "+str(ind_ +1),linestyle='--')
ax2.set_ylabel('$\dot{x}$', rotation=0)
ax2.grid('off')
ax.legend(ncol=x.shape[1])
ax2.legend(loc=4,ncol=x.shape[1])
display(fig)
return
items_mute = [
widgets.IntText(
value=2,
min=2,
max=5.0,
description="r",layout=widgets.Layout(width='50%', height='80px'))
,widgets.Button(
description="submit")]
def updateplot(v,objects,status_label):
init = []
rates = []
for ind_,obj in enumerate(objects):
if ind_ < len(objects)-1:
init.append(obj[0].value)
else:
if sum(init)>1:
status_label.value = "Initial rates should sum to <1"
return
else:
status_label.value = ""
init.append(1-sum(init))
rate_ = []
for j in range(1,len(objects)):
rate_.append(obj[j].value)
if sum(rate_)>1:
status_label.value = "sum of mutation rates should sum to <1"
return
else:
status_label.value = ""
rate_.append(1-sum(rate_))
rates.append(rate_)
init = np.asarray(init)
rates = np.asarray(rates)
twoCell(init,rates)
return
def call_back_mute(count,objects,status_label,updateplot):
dsps = []
for i in range(count):
if i < count-1:
specie = [widgets.FloatSlider(
value=1.0/count,
min=0,
max=1.0,
step=0.01,
description="init "+str(i+1),layout=widgets.Layout(width='100%', height='80px'))]
else:
specie = [widgets.Label(layout=widgets.Layout(width='100%', height='80px'))]
for j in range(count-1):
wid = widgets.FloatSlider(
value=1 if j == i else 0,
min=0,
max=1.0,
step=0.01,
description="rate_"+str(i+1)+"_"+str(j+1),layout=widgets.Layout(width='100%', height='80px'))
wid.observe(updateplot,names='value')
specie.append(wid)
specie[0].observe(updateplot,names='value')
box_h = widgets.HBox(specie,layout=widgets.Layout(width='100%', height='80px'))
objects.append(specie)
dsps.append(box_h)
status_label = widgets.Label()
box_v = widgets.VBox(dsps+[status_label],layout=widgets.Layout(width='100%', height='80px'))
display(box_v)
updateplot("")
return objects
#items_mute[1].on_click(call_back_mute)
#box_h = widgets.HBox(items_mute,layout=widgets.Layout(width='100%', height='80px'))
#display(box_h)
objects = []
status_label = widgets.Label()
_ = call_back_mute(2,objects,status_label,lambda x:updateplot(x,objects,status_label))
objects_1 = []
status_label_1 = widgets.Label()
_ = call_back_mute(3,objects_1,status_label_1,lambda x:updateplot(x,objects_1,status_label_1))
fig = plt.figure()
plt.close(fig)
def genomeSequence(N,drich_alpha,point_mut):
np.random.seed(0)
clear_output(wait=True)
if point_mut is not None:
L,u = point_mut
t_f = 10
dt = 0.1
x_ = np.random.uniform(size=(N))
x_ = x_/x_.sum()
f = np.random.lognormal(size=(N))
if drich_alpha is not None:
Q = np.zeros((N,N))
for j in range(N):
Q[j,:] = np.random.dirichlet(np.roll(np.logspace(1,drich_alpha+1,N)[::-1], j), 1)
elif point_mut is not None:
Q = np.zeros((N,N))
for j in range(N):
for i in range(N):
Q[j,i] = (u**(np.abs(j-i)))*((1-u)**(L-np.abs(j-i)))
else:
print("One of the two arguments should not be None")
return
def int_(t,x):
x = np.asarray(x).reshape((x.shape[0],1))
dev = np.zeros(x.shape[0])
mean = f.dot(x)
for i in range(x.shape[0]):
for j in range(x.shape[0]):
dev[i] += f[j]*Q[j,i]*x[j]
dev[i] -= mean*x[i]
return dev
integ = integrate.ode(int_)
integ.set_integrator("dopri5").set_initial_value(np.asarray(x_))
y = np.zeros((int(t_f/dt)+1,x_.shape[0]))
x = np.zeros((int(t_f/dt)+1,x_.shape[0]))
xdot = np.zeros((int(t_f/dt)+1,x_.shape[0]))
i = 0
while integ.successful() and integ.t<t_f:
y[i,:] = integ.y
x[i,:] = integ.t
xdot[i,:] = int_(integ.t,y[i,:])
integ.integrate(integ.t+dt)
i=i+1
fig.clf()
ax = fig.gca()
with sns.color_palette("PuBuGn_d",n_colors=2):
for ind_ in range(x.shape[1]):
ax.plot(x[:,ind_], y[:,ind_], label=("$f_%d$: %.2f" % (ind_ +1,f[ind_])))
ax.set_ylim(-0.1,1.1)
ax.set_xlabel("time")
ax.set_ylabel("Quasi specie")
ax2 = ax.twinx()
with sns.color_palette("PuBuGn_d",n_colors=2):
ax2.plot(np.arange(0,t_f+dt,dt),y.dot(f), label="fitness ",linestyle='-.')
ax2.set_ylabel('$f$', rotation=0)
ax2.set_ylim(0,3)
ax2.grid('off')
ax.legend(ncol=min(4,x.shape[1]))
ax2.legend(loc=4)
display(fig)
return
items_gene = [
widgets.IntSlider(
value=2,
min=2,
max=6,
description="# Genomes",layout=widgets.Layout(width='80%', height='300px')),
widgets.IntSlider(
value=10,
min=7,
max=15,
description="Max Length",layout=widgets.Layout(width='80%', height='230px')),
widgets.FloatSlider(
value=0.1,
min=0.01,
max=0.3,
step=0.05,
description="u",layout=widgets.Layout(width='80%', height='100px'))]
def _GeneCall(v):
return genomeSequence(items_gene[0].value,None,(items_gene[1].value,items_gene[2].value))
box_h = widgets.VBox(items_gene,layout=widgets.Layout(width='100%', height='80px'))
items_gene[0].observe(_GeneCall,names='value')
items_gene[1].observe(_GeneCall,names='value')
items_gene[2].observe(_GeneCall,names='value')
display(box_h)
_GeneCall(0)
fig = plt.figure()
plt.close(fig)
def genomeSequenceQ(f_0,u,L):
np.random.seed(0)
clear_output(wait=True)
t_f = 10
dt = 0.1
x_ = np.random.uniform(size=2)
x_ = x_/x_.sum()
f = np.array([f_0,1])
q = (1-u)**L
def int_(t,x):
mean = f[0]*x[0]+f[1]*x[1]
dev = np.zeros(x.shape[0])
dev[0] = x[0]*(f[0]*q - mean)
dev[1] = x[0]*f[0]*(1-q)+x[1] - mean*x[1]
return dev
integ = integrate.ode(int_)
integ.set_integrator("dopri5").set_initial_value(np.asarray(x_))
y = np.zeros((int(t_f/dt)+1,x_.shape[0]))
x = np.zeros((int(t_f/dt)+1,x_.shape[0]))
xdot = np.zeros((int(t_f/dt)+1,x_.shape[0]))
i = 0
while integ.successful() and integ.t<t_f:
y[i,:] = integ.y
x[i,:] = integ.t
xdot[i,:] = int_(integ.t,y[i,:])
integ.integrate(integ.t+dt)
i=i+1
fig.clf()
ax = fig.gca()
with sns.color_palette("PuBuGn_d",n_colors=2):
for ind_ in range(x.shape[1]):
ax.plot(x[:,ind_], y[:,ind_], label=("$f_%d$: %.2f" % (ind_ ,f[ind_])))
ax.set_ylim(-0.1,1.1)
ax.set_xlabel("time")
ax.set_ylabel("Quasi specie")
ax2 = ax.twinx()
with sns.color_palette("PuBuGn_d",n_colors=2):
ax2.plot(np.arange(0,t_f+dt,dt),y.dot(f), label="fitness ",linestyle='-.')
ax2.set_ylabel('$f$', rotation=0)
ax2.set_ylim(0,10)
ax2.grid('off')
ax.legend(ncol=min(4,x.shape[1]))
ax2.legend(loc=4)
display(fig)
return q
items_geneQ = [
widgets.IntSlider(
value=5,
min=2,
max=12,
description="Genome Length",layout=widgets.Layout(width='50%', height='80px')),
widgets.FloatSlider(
value=0.05,
min=0.01,
max=0.8,
step = 0.05,
description="mutatation rate",layout=widgets.Layout(width='50%', height='80px')),
widgets.FloatSlider(
value=1,
min=0.0,
max=40,
step=0.05,
description="max_f",layout=widgets.Layout(width='50%', height='80px'))]
def _GeneCallQ(v):
q_ = genomeSequenceQ(items_geneQ[2].value,items_geneQ[1].value,items_geneQ[0].value)
label.value= "f_0 q = %.2f" % (q_*items_geneQ[2].value)
return
box_h = widgets.VBox(items_geneQ,layout=widgets.Layout(width='100%', height='120px'))
label = widgets.Label()
box_v = widgets.VBox([box_h,label])
items_geneQ[0].observe(_GeneCallQ,names='value')
items_geneQ[1].observe(_GeneCallQ,names='value')
items_geneQ[2].observe(_GeneCallQ,names='value')
display(box_v)
_GeneCallQ(0)
%%html
<center><img height="100%" width="100%" src="./Nature-coop/mutation_rates.png"/>
</center>
fig = plt.figure()
plt.close(fig)
def evolutionaryGame(x_,f,labels = None):
np.random.seed(0)
clear_output(wait=True)
t_f = 10
dt = 0.1
x_ = np.asarray(x_)
x_ = np.atleast_2d(x_).T
f = np.asarray(f)
def int_(t,x):
mean = x.T.dot(f.dot(x))
dev = x*(f.dot(x)-mean)
return dev
integ = integrate.ode(int_)
integ.set_integrator("dopri5").set_initial_value(np.asarray(x_))
y = np.zeros((int(t_f/dt)+1,x_.shape[0]))
x = np.zeros((int(t_f/dt)+1,x_.shape[0]))
xdot = np.zeros((int(t_f/dt)+1,x_.shape[0]))
i = 0
while integ.successful() and integ.t<t_f:
y[i,:] = integ.y[:,0]
x[i,:] = integ.t
xdot[i,:] = int_(integ.t,y[i,:])
integ.integrate(integ.t+dt)
i=i+1
fig.clf()
ax = fig.gca()
with sns.color_palette("PuBuGn_d",n_colors=2):
for ind_ in range(x.shape[1]):
ax.plot(x[:,ind_], y[:,ind_], label="Type: %d" % (ind_+1) if labels is None else labels[ind_])
ax.set_ylim(-0.1,1.1)
ax.set_xlabel("time")
ax.set_ylabel("Quasi specie")
ax.legend(ncol=min(4,x.shape[1]))
display(fig)
items_strat = [
widgets.IntText(
value=2,
min=2,
max=5.0,
description="r",layout=widgets.Layout(width='50%', height='80px'))
,widgets.Button(
description="submit")]
def _EvolutionaryGames(v):
init = []
payoff = []
for ind_,obj in enumerate(objects_strat):
if ind_ < len(objects_strat)-1:
init.append(obj[0].value)
else:
if sum(init)>1:
status_labelstrat.value = "Initial rates should sum to <1"
return
else:
status_labelstrat.value = ""
init.append(1-sum(init))
rate_ = []
for j in range(0,len(objects_strat)):
rate_.append(obj[j+1].value)
payoff.append(rate_)
init = np.asarray(init)
payoff = np.asarray(payoff)
if len(objects_strat)==3:
status_labelstrat.value = "Determinant: %.2f" % linalg.det(payoff)
return evolutionaryGame(init,payoff)
objects_strat = []
status_labelstrat = None
box_vstrat = None
def call_back_mute(v):
global box_vstrat, status_labelstrat
if box_vstrat is not None:
box_vstrat.close()
count = items_strat[0].value
if count <2:
return
dsps = []
objects_strat[:] = []
for i in range(count):
if i < count-1:
specie = [widgets.FloatSlider(
value=1.0/count,
min=0,
max=1.0,
step=0.01,
description="init "+str(i+1),layout=widgets.Layout(width='100%', height='80px'))]
else:
specie = [widgets.Label(layout=widgets.Layout(width='100%', height='80px'))]
for j in range(count):
wid = widgets.IntSlider(
value=1,
min=-1,
max=5.0,
step=1,
description=str(chr(96+i*count+j+1)),layout=widgets.Layout(width='100%', height='80px'))
wid.observe(_EvolutionaryGames,names='value')
specie.append(wid)
specie[0].observe(_EvolutionaryGames,names='value')
box_h = widgets.HBox(specie,layout=widgets.Layout(width='100%', height='80px'))
objects_strat.append(specie)
dsps.append(box_h)
status_labelstrat = widgets.Label()
box_vstrat = widgets.VBox(dsps+[status_labelstrat],layout=widgets.Layout(width='100%', height='80px'))
display(box_vstrat)
_EvolutionaryGames("")
items_strat[1].on_click(call_back_mute)
box_h = widgets.HBox(items_strat,layout=widgets.Layout(width='100%', height='80px'))
display(box_h)
R = 3
S = 0
T = 5
P = 1
payoff = [[R,S],[T,P]]
evolutionaryGame([0.6,0.4],payoff,["Cooperate","Defect"])
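# These payoffs satisfy the classic Prisoner's Dilemma ordering T > R > P > S
# (5 > 3 > 1 > 0), so defection strictly dominates cooperation and the replicator
# dynamics above drive the cooperator share to zero.
assert T > R > P > S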
def _EvolutionaryGamesProb(v):
R = 3
S = 0
T = 5
P = 1
m_ = prob_tomorrow.value
payoff = [[R*m_,S+(m_-1)*P],[T+(m_-1)*P,m_*P]]
return evolutionaryGame([0.99,0.01],payoff,["GRIM","ALLD"])
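# Sketch of the intuition behind the payoff matrix above: GRIM against GRIM earns R*m_,
# while ALLD against GRIM earns T + (m_-1)*P, so mutual cooperation is stable once
# R*m_ > T + (m_-1)*P, i.e. m_ > (T-P)/(R-P) = 2 for these payoffs -- moving the m_
# slider across 2 should flip which strategy takes over.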
prob_tomorrow = widgets.FloatSlider(
value=1,
min=0,
max=10.0,
description="m_",layout=widgets.Layout(width='100%', height='80px'))
prob_tomorrow.observe(_EvolutionaryGamesProb,names="value")
display(prob_tomorrow)
p_1 = widgets.FloatSlider(
value=0.5,
min=0,
max=1.0,
description="p_1",layout=widgets.Layout(width='100%', height='80px'))
q_1 = widgets.FloatSlider(
value=0.5,
min=0,
max=1.0,
description="q_1",layout=widgets.Layout(width='100%', height='80px'))
user_1 = widgets.HBox([p_1,q_1],layout=widgets.Layout(width='100%', height='80px'))
p_2 = widgets.FloatSlider(
value=0.5,
min=0,
max=1.0,
description="p_2",layout=widgets.Layout(width='100%', height='80px'))
q_2 = widgets.FloatSlider(
value=0.5,
min=0,
max=1.0,
description="q_2",layout=widgets.Layout(width='100%', height='80px'))
user_2 = widgets.HBox([p_2,q_2],layout=widgets.Layout(width='100%', height='80px'))
box_pq = widgets.VBox([user_1,user_2],layout=widgets.Layout(width='100%', height='80px'))
def compute_expected_dist(p_1_v,p_2_v,q_1_v,q_2_v):
v_ = np.array([[p_1_v*p_2_v, p_1_v*(1-p_2_v), (1-p_1_v)*p_2_v, (1-p_1_v)*(1-p_2_v)],
[q_1_v*p_2_v, q_1_v*(1-p_2_v), (1-q_1_v)*p_2_v, (1-q_1_v)*(1-p_2_v)],
[p_1_v*q_2_v, p_1_v*(1-q_2_v), (1-p_1_v)*q_2_v, (1-p_1_v)*(1-q_2_v)],
[q_1_v*q_2_v, q_1_v*(1-q_2_v), (1-q_1_v)*q_2_v, (1-q_1_v)*(1-q_2_v)]]).T
w,vl = linalg.eig(v_)
return vl[:,0].real
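# Assumption: this is meant to return the stationary distribution of the Markov chain
# over the outcomes (CC, CD, DC, DD), i.e. the eigenvector belonging to eigenvalue 1.
# eig() does not sort eigenvalues and does not normalise eigenvectors to sum to one,
# so a more defensive variant would be something like:
#     idx = np.argmin(np.abs(w - 1.0))
#     v = vl[:, idx].real
#     return v / v.sum()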
def _EvolutionaryGamesGen(v):
p_1_v = p_1.value
p_2_v = p_2.value
q_1_v = q_1.value
q_2_v = q_2.value
p_1_1 = compute_expected_dist(p_1_v,p_1_v,q_1_v,q_1_v)
p_1_2 = compute_expected_dist(p_1_v,p_2_v,q_1_v,q_2_v)
p_2_1 = compute_expected_dist(p_2_v,p_1_v,q_2_v,q_1_v)
p_2_2 = compute_expected_dist(p_2_v,p_2_v,q_2_v,q_2_v)
R = 3
S = 0
T = 5
P = 1
#print(p_1_1)
payoff = [[R*p_1_1[0]+S*p_1_1[1]+T*p_1_1[2]+P**p_1_1[3], R*p_1_2[0]+S*p_1_2[1]+T*p_1_2[2]+P**p_1_2[3]],
[R*p_2_1[0]+S*p_2_1[1]+T*p_2_1[2]+P**p_2_1[3], R*p_2_2[0]+S*p_2_2[1]+T*p_2_2[2]+P**p_2_2[3]]]
payoff = np.array(payoff)
return evolutionaryGame([0.4,0.6],payoff,['Policy 1','Policy 2'])
p_1.observe(_EvolutionaryGamesGen,names="value")
p_2.observe(_EvolutionaryGamesGen,names="value")
q_1.observe(_EvolutionaryGamesGen,names="value")
q_2.observe(_EvolutionaryGamesGen,names="value")
display(box_pq)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Evolution
Step2: Selection-Mutation
Step3: Multiple species.
Step4: Genomes are Sequences
Step5: Fitness Landscape
Step6: Evolutionary Games
Step7: Prisoners Dilemma
Step8: Direct Reciprocity vs. Always Defect.
Step9: Reactive strategies
|
15,324 | <ASSISTANT_TASK:>
Python Code:
import os
import sys
import numpy
# Path for TubeTK libs and bin
#Values takend from TubeTK launcher
sys.path.append("C:/src/TubeTK_Python_ITK/TubeTK-build/lib/")
sys.path.append("C:/src/TubeTK_Python_ITK/TubeTK-build/lib/Release")
# Setting TubeTK Build Directory
TubeTK_BUILD_DIR=None
if 'TubeTK_BUILD_DIR' in os.environ:
TubeTK_BUILD_DIR = os.environ['TubeTK_BUILD_DIR']
else:
print('TubeTK_BUILD_DIR not found!')
print(' Set environment variable')
os.environ["TubeTK_BUILD_DIR"] = "C:/src/TubeTK_Python_ITK/TubeTK-build"
TubeTK_BUILD_DIR = os.environ["TubeTK_BUILD_DIR"]
#sys.exit( 1 )
if not os.path.exists(TubeTK_BUILD_DIR):
    print('TubeTK_BUILD_DIR set but directory not found!')
print(' TubeTK_BUILD_DIR = ' + TubeTK_BUILD_DIR )
sys.exit(1)
try:
import itk
except:
ITK_BUILD_DIR = None
if 'ITK_BUILD_DIR' in os.environ:
ITK_BUILD_DIR = os.environ['ITK_BUILD_DIR']
else:
print('ITK_BUILD_DIR not found!')
print(' Set environment variable')
os.environ["ITK_BUILD_DIR"] = "C:/src/TubeTK_Python_R/ITK-build"
ITK_BUILD_DIR = os.environ["ITK_BUILD_DIR"]
#sys.exit( 1 )
if not os.path.exists(ITK_BUILD_DIR):
        print('ITK_BUILD_DIR set but directory not found!')
        print(' ITK_BUILD_DIR = ' + ITK_BUILD_DIR )
sys.exit(1)
# Append ITK libs
sys.path.append("C:/src/TubeTK_Python_ITK/ITK-build/Wrapping/Generators/Python/Release")
sys.path.append("C:/src/TubeTK_Python_ITK/ITK-build/lib/Release")
sys.path.append("C:/src/TubeTK_Python_ITK/ITK-build/lib")
# Append TubeTK libs
sys.path.append("C:/src/TubeTK_Python_ITK/TubeTK-build/ITKModules/TubeTKITK-build/Wrapping/Generators/Python/Release")
import itk
from itk import TubeTKITK as itktube
Dimension = 3
PixelType = itk.UC
sampleTubeFileName = os.path.join(TubeTK_BUILD_DIR, 'MIDAS_Data\Branch-truth.tre')
templateImageFileName = os.path.join(TubeTK_BUILD_DIR, 'MIDAS_Data\Branch.n010.mha')
outputImageFileName = os.path.join(TubeTK_BUILD_DIR, 'Temporary\\testOutput.mha')
ImageType = itk.Image[PixelType, Dimension]
SpatialObjectType = itk.SpatialObject[Dimension]
TubeFileReaderType = itk.SpatialObjectReader[Dimension]
tubeFileReader = TubeFileReaderType.New()
tubeFileReader.SetFileName(sampleTubeFileName)
tubeFileReader.Update()
sampleSpatialObjectGroup = tubeFileReader.GetGroup()
ImageReaderType = itk.ImageFileReader[ImageType]
imageReader = ImageReaderType.New()
imageReader.SetFileName(templateImageFileName)
imageReader.Update()
image = imageReader.GetOutput()
# ITK Filter using ITK Spatial Object: OK
s2iType = itk.SpatialObjectToImageFilter[SpatialObjectType, ImageType]
s2i = s2iType.New()
s2i.SetInput(sampleSpatialObjectGroup)
s2i.Update()
#Save the output image
ImageWriterType = itk.ImageFileWriter[ImageType]
imageWriter = ImageWriterType.New()
imageWriter.SetInput(s2i.GetOutput())
imageWriter.SetFileName(outputImageFileName)
imageWriter.Update()
# TubeTK Filter using ITK Spatial Object: NOT OK
SpatialObject = SpatialObjectType.New()
#verify sampleSpatialObjectGroup
print (isinstance(sampleSpatialObjectGroup, SpatialObjectType))
TubesToImageFilterType = itktube.ConvertTubesToImage[Dimension, PixelType]
tubesToImageFilter = TubesToImageFilterType.New()
tubesToImageFilter.SetUseRadius(True)
tubesToImageFilter.SetTemplateImage(image)
#tubesToImageFilter.SetInput(sampleSpatialObjectGroup) # //Expected to work, but is not working
tubesToImageFilter.SetInput(SpatialObject) # //Expected to work, but is not working
tubesToImageFilter.Update()
# Another TubeTK Filter
FilterType = itktube.ComputeTubeFlyThroughImage[PixelType, Dimension]
Filter = FilterType.New()
Filter.SetInputImage(image)
Filter.SetTubeId(0)
Filter.SetInput(sampleSpatialObjectGroup) # //Expected to work, but is not working
Filter.Update()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialization
Step2: Reading the sample tube file and creating a sample spatial object group object
Step3: Reading the sample image file and creating an image object
Step4: Testing ITK filter using itk spatial object. There are two such filters
Step5: Testing TubeTK filter using itk Spatial Object
|
15,325 | <ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(data="img/mr-t.jpg")
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
seed = 1234; np.random.seed(seed)
import seaborn as sns
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from sklearn.cross_validation import train_test_split
from sklearn.linear_model import LogisticRegression
%matplotlib inline
# import data (from seaborn, bc it gives you a df with labels)
iris = sns.load_dataset("iris")
iris.tail()
# inspect
sns.pairplot(iris, hue='species')
# get train/test split (no preprocessing)
X = iris[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']].values
y = iris['species'].values
# take a 75/25 split
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, random_state=seed)
# verify array sizes
#[x.shape for x in [X_train, X_test, y_train, y_test]]
# fit default LR model
model = LogisticRegression()
model.fit(X_train, y_train)
# score on test (should be ~80-90%)
print("Accuracy = {:.2f}".format(model.score(X_test, y_test)))
# create a sample array with a few of each species from the original df
species_sample = iris.groupby(by='species').head(3)['species']
species_sample
# get a one-hot-encoded frame from the pandas method
pd.get_dummies(species_sample, prefix='ohe')
# encode the full y arrays
ohe_y_train = pd.get_dummies(y_train).values
ohe_y_test = pd.get_dummies(y_test).values
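# Note: calling pd.get_dummies() separately on y_train and y_test assumes every class
# occurs in both splits (true for this iris split); an alternative with an explicit,
# fixed column order is a categorical encoder such as Keras' to_categorical applied
# to integer labels.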
# create a new model
model = Sequential()
# add layers
# - the first hidden layer must specify the dimensions of the input layer (4x1, here)
# - this adds a 10-node, fully-connected layer following the input layer
model.add(Dense(10, input_dim=4))
# add an activation to the hidden layer
model.add(Activation('sigmoid'))
# add the output layer, and a softmax activation
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=["accuracy"])
# keras uses the same .fit() convention
model.fit(X_train, ohe_y_train, batch_size=1, nb_epoch=20, verbose=1)
loss, metrics = model.evaluate(X_test, ohe_y_test, verbose=0)
# score on test (should also be ~80-90%)
print("Accuracy = {:.2f}".format(metrics))
classes = model.predict_classes(X_test, verbose=0)
probs = model.predict_proba(X_test, verbose=0)
print('(class) [ probabilities ]')
print('-'*40)
for x in zip(classes, probs):
print('({}) {}'.format(x[0],x[1]))
np.random.seed(seed)
# instantiate the model
model = Sequential()
# hidden layer
model.add(Dense(10, input_shape=(4,)))
model.add(Activation('sigmoid'))
# output layer
model.add(Dense(3))
model.add(Activation('softmax'))
# set optimizer, loss fnc, and fit parameters
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=["accuracy"])
model.fit(X_train, ohe_y_train, batch_size=1, nb_epoch=20, verbose=0)
# score on test set
loss, metrics = model.evaluate(X_test, ohe_y_test, verbose=0)
print("Accuracy = {:.2f}".format(metrics))
np.random.seed(seed)
# move the activations into the *layer* definition
model = Sequential([
Dense(10, input_dim=4, activation='sigmoid'),
Dense(3, activation='softmax'),
])
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=["accuracy"])
model.fit(X_train, ohe_y_train, batch_size=1, nb_epoch=20, verbose=0)
loss, metrics = model.evaluate(X_test, ohe_y_test, verbose=0)
print("Accuracy = {:.2f}".format(metrics))
for layer in model.layers:
print('name: {}'.format(layer.name))
print('dims (in, out): ({}, {})'.format(layer.input_shape, layer.output_shape))
print('activation: {}'.format(layer.activation))
# nb: I believe the second weight array is the bias term
print('weight matrix: {}'.format(layer.get_weights()))
print()
from keras.datasets import mnist
# the data, shuffled and split between tran and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print("X_train original shape", X_train.shape)
print("y_train original shape", y_train.shape)
print("y_test original shape", y_test.shape)
plt.figure(figsize=(8,4))
for i in range(3):
plt.subplot(1,3,i+1)
plt.imshow(X_train[i], cmap='gray', interpolation='none')
plt.title("Label: {}".format(y_train[i]))
# unroll 2D pixel data into 1D vector
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
# convert from original range (0-255) to 0-1
X_train = X_train / X_train.max()
X_test = X_test / X_test.max()
# OHE the y arrays
ohe_y_train = pd.get_dummies(y_train).values
ohe_y_test = pd.get_dummies(y_test).values
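# Illustrative sanity check (not part of the original walkthrough): after the
# reshape and one-hot encoding we expect 784-dim inputs and 10-dim targets.
print(X_train.shape, ohe_y_train.shape)   # (60000, 784) (60000, 10)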
np.random.seed(seed)
model = Sequential([
Dense(512, input_dim=784, activation='relu'),
Dense(512, activation='relu'),
Dense(10, activation='softmax')
])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, ohe_y_train, batch_size=128, nb_epoch=5, verbose=1)
loss, metrics = model.evaluate(X_test, ohe_y_test, verbose=1)
print()
#print('Test loss:', loss)
print('Test accuracy:', metrics)
from sklearn.datasets import load_boston
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.preprocessing import MinMaxScaler, StandardScaler
# load + inspect data
boston = load_boston()
X = boston.data
y = boston.target
labels = boston.feature_names
b_df = pd.DataFrame(X, columns=labels)
b_df.head()
# built-in information about the dataset and features
#print(boston.get("DESCR"))
# standardize the feature data (all features now 0-1)
scaler = MinMaxScaler(feature_range=(0, 1))
X = scaler.fit_transform(X)
# train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, random_state=seed)
# build model
np.random.seed(seed)
model = Sequential([
# use a single hidden layer, also with 13 nodes
Dense(13, input_dim=13, activation='relu'),
Dense(1)
])
# compile + fit model
model.compile(loss='mean_squared_error', optimizer='rmsprop', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=5, nb_epoch=100, verbose=0)
# evaluate on test data
loss, metrics = model.evaluate(X_test, y_test, verbose=1)
#print('Test loss:', loss)
#print('Test accuracy:', metrics)
print('MSE:', metrics)
y_pred = model.predict(X_test)
print('R^2 score:', r2_score(y_test, y_pred))
plt.figure(figsize=(8,8))
# compare the predictions to test
plt.plot(y_test, y_pred, 'o', alpha=0.75, label='model predictions')
# draw a diagonal
xy = np.linspace(min(y_test), max(y_test))
plt.plot(xy, xy, '--', label='truth = pred')
plt.title('3-layer NN')
plt.xlabel('truth ($k)')
plt.ylabel('prediction ($k)')
plt.legend(loc='best')
model = LinearRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('R^2:', r2_score(y_test, y_pred))
plt.figure(figsize=(8,8))
# compare the predictions to test
plt.plot(y_test, y_pred, 'o', alpha=0.75, label='model predictions')
# draw the diagonal
xy = np.linspace(min(y_test), max(y_test))
plt.plot(xy, xy, '--', label='truth = pred')
plt.title('Linear Regression')
plt.xlabel('truth ($k)')
plt.ylabel('prediction ($k)')
plt.legend(loc='best')
plt.figure(figsize=(8,8))
# where to position the bars/ticks
locs = range(len(model.coef_))
plt.barh(locs, model.coef_, align='center')
plt.yticks(locs, b_df.columns);
plt.title('linear regression coefficients')
plt.xlabel('value')
plt.ylabel('coefficient')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Foreword
Step2: Not bad for less than ten lines of code!
Step3: Now, instead of a single string label as our output (prediction), we have a 3x1 array, where each array item represents one of the possible species, and the non-zero binary value gives us the information we need.
Step4: Define, compile the model
Step5: For now, we'll stick to a 3-layer network
Step6: Finally, we compile the model. This is where we can specify the optimizer, and loss function.
Step7: Finally, we fit() the compiled model using the original training data, including the one-hot-encoded labels.
Step8: We can evaluate() our accuracy by using that method on the test data; this is equivalent to sklearn's score().
Step9: Not bad!
Step10: Now, more compact...
Step11: Or - even more succinctly - we can build the same model but collapse the structure definition because of Keras' flexible API...
Step12: Cool! It seems to work pretty well.
Step13: Saving the model
Step14: Remember that the MNIST data is an array of 28-pixel by 28-pixel "images" (brightness values), 60k in the training set, 10k in the test set.
Step15: Preprocessing and normalization
Step16: Now we'll built another Sequential model.
Step17: The shape of this network is now
Step18: If you recall the 2015 4C leaderboard, a score of 98% would have put you in the top 10% of submissions!
Step19: Since the feature values span many orders of magnitude, we should standardize them for optimization efficiency. Then we can split the data into our train/test split.
Step20: Cool!
Step21: And get similar $R^2$ values with a much more interpretable model. We can compare the prediction errors to the same chart from before...
Step22: And - the reason why a linear model should often be preferred - we can just look straight at the feature coefficients and read off how they relate to the predictions
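For instance, a quick sketch (reusing the fitted model and b_df from the code above; illustrative, not part of the original notebook) that prints each coefficient next to its feature name:
for name, coef in zip(b_df.columns, model.coef_):
    print('{:>10}: {:+.3f}'.format(name, coef))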
|
15,326 | <ASSISTANT_TASK:>
Python Code:
%pylab inline --no-import-all
from lee2d import *
from create_test_histograms import *
create_test_histograms()
#check to make sure test_hists.root exists
!ls *root
# Specify the necessary info
max_local_significance = 4.
# choose u1, u2 thresholds for doing the scan.
# these are arbitrary
# if there are enough toys the choice shouldn't matter, but
# we may want to do some tests with other choices
this_u1, this_u2 = 0.1, 0.9
# Specify the root file with the histograms and their names
root_file_name = 'test_hists.root'
num_toy_scans = 25
names_of_toy_likleihood_scans = [('scan_toy_%d' %(i)) for i in range(11,num_toy_scans)]
def convert_hist_to_numpy(hist):
    """a little helper script"""
temp = np.zeros((hist.GetNbinsX(), hist.GetNbinsY()))
for i in range(temp.shape[0]):
for j in range(temp.shape[1]):
temp[i,j] = hist.GetBinContent(i+1, j+1)
return temp
# Read in histograms, convert them to numpy arrays
inFile = ROOT.TFile(root_file_name, 'READ')
likelihoodScans = []
for histName in names_of_toy_likleihood_scans:
inHist = inFile.Get(histName)
temp = convert_hist_to_numpy(inHist)
likelihoodScans.append(temp)
from scipy.ndimage import grey_closing, binary_closing
def fill_holes(array):
zero_array = array==0.
temp = grey_closing(array, size=2)*zero_array
return temp+array
def get_euler_characteristics(listOfScans, u1=0.1, u2=0.9):
    """Loop through the likelihood scans and calculate the expectation
    of the Euler characteristic for excursion sets above levels u1, u2."""
n_plots = 3
plt.figure(figsize=(9,n_plots*3))
phis = np.zeros((len(listOfScans),2))
for scan_no, scan in enumerate(listOfScans):
# fill holes from failures in original likelihood
scan = fill_holes(scan)
#get excursion sets above those two levels
exc1 = (scan>u1) + 0. #add 0. to convert from bool to double
exc2 = (scan>u2) + 0.
#print '\nu1,u2 = ', u1, u2
if scan_no < n_plots:
plt.subplot(n_plots,3,3*scan_no+1)
aspect = 1.*scan.shape[0]/scan.shape[1]
plt.imshow(scan.T, cmap='gray', aspect=aspect)
plt.subplot(n_plots,3,3*scan_no+2)
plt.imshow(exc1.T, cmap='gray', aspect=aspect)
plt.subplot(n_plots,3,3*scan_no+3)
plt.imshow(exc2.T, cmap='gray', aspect=aspect)
phi1 = calculate_euler_characteristic(exc1)
phi2 = calculate_euler_characteristic(exc2)
#print 'phi1, phi2 = ', phi1, phi2
phis[scan_no] = [phi1, phi2]
plt.savefig('islands.png')
print 'Exp phi_0=%f, phi_2=%f' %(mean(phis[:,0]), mean(phis[:,1]))
return mean(phis[:,0]), mean(phis[:,1])
expphi1, expphi2 = get_euler_characteristics(likelihoodScans, u1=this_u1, u2=this_u2)
global_p_value = do_LEE_correction(max_local_significance, this_u1, this_u2, expphi1, expphi2)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create some test histograms
Step4: Calculate global p-value based on maximum local significance and several toy likelihood scans
Step5: The columns of the below are for
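As a small illustrative sketch (assuming calculate_euler_characteristic from the imported lee2d module behaves as it is used above), the excursion-set logic can be checked on a toy array (Python 2 syntax to match the notebook):
toy = np.array([[0., 0.2, 0.8],
                [0.3, 0.95, 0.4],
                [0., 0.1, 0.]])
exc = (toy > 0.9) + 0.   # excursion set above u2 = 0.9
print 'Euler characteristic of toy excursion set:', calculate_euler_characteristic(exc)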
|
15,327 | <ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import make_blobs
X, y = make_blobs(centers=2, random_state=0)
print('X ~ n_samples x n_features:', X.shape)
print('y ~ n_samples:', y.shape)
print('\n5 primeros ejemplos:\n', X[:5, :])
print('\n5 primeras etiquetas:', y[:5])
plt.scatter(X[y == 0, 0], X[y == 0, 1],
c='blue', s=40, label='0')
plt.scatter(X[y == 1, 0], X[y == 1, 1],
c='red', s=40, label='1', marker='s')
plt.xlabel('primera característica')
plt.ylabel('segunda característica')
plt.legend(loc='upper right');
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.25,
random_state=1234,
stratify=y)
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
X_train.shape
y_train.shape
classifier.fit(X_train, y_train)
prediction = classifier.predict(X_test)
print(prediction)
print(y_test)
np.mean(prediction == y_test)
classifier.score(X_test, y_test)
classifier.score(X_train, y_train)
from figures import plot_2d_separator
plt.scatter(X[y == 0, 0], X[y == 0, 1],
c='blue', s=40, label='0')
plt.scatter(X[y == 1, 0], X[y == 1, 1],
c='red', s=40, label='1', marker='s')
plt.xlabel("primera característica")
plt.ylabel("segunda característica")
plot_2d_separator(classifier, X)
plt.legend(loc='upper right');
print(classifier.coef_)
print(classifier.intercept_)
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=20)
knn.fit(X_train, y_train)
plt.scatter(X[y == 0, 0], X[y == 0, 1],
c='blue', s=40, label='0')
plt.scatter(X[y == 1, 0], X[y == 1, 1],
c='red', s=40, label='1', marker='s')
plt.xlabel("primera característica")
plt.ylabel("segunda característica")
plot_2d_separator(knn, X)
plt.legend(loc='upper right');
knn.score(X_test, y_test)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Since the data are two-dimensional, we can represent each point in a coordinate system (x and y axes).
Step2: Classification is a supervised task, and since we are interested in its performance on data not used for training, we will split the data into two parts
Step3: The API of a scikit-learn estimator
Step4: Now, we instantiate the estimator
Step5: To build the model from our data, that is, to learn to classify new points, we call the fit function, passing it the training data and the corresponding labels (the desired output for the training data)
Step6: Some estimator methods return the estimator itself by default. That is, after running the code above, you will see the default parameters of this particular LogisticRegression instance. Another way to get the initialization parameters of an estimator is to use classifier.get_params(), which returns a dictionary of parameters.
Step7: We can compare the result with the true labels
Step8: We can evaluate our model quantitatively using the proportion of correctly classified patterns. This is called accuracy
Step9: There is a useful function, score, included in all scikit-learn classifiers to obtain this performance measure from the test data
Step10: Sometimes it is useful to compare the generalization performance (on the test set) with the training performance
Step11: LogisticRegression is a linear model, which means it will create a decision boundary that is linear in the input space. In 2D, this means it will generate a straight line to separate the blue points from the red ones
Step12: Estimated parameters
Step13: Another classifier
Step14: Now we are going to modify a parameter of KNeighborsClassifier so that only the nearest neighbor is examined
Step15: We fit the model with our training data.
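As a quick follow-up sketch (same estimator API as above; not part of the original notebook), other values of n_neighbors can be compared in the same way:
knn5 = KNeighborsClassifier(n_neighbors=5)
knn5.fit(X_train, y_train)
print(knn5.score(X_test, y_test))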
|
15,328 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@test {"skip": true}
!pip install tensorflow-lattice pydot
import tensorflow as tf
import collections
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
train_dataframe = pd.read_csv(
'https://raw.githubusercontent.com/wbakst/puzzles_data/master/train.csv')
train_dataframe.head()
test_dataframe = pd.read_csv(
'https://raw.githubusercontent.com/wbakst/puzzles_data/master/test.csv')
test_dataframe.head()
# Features:
# - star_rating rating out of 5 stars (1-5)
# - word_count number of words in the review
# - is_amazon 1 = reviewed on amazon; 0 = reviewed on artifact website
# - includes_photo if the review includes a photo of the puzzle
# - num_helpful number of people that found this review helpful
# - num_reviews total number of reviews for this puzzle (we construct)
#
# This ordering of feature names will be the exact same order that we construct
# our model to expect.
feature_names = [
'star_rating', 'word_count', 'is_amazon', 'includes_photo', 'num_helpful',
'num_reviews'
]
def extract_features(dataframe, label_name):
# First we extract flattened features.
flattened_features = {
feature_name: dataframe[feature_name].values.astype(float)
for feature_name in feature_names[:-1]
}
# Construct mapping from puzzle name to feature.
star_rating = collections.defaultdict(list)
word_count = collections.defaultdict(list)
is_amazon = collections.defaultdict(list)
includes_photo = collections.defaultdict(list)
num_helpful = collections.defaultdict(list)
labels = {}
# Extract each review.
for i in range(len(dataframe)):
row = dataframe.iloc[i]
puzzle_name = row['puzzle_name']
star_rating[puzzle_name].append(float(row['star_rating']))
word_count[puzzle_name].append(float(row['word_count']))
is_amazon[puzzle_name].append(float(row['is_amazon']))
includes_photo[puzzle_name].append(float(row['includes_photo']))
num_helpful[puzzle_name].append(float(row['num_helpful']))
labels[puzzle_name] = float(row[label_name])
# Organize data into list of list of features.
names = list(star_rating.keys())
star_rating = [star_rating[name] for name in names]
word_count = [word_count[name] for name in names]
is_amazon = [is_amazon[name] for name in names]
includes_photo = [includes_photo[name] for name in names]
num_helpful = [num_helpful[name] for name in names]
num_reviews = [[len(ratings)] * len(ratings) for ratings in star_rating]
labels = [labels[name] for name in names]
# Flatten num_reviews
flattened_features['num_reviews'] = [len(reviews) for reviews in num_reviews]
# Convert data into ragged tensors.
star_rating = tf.ragged.constant(star_rating)
word_count = tf.ragged.constant(word_count)
is_amazon = tf.ragged.constant(is_amazon)
includes_photo = tf.ragged.constant(includes_photo)
num_helpful = tf.ragged.constant(num_helpful)
num_reviews = tf.ragged.constant(num_reviews)
labels = tf.constant(labels)
# Now we can return our extracted data.
return (star_rating, word_count, is_amazon, includes_photo, num_helpful,
num_reviews), labels, flattened_features
train_xs, train_ys, flattened_features = extract_features(train_dataframe, 'Sales12-18MonthsAgo')
test_xs, test_ys, _ = extract_features(test_dataframe, 'SalesLastSixMonths')
# Let's define our label minimum and maximum.
min_label, max_label = float(np.min(train_ys)), float(np.max(train_ys))
LEARNING_RATE = 0.1
BATCH_SIZE = 128
NUM_EPOCHS = 500
MIDDLE_DIM = 3
MIDDLE_LATTICE_SIZE = 2
MIDDLE_KEYPOINTS = 16
OUTPUT_KEYPOINTS = 8
def compute_quantiles(features,
num_keypoints=10,
clip_min=None,
clip_max=None,
missing_value=None):
# Clip min and max if desired.
if clip_min is not None:
features = np.maximum(features, clip_min)
features = np.append(features, clip_min)
if clip_max is not None:
features = np.minimum(features, clip_max)
features = np.append(features, clip_max)
# Make features unique.
unique_features = np.unique(features)
# Remove missing values if specified.
if missing_value is not None:
unique_features = np.delete(unique_features,
np.where(unique_features == missing_value))
# Compute and return quantiles over unique non-missing feature values.
return np.quantile(
unique_features,
np.linspace(0., 1., num=num_keypoints),
interpolation='nearest').astype(float)
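# Illustrative check (not part of the original guide): compute_quantiles should
# return `num_keypoints` non-decreasing keypoint values for a feature column.
example_keypoints = compute_quantiles(
    flattened_features['star_rating'], num_keypoints=5)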
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
tfl.configs.FeatureConfig(
name='star_rating',
lattice_size=2,
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
flattened_features['star_rating'], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='word_count',
lattice_size=2,
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
flattened_features['word_count'], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='is_amazon',
lattice_size=2,
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='includes_photo',
lattice_size=2,
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='num_helpful',
lattice_size=2,
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
flattened_features['num_helpful'], num_keypoints=5),
# Larger num_helpful indicating more trust in star_rating.
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="star_rating", trust_type="trapezoid"),
],
),
tfl.configs.FeatureConfig(
name='num_reviews',
lattice_size=2,
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
flattened_features['num_reviews'], num_keypoints=5),
)
]
# Model config defines the model structure for the aggregate function model.
aggregate_function_model_config = tfl.configs.AggregateFunctionConfig(
feature_configs=feature_configs,
middle_dimension=MIDDLE_DIM,
middle_lattice_size=MIDDLE_LATTICE_SIZE,
middle_calibration=True,
middle_calibration_num_keypoints=MIDDLE_KEYPOINTS,
middle_monotonicity='increasing',
output_min=min_label,
output_max=max_label,
output_calibration=True,
output_calibration_num_keypoints=OUTPUT_KEYPOINTS,
output_initialization=np.linspace(
min_label, max_label, num=OUTPUT_KEYPOINTS))
# An AggregateFunction premade model constructed from the given model config.
aggregate_function_model = tfl.premade.AggregateFunction(
aggregate_function_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
aggregate_function_model, show_layer_names=False, rankdir='LR')
aggregation_layers = [
layer for layer in aggregate_function_model.layers
if isinstance(layer, tfl.layers.Aggregation)
]
tf.keras.utils.plot_model(
aggregation_layers[0].model, show_layer_names=False, rankdir='LR')
aggregate_function_model.compile(
loss='mae',
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
aggregate_function_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(aggregate_function_model.evaluate(test_xs, test_ys))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TF Lattice aggregate function models
Step2: Importing the required packages
Step3: Downloading the puzzles dataset
Step4: Extracting and converting features and labels
Step5: Setting the default values used for training in this guide
Step6: Feature configs
Step7: Defining the feature configs
Step8: Aggregate function model
Step9: The output of each aggregation layer is the averaged output of a calibrated lattice over the ragged inputs. The following is the model used inside the first aggregation layer.
Step10: Now, as with any other tf.keras.Model, we compile and fit the model to our data.
Step11: After training the model, we can evaluate it on the test set.
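A minimal usage sketch (assuming the trained aggregate_function_model and the ragged test_xs from the code above): predictions are obtained with the usual Keras call:
predictions = aggregate_function_model.predict(test_xs)
print(predictions[:5])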
|
15,329 | <ASSISTANT_TASK:>
Python Code:
from numpy import concatenate, array
from numpy.random import randn
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
num = 200
d1 = concatenate((randn(1,num),10.*randn(1,num)),0)
d2 = concatenate((randn(1,num),10.*randn(1,num)),0)+array([[10.],[0.]])
d3 = concatenate((randn(1,num),10.*randn(1,num)),0)+array([[0.],[100.]])
d4 = concatenate((randn(1,num),10.*randn(1,num)),0)+array([[10.],[100.]])
rectangle = concatenate((d1,d2,d3,d4),1)
totalPoints = 800
import matplotlib.pyplot as pyplot
%matplotlib inline
figure,axis = pyplot.subplots(1,1)
axis.plot(rectangle[0], rectangle[1], 'o', color='r', markersize=5)
axis.set_xlim(-5,15)
axis.set_ylim(-50,150)
axis.set_title('Toy data : Rectangle')
pyplot.show()
from shogun import *
import shogun as sg
train_features = features(rectangle)
# number of clusters
k = 2
# distance metric over feature matrix - Euclidean distance
distance = sg.distance('EuclideanDistance')
distance.init(train_features, train_features)
# KMeans object created
kmeans = KMeans(k, distance)
# KMeans training
kmeans.train()
# cluster centers
centers = kmeans.get_cluster_centers()
# Labels for data points
result = kmeans.apply()
def plotResult(title = 'KMeans Plot'):
figure,axis = pyplot.subplots(1,1)
for i in range(totalPoints):
if result[i]==0.0:
axis.plot(rectangle[0,i], rectangle[1,i], 'o', color='g', markersize=3)
else:
axis.plot(rectangle[0,i], rectangle[1,i], 'o', color='y', markersize=3)
axis.plot(centers[0,0], centers[1,0], 'ko', color='g', markersize=10)
axis.plot(centers[0,1], centers[1,1], 'ko', color='y', markersize=10)
axis.set_xlim(-5,15)
axis.set_ylim(-50,150)
axis.set_title(title)
pyplot.show()
plotResult('KMeans Results')
from numpy import array
initial_centers = array([[0.,10.],[50.,50.]])
# initial centers passed
kmeans = KMeans(k, distance, initial_centers)
# KMeans training
kmeans.train(train_features)
# cluster centers
centers = kmeans.get_cluster_centers()
# Labels for data points
result = kmeans.apply()
# plot the results
plotResult('Hand initialized KMeans Results 1')
new_initial_centers = array([[5.,5.],[0.,100.]])
# set new initial centers
kmeans.set_initial_centers(new_initial_centers)
# KMeans training
kmeans.train(train_features)
# cluster centers
centers = kmeans.get_cluster_centers()
# Labels for data points
result = kmeans.apply()
# plot the results
plotResult('Hand initialized KMeans Results 2')
# set flag for using KMeans++
kmeans = KMeans(k, distance, True)
# set KMeans++ flag
kmeans.set_use_kmeanspp(True)
# KMeans training
kmeans.train(train_features)
# cluster centers
centers = kmeans.get_cluster_centers()
# Labels for data points
result = kmeans.apply()
# plot the results
plotResult('KMeans with KMeans++ Results')
#unset KMeans++ flag
kmeans.set_use_kmeanspp(False)
# set training method to mini-batch
kmeans = KMeansMiniBatch(k, distance)
# set both parameters together batch size-2 and no. of iterations-100
kmeans.set_mb_params(2,100)
# OR
# set batch size-2
kmeans.set_batch_size(2)
# set no. of iterations-100
kmeans.set_mb_iter(100)
# KMeans training
kmeans.train(train_features)
# cluster centers
centers = kmeans.get_cluster_centers()
# Labels for data points
result = kmeans.apply()
# plot the results
plotResult('Mini-batch KMeans Results')
f = open(os.path.join(SHOGUN_DATA_DIR, 'uci/iris/iris.data'))
feats = []
# read data from file
for line in f:
words = line.rstrip().split(',')
feats.append([float(i) for i in words[0:4]])
f.close()
# create observation matrix
obsmatrix = array(feats).T
# plot the data
figure,axis = pyplot.subplots(1,1)
# First 50 data belong to Iris Sentosa, plotted in green
axis.plot(obsmatrix[2,0:50], obsmatrix[3,0:50], 'o', color='green', markersize=5)
# Next 50 data belong to Iris Versicolour, plotted in red
axis.plot(obsmatrix[2,50:100], obsmatrix[3,50:100], 'o', color='red', markersize=5)
# Last 50 data belong to Iris Virginica, plotted in blue
axis.plot(obsmatrix[2,100:150], obsmatrix[3,100:150], 'o', color='blue', markersize=5)
axis.set_xlim(-1,8)
axis.set_ylim(-1,3)
axis.set_title('3 varieties of Iris plants')
pyplot.show()
def apply_kmeans_iris(data):
# wrap to Shogun features
train_features = features(data)
# number of cluster centers = 3
k = 3
# distance function features - euclidean
distance = sg.distance('EuclideanDistance')
distance.init(train_features, train_features)
# initialize KMeans object
kmeans = KMeans(k, distance)
# use kmeans++ to initialize centers [play around: change it to False and compare results]
kmeans.set_use_kmeanspp(True)
# training method is Lloyd by default [play around: change it to mini-batch by uncommenting the following lines]
#kmeans.set_train_method(KMM_MINI_BATCH)
#kmeans.set_mbKMeans_params(20,30)
# training kmeans
kmeans.train(train_features)
# labels for data points
result = kmeans.apply()
return result
result = apply_kmeans_iris(obsmatrix)
# plot the clusters over the original points in 2 dimensions
figure,axis = pyplot.subplots(1,1)
for i in range(150):
if result[i]==0.0:
axis.plot(obsmatrix[2,i],obsmatrix[3,i],'ko',color='r', markersize=5)
elif result[i]==1.0:
axis.plot(obsmatrix[2,i],obsmatrix[3,i],'ko',color='g', markersize=5)
else:
axis.plot(obsmatrix[2,i],obsmatrix[3,i],'ko',color='b', markersize=5)
axis.set_xlim(-1,8)
axis.set_ylim(-1,3)
axis.set_title('Iris plants clustered based on attributes')
pyplot.show()
from numpy import ones, zeros
# first 50 are iris sensosa labelled 0, next 50 are iris versicolour labelled 1 and so on
labels = concatenate((zeros(50),ones(50),2.*ones(50)),0)
# bind labels assigned to Shogun multiclass labels
ground_truth = MulticlassLabels(array(labels,dtype='float64'))
from numpy import nonzero
def analyzeResult(result):
# shogun object for clustering accuracy
AccuracyEval = ClusteringAccuracy()
# changes the labels of result (keeping clusters intact) to produce a best match with ground truth
AccuracyEval.best_map(result, ground_truth)
# evaluates clustering accuracy
accuracy = AccuracyEval.evaluate(result, ground_truth)
# find out which sample points differ from actual labels (or ground truth)
compare = result.get_labels()-labels
diff = nonzero(compare)
return (diff,accuracy)
(diff,accuracy_4d) = analyzeResult(result)
print('Accuracy : ' + str(accuracy_4d))
# plot the difference between ground truth and predicted clusters
figure,axis = pyplot.subplots(1,1)
axis.plot(obsmatrix[2,:],obsmatrix[3,:],'x',color='black', markersize=5)
axis.plot(obsmatrix[2,diff],obsmatrix[3,diff],'x',color='r', markersize=7)
axis.set_xlim(-1,8)
axis.set_ylim(-1,3)
axis.set_title('Difference')
pyplot.show()
from numpy import dot
def apply_pca_to_data(target_dims):
train_features = features(obsmatrix)
submean = PruneVarSubMean(False)
submean.init(train_features)
submean.apply_to_feature_matrix(train_features)
preprocessor = PCA()
preprocessor.set_target_dim(target_dims)
preprocessor.init(train_features)
pca_transform = preprocessor.get_transformation_matrix()
new_features = dot(pca_transform.T, train_features)
return new_features
oneD_matrix = apply_pca_to_data(1)
figure,axis = pyplot.subplots(1,1)
# First 50 data belong to Iris Sentosa, plotted in green
axis.plot(oneD_matrix[0,0:50], zeros(50), 'o', color='green', markersize=5)
# Next 50 data belong to Iris Versicolour, plotted in red
axis.plot(oneD_matrix[0,50:100], zeros(50), 'o', color='red', markersize=5)
# Last 50 data belong to Iris Virginica, plotted in blue
axis.plot(oneD_matrix[0,100:150], zeros(50), 'o', color='blue', markersize=5)
axis.set_xlim(-5,5)
axis.set_ylim(-1,1)
axis.set_title('3 varieties of Iris plants')
pyplot.show()
result = apply_kmeans_iris(oneD_matrix)
(diff,accuracy_1d) = analyzeResult(result)
print('Accuracy : ' + str(accuracy_1d))
# plot the difference between ground truth and predicted clusters
figure,axis = pyplot.subplots(1,1)
axis.plot(oneD_matrix[0,:],zeros(150),'x',color='black', markersize=5)
axis.plot(oneD_matrix[0,diff],zeros(len(diff)),'x',color='r', markersize=7)
axis.set_xlim(-5,5)
axis.set_ylim(-1,1)
axis.set_title('Difference')
pyplot.show()
twoD_matrix = apply_pca_to_data(2)
figure,axis = pyplot.subplots(1,1)
# First 50 data belong to Iris Sentosa, plotted in green
axis.plot(twoD_matrix[0,0:50], twoD_matrix[1,0:50], 'o', color='green', markersize=5)
# Next 50 data belong to Iris Versicolour, plotted in red
axis.plot(twoD_matrix[0,50:100], twoD_matrix[1,50:100], 'o', color='red', markersize=5)
# Last 50 data belong to Iris Virginica, plotted in blue
axis.plot(twoD_matrix[0,100:150], twoD_matrix[1,100:150], 'o', color='blue', markersize=5)
axis.set_title('3 varieties of Iris plants')
pyplot.show()
result = apply_kmeans_iris(twoD_matrix)
(diff,accuracy_2d) = analyzeResult(result)
print('Accuracy : ' + str(accuracy_2d))
# plot the difference between ground truth and predicted clusters
figure,axis = pyplot.subplots(1,1)
axis.plot(twoD_matrix[0,:],twoD_matrix[1,:],'x',color='black', markersize=5)
axis.plot(twoD_matrix[0,diff],twoD_matrix[1,diff],'x',color='r', markersize=7)
axis.set_title('Difference')
pyplot.show()
threeD_matrix = apply_pca_to_data(3)
result = apply_kmeans_iris(threeD_matrix)
(diff,accuracy_3d) = analyzeResult(result)
print('Accuracy : ' + str(accuracy_3d))
# plot the difference between ground truth and predicted clusters
figure,axis = pyplot.subplots(1,1)
axis.plot(obsmatrix[2,:],obsmatrix[3,:],'x',color='black', markersize=5)
axis.plot(obsmatrix[2,diff],obsmatrix[3,diff],'x',color='r', markersize=7)
axis.set_title('Difference')
axis.set_xlim(-1,8)
axis.set_ylim(-1,3)
pyplot.show()
from scipy.interpolate import interp1d
from numpy import linspace
x = array([1, 2, 3, 4])
y = array([accuracy_1d, accuracy_2d, accuracy_3d, accuracy_4d])
f = interp1d(x, y)
xnew = linspace(1,4,10)
pyplot.plot(x,y,'o',xnew,f(xnew),'-')
pyplot.xlim([0,5])
pyplot.xlabel('no. of dims')
pyplot.ylabel('Clustering Accuracy')
pyplot.title('PCA Results')
pyplot.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The toy data created above consists of 4 gaussian blobs, having 200 points each, centered around the vertices of a rectancle. Let's plot it for convenience.
Step2: With data at our disposal, it is time to apply KMeans to it using the KMeans class in Shogun. First we construct Shogun features from our data
Step3: Next we specify the number of clusters we want and create a distance object specifying the distance metric to be used over our data for our KMeans training
Step4: Next, we create a KMeans object with our desired inputs/parameters and train
Step5: Now that training has been done, let's get the cluster centers and label for each data point
Step6: Finally let us plot the centers and the data points (in different colours for different clusters)
Step7: <b>Note
Step8: Now, let's first get results by repeating the rest of the steps
Step9: The other way to initialize centers by hand is as follows
Step10: Let's complete the rest of the code to get results.
Step11: Note the difference that inititial cluster centers can have on final result.
Step12: The other way to initilize using KMeans++ is as follows
Step13: Completing rest of the steps to get result
Step14: To switch back to random initialization, you may use
Step15: Training Methods
Step16: In mini-batch KMeans it is compulsory to set batch-size and number of iterations. These parameters can be set together or one after the other.
Step17: Completing the code to get results
Step18: Applying KMeans on Real Data
Step19: In the above plot we see that the data points labelled Iris Sentosa form a nice separate cluster of their own. But in case of other 2 varieties, while the data points of same label do form clusters of their own, there is some mixing between the clusters at the boundary. Now let us apply KMeans algorithm and see how well we can extract these clusters.
Step20: Now let us create a 2-D plot of the clusters formed making use of the two most important features (petal length and petal width) and compare it with the earlier plot depicting the actual labels of data points.
Step21: From the above plot, it can be inferred that the accuracy of KMeans algorithm is very high for Iris dataset. Don't believe me? Alright, then let us make use of one of Shogun's clustering evaluation techniques to formally validate the claim. But before that, we have to label each sample in the dataset with a label corresponding to the class to which it belongs.
Step22: Now we can compute clustering accuracy making use of the ClusteringAccuracy class in Shogun
Step23: In the above plot, wrongly clustered data points are marked in red. We see that the Iris Sentosa plants are perfectly clustered without error. The Iris Versicolour plants and Iris Virginica plants are also clustered with high accuracy, but there are some plant samples of either class that have been clustered with the wrong class. This happens near the boundary of the 2 classes in the plot and was well expected. Having mastered KMeans, it's time to move on to next interesting topic.
Step24: Next, let us get an idea of the data in 1-D by plotting it.
Step25: Let us now apply KMeans to the 1-D data to get clusters.
Step26: Now that we have the results, the inevitable step is to check how good these results are.
Step27: 2-Dimensional Representation
Step28: STEP 2
Step29: STEP 3
Step30: 3-Dimensional Representation
Step31: STEP 2
Step32: STEP 3
Step33: Finally, let us plot clustering accuracy vs. number of dimensions to consolidate our results.
|
15,330 | <ASSISTANT_TASK:>
Python Code:
def ex_func(ltr):
ltr = 1
print ltr # 변수 ltr을 print로 출력했습니다.
ex_func('a') # 입력된 변수가 잘 출력되었습니다.
def ex_func(ltr):
ltr = 1
print ltr
var = ex_func('a')
def ex_func(ltr):
ltr = 1
print ltr
var = ex_func('a')
print var # None이 출력됩니다. 실제로는 var에 아무런 값도 없다는 뜻입니다.
var + 1 # TypeError가 발생합니다. 실제로 var이 아무런 값도 가지지 않았기 때문입니다.
def ex_func(ltr):
ltr = 1
return ltr # print가 return으로 바뀌었습니다.
ex_func('a') # 언뜻보면 별반 다를게 없어보입니다.
var = ex_func('a')
print var
def ex_func(ltr):
ltr = 1
return ltr
var = ex_func('a')
var + 1
text = ['r', 'a', 'n', 'd', 'o', 'm']
text_filter = 'aeiouAEIOU'
for ltr in text:
if ltr in text_filter:
print ltr
# range(x, y): x이상 y미만의 숫자를 list 타입으로 반환합니다.
lists = range(1, 11)
print type(lists)
print lists
for x in [1, 2, 3]: # x에 1, 2, 3이 반복적으로 대입되면서
print x # 들여쓰기로 표현된 해당 라인의 명령문이 반복적으로 수행됩니다.
for x in [1, 2, 3]:
print x
x = 1
print x
x = 2
print x
x = 3
print x
# 수행결과가 완벽하게 동일합니다.
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the code above, we confirmed that the function 'ex_func' correctly prints the variable it receives.
Step2: At first glance, it looks as if the assigned value was stored correctly.
Step3: So how do we make the function ex_func actually hold a value?
Step4: This time exactly 1 is printed. This is the value assigned inside the function, which would normally disappear once we leave the function.
Step5: The addition works correctly and 2 is printed.
Step6: 7.1.12
Step7: 7.1.15
Step8: In conclusion, saying that 1, 2 and 3 are assigned to x one after another means that
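A compact recap sketch of the return-versus-print distinction discussed above (illustrative only, Python 2 syntax to match the notebook):
def returns_one(x):
    return 1

def prints_one(x):
    print 1

print returns_one('a') + 1   # 2: the returned value can be reused
print prints_one('a')        # prints 1, then None: nothing was returned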
|
15,331 | <ASSISTANT_TASK:>
Python Code:
import jax
import jax.numpy as jnp
from jax import lax
try:
from probml_utils import latexify, savefig
except:
%pip install git+https://github.com/probml/probml-utils.git
from probml_utils import latexify, savefig
try:
import blackjax
except:
%pip install blackjax
import blackjax
try:
from tensorflow_probability.substrates import jax as tfp
except ModuleNotFoundError:
%pip install -qqq tensorflow_probability
from tensorflow_probability.substrates import jax as tfp
try:
from rich import print
except ModuleNotFoundError:
%pip install -qqq rich
from rich import print
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
import warnings
import os
warnings.filterwarnings("ignore")
dist = tfp.distributions
plt.rc("font", size=10) # controls default text sizes
plt.rc("axes", labelsize=12) # fontsize of the x and y labels
plt.rc("legend", fontsize=12) # legend fontsize
plt.rc("figure", titlesize=15) # fontsize of the figure title
latexify(width_scale_factor=1, fig_height=1.5) # to apply latexify, set LATEXIFY=1 in environment variable
# helper functions
def prior_dist():
return dist.Beta(concentration1=1.0, concentration0=1.0)
def likelihood_dist(theta):
return dist.Bernoulli(probs=theta)
# Use same data as https://github.com/probml/probml-notebooks/blob/main/notebooks/beta_binom_approx_post_pymc.ipynb
key = jax.random.PRNGKey(128)
dataset = np.repeat([0, 1], (10, 1))
n_samples = len(dataset)
print(f"Dataset: {dataset}")
n_heads = dataset.sum()
n_tails = n_samples - n_heads
# closed form of beta posterior
a = prior_dist().concentration1
b = prior_dist().concentration0
exact_posterior = dist.Beta(concentration1=a + n_heads, concentration0=b + n_tails)
theta_range = jnp.linspace(0.01, 0.99, 100)
ax = plt.gca()
ax2 = ax.twinx()
posterior_prob = exact_posterior.prob(theta_range)
(plt2,) = ax2.plot(theta_range, posterior_prob, "g--", label="true posterior")
(plt3,) = ax2.plot(theta_range, prior_dist().prob(theta_range), label="Prior")
theta_map = theta_range[jnp.argmax(posterior_prob)]
y_max = posterior_prob.max()
# plt4 = ax2.vlines(theta_map,0,y_max ,label=f"$\\theta\_map={round(theta_map,2)}$", color="black", linestyle="-.")
likelihood = jax.vmap(lambda x: jnp.prod(likelihood_dist(x).prob(dataset)))(theta_range)
(plt1,) = ax.plot(theta_range, likelihood, "r-.", label="Likelihood")
ax.set_xlabel("theta")
ax.set_ylabel("Likelihood")
ax2.set_ylabel("Prior & Posterior")
ax2.legend(handles=[plt1, plt2, plt3], bbox_to_anchor=(1.6, 1));
def log_prior_likelihood_fn(theta):
likelihood_log_prob = likelihood_dist(theta).log_prob(dataset).sum() # log probability of likelihood
prior_log_prob = prior_dist().log_prob(theta) # log probability of prior
return likelihood_log_prob + prior_log_prob # log_prior_liklihood
logprob = lambda x: log_prior_likelihood_fn(**x)
inv_mass_matrix = jnp.array([5])
num_integration_steps = 60
step_size = 1e-3
hmc = blackjax.hmc(logprob, step_size, inv_mass_matrix, num_integration_steps)
initial_position = {"theta": 0.5}
initial_state = hmc.init(initial_position)
hmc_kernel = jax.jit(hmc.step)
def inference_loop(rng_key, kernel, initial_state, num_samples):
@jax.jit
def one_step(state, rng_key):
state, _ = kernel(rng_key, state)
return state, state
keys = jax.random.split(rng_key, num_samples)
_, states = jax.lax.scan(one_step, initial_state, keys)
return states
n_chains = 4
n_samples = 1000
keys = jax.random.split(key, n_chains)
chain_states = jax.vmap(inference_loop, in_axes=(0, None, None, None))(keys, hmc_kernel, initial_state, n_samples)
chains = chain_states.position["theta"].block_until_ready()
plt.figure()
plt.title("Density of samples")
plt.plot(theta_range, exact_posterior.prob(theta_range), "b-.", label="true posterior")
colors = ["tab:green", "tab:blue", "tab:orange", "tab:red"]
for no, chain in enumerate(chains):
sns.kdeplot(chain, clip=(0.0, 1.0), label=f"chain {no+1}", alpha=0.5, color=colors[no])
plt.xlabel("$\\theta$")
plt.ylabel("$p(\\theta)$")
sns.despine()
plt.legend();
plt.figure()
plt.title("Trace plot")
for no, chain in enumerate(chains):
plt.plot(chain, label=f"chain {no+1}", alpha=0.5)
plt.xlabel("sample")
plt.ylabel("$\\theta$")
sns.despine()
plt.legend();
LATEXIFY = "LATEXIFY" in os.environ
FIG_SIZE = (10, 2) if not LATEXIFY else None
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=FIG_SIZE)
ax1.set_title("Density of samples")
colors = ["tab:green", "tab:blue", "tab:orange", "tab:red"]
for no, chain in enumerate(chains):
sns.kdeplot(chain, ax=ax1, clip=(0.0, 1.0), label=f"chain {no+1}", color=colors[no])
ax1.set_xlabel("$\\theta$")
ax1.set_ylabel("$p(\\theta)$")
ax1.legend(bbox_to_anchor=(0.55, 1))
sns.despine()
ax2.set_title("Trace plot")
for no, chain in enumerate(chains):
ax2.plot(chain, label=f"chain {no+1}", alpha=0.5, color=colors[no])
ax2.set_xlabel("sample")
ax2.set_ylabel("$\\theta$")
sns.despine()
savefig("bb_hmc_trace") # to save figure set FIG_DIR="path/to/figure" enviornment variable
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset
Step2: Prior, Likelihood, and True Posterior
Step3: HMC using Blackjax
Step4: Density of samples and True posterior
Step5: Trace plot
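As a small cross-check sketch (reusing chains and exact_posterior from the code above; not part of the original notebook), the pooled sample mean can be compared with the exact Beta posterior mean:
pooled = chains.flatten()
print('MCMC mean:', pooled.mean(), ' exact mean:', exact_posterior.mean())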
|
15,332 | <ASSISTANT_TASK:>
Python Code:
# import the dataset
from quantopian.interactive.data.eventvestor import earnings_releases
# or if you want to import the free dataset, use:
# from quantopian.interactive.data.eventvestor import earnings_releases_free
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
# Let's use blaze to understand the data a bit using Blaze dshape()
earnings_releases.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
earnings_releases.count()
# Let's see what the data looks like. We'll grab the first three rows.
earnings_releases[:3]
# get apple's sid first
aapl_sid = symbols('AAPL').sid
aapl_earnings = earnings_releases[('2011-12-31' < earnings_releases['asof_date']) & (earnings_releases['asof_date'] <'2013-01-01') & (earnings_releases.sid==aapl_sid)]
# When displaying a Blaze Data Object, the printout is automatically truncated to ten rows.
aapl_earnings.sort('asof_date')
# manipulate with Blaze first:
big_earnings = earnings_releases[earnings_releases.revenue > 40000]
# now that we've got a much smaller object (len: ~2167 rows), we can convert it to a pandas DataFrame
df = odo(big_earnings, pd.DataFrame)
df = df[['sid', 'asof_date','revenue']].dropna()
df.sort('revenue',ascending=False)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's go over the columns
Step2: Now suppose we want a DataFrame of all earnings releases with revenue over 30 billion dollars. For those earnings releases, we only want the sid and the asof_date.
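A similar illustrative sketch (same Blaze-then-pandas pattern and column names as above) that also restricts by date before converting to a DataFrame:
big_earnings_2012 = earnings_releases[(earnings_releases.revenue > 30000) &
                                      (earnings_releases['asof_date'] > '2011-12-31')]
df_2012 = odo(big_earnings_2012, pd.DataFrame)
df_2012[['sid', 'asof_date', 'revenue']].head()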
|
15,333 | <ASSISTANT_TASK:>
Python Code:
import logging
logging.basicConfig(level=logging.INFO)
import warnings
import logging
warnings.simplefilter('default', DeprecationWarning)
logging.captureWarnings(True)
logging.basicConfig(level=logging.INFO)
import warnings
import logging
warnings.simplefilter('always', DeprecationWarning)
logging.captureWarnings(True)
logging.basicConfig(level=logging.DEBUG)
logging.basicConfig(level=logging.INFO)
logging.basicConfig(level=logging.DEBUG)
logging.basicConfig(filename='example.log',
level=logging.INFO)
import warnings
warnings.simplefilter('always', DeprecationWarning)
logging.captureWarnings(True)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This will prompt messages with the priority level of "information".
Step2: Maximum information (aka 'Information overload')
Step3: Adjusting the level of detail/verbosity
Step4: When debugging your program you might want to receive more detailed information
Step5: Creating logfiles
Step6: (Deprecation) warnings
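A minimal sketch of how the levels behave once one of the configurations above is in place:
logging.debug('only shown when the level is DEBUG')
logging.info('general progress information')
logging.warning('something unexpected, but the run continues')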
|
15,334 | <ASSISTANT_TASK:>
Python Code:
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from scipy.misc import imread
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from subprocess import check_output
print(check_output(["ls", "../input/train/"]).decode("utf8"))
sub_folders = check_output(["ls", "../input/train/"]).decode("utf8").strip().split('\n')
count_dict = {}
for sub_folder in sub_folders:
num_of_files = len(check_output(["ls", "../input/train/"+sub_folder]).decode("utf8").strip().split('\n'))
print("Number of files for the species",sub_folder,":",num_of_files)
count_dict[sub_folder] = num_of_files
plt.figure(figsize=(12,4))
sns.barplot(list(count_dict.keys()), list(count_dict.values()), alpha=0.8)
plt.xlabel('Fish Species', fontsize=12)
plt.ylabel('Number of Images', fontsize=12)
plt.show()
num_test_files = len(check_output(["ls", "../input/test_stg1/"]).decode("utf8").strip().split('\n'))
print("Number of test files present :", num_test_files)
train_path = "../input/train/"
sub_folders = check_output(["ls", train_path]).decode("utf8").strip().split('\n')
different_file_sizes = {}
for sub_folder in sub_folders:
file_names = check_output(["ls", train_path+sub_folder]).decode("utf8").strip().split('\n')
for file_name in file_names:
im_array = imread(train_path+sub_folder+"/"+file_name)
size = "_".join(map(str,list(im_array.shape)))
different_file_sizes[size] = different_file_sizes.get(size,0) + 1
plt.figure(figsize=(12,4))
sns.barplot(list(different_file_sizes.keys()), list(different_file_sizes.values()), alpha=0.8)
plt.xlabel('Image size', fontsize=12)
plt.ylabel('Number of Images', fontsize=12)
plt.title("Image size present in train dataset")
plt.xticks(rotation='vertical')
plt.show()
test_path = "../input/test_stg1/"
file_names = check_output(["ls", test_path]).decode("utf8").strip().split('\n')
different_file_sizes = {}
for file_name in file_names:
size = "_".join(map(str,list(imread(test_path+file_name).shape)))
different_file_sizes[size] = different_file_sizes.get(size,0) + 1
plt.figure(figsize=(12,4))
sns.barplot(list(different_file_sizes.keys()), list(different_file_sizes.values()), alpha=0.8)
plt.xlabel('File size', fontsize=12)
plt.ylabel('Number of Images', fontsize=12)
plt.xticks(rotation='vertical')
plt.title("Image size present in test dataset")
plt.show()
import random
import matplotlib.animation as animation
from matplotlib import animation, rc
from IPython.display import HTML
random.seed(12345)
train_path = "../input/train/"
sub_folders = check_output(["ls", train_path]).decode("utf8").strip().split('\n')
different_file_sizes = {}
all_files = []
for sub_folder in sub_folders:
file_names = check_output(["ls", train_path+sub_folder]).decode("utf8").strip().split('\n')
selected_files = random.sample(file_names, 10)
for file_name in selected_files:
all_files.append([sub_folder,file_name])
fig = plt.figure()
sns.set_style("whitegrid", {'axes.grid' : False})
img_file = "".join([train_path, sub_folder, "/", file_name])
im = plt.imshow(imread(img_file), vmin=0, vmax=255)
def updatefig(ind):
sub_folder = all_files[ind][0]
file_name = all_files[ind][1]
img_file = "".join([train_path, sub_folder, "/", file_name])
im.set_array(imread(img_file))
plt.title("Species : "+sub_folder, fontsize=15)
return im,
ani = animation.FuncAnimation(fig, updatefig, frames=len(all_files))
ani.save('lb.gif', fps=1, writer='imagemagick')
#rc('animation', html='html5')
#HTML(ani.to_html5_video())
plt.show()
import random
from subprocess import check_output
from scipy.misc import imread
import numpy as np
np.random.seed(2016)
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
batch_size = 1
nb_classes = 8
nb_epoch = 1
img_rows, img_cols, img_rgb = 500, 500, 3
nb_filters = 4
pool_size = (2, 2)
kernel_size = (3, 3)
input_shape = (img_rows, img_cols, 3)
species_map_dict = {
'ALB':0,
'BET':1,
'DOL':2,
'LAG':3,
'NoF':4,
'OTHER':5,
'SHARK':6,
'YFT':7
}
def batch_generator_train(sample_size):
train_path = "../input/train/"
all_files = []
y_values = []
sub_folders = check_output(["ls", train_path]).decode("utf8").strip().split('\n')
for sub_folder in sub_folders:
file_names = check_output(["ls", train_path+sub_folder]).decode("utf8").strip().split('\n')
for file_name in file_names:
all_files.append([sub_folder, '/', file_name])
y_values.append(species_map_dict[sub_folder])
number_of_images = range(len(all_files))
counter = 0
while True:
image_index = random.choice(number_of_images)
file_name = "".join([train_path] + all_files[image_index])
print(file_name)
y = [0]*8
y[y_values[image_index]] = 1
y = np.array(y).reshape(1,8)
im_array = imread(file_name)
X = np.zeros([1, img_rows, img_cols, img_rgb])
#X[:im_array.shape[0], :im_array.shape[1], 3] = im_array.copy().astype('float32')
X[0, :, :, :] = im_array[:500,:500,:].astype('float32')
X /= 255.
print(X.shape)
yield X,y
counter += 1
#if counter == sample_size:
# break
def batch_generator_test(all_files):
for file_name in all_files:
file_name = test_path + file_name
im_array = imread(file_name)
X = np.zeros([1, img_rows, img_cols, img_rgb])
X[0,:, :, :] = im_array[:500,:500,:].astype('float32')
X /= 255.
yield X
def keras_cnn_model():
model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
border_mode='valid',
input_shape=input_shape))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta')
return model
model = keras_cnn_model()
fit= model.fit_generator(
generator = batch_generator_train(100),
nb_epoch = 1,
samples_per_epoch = 100
)
test_path = "../input/test_stg1/"
all_files = []
file_names = check_output(["ls", test_path]).decode("utf8").strip().split('\n')
for file_name in file_names:
all_files.append(file_name)
#preds = model.predict_generator(generator=batch_generator_test(all_files), val_samples=len(all_files))
#out_df = pd.DataFrame(preds)
#out_df.columns = ['ALB', 'BET', 'DOL', 'LAG', 'NoF', 'OTHER', 'SHARK', 'YFT']
#out_df['image'] = all_files
#out_df.to_csv("sample_sub_keras.csv", index=False)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So there are 8 folders present inside the train folder, one for each species.
Step2: So the number of files for species ALB (Albacore tuna) is much higher than other species.
Step3: Image Size
Step4: So 720_1280_3 is the most common image size available in the train data and 10 different sizes are available.
Step6: Test set also has a very similar distribution.
Step7: Basic CNN Model using Keras
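A quick sanity-check sketch (illustrative only) for the generator defined above: pull one batch and confirm the shapes match the model input.
gen = batch_generator_train(1)
X_batch, y_batch = next(gen)
print(X_batch.shape, y_batch.shape)   # expected: (1, 500, 500, 3) and (1, 8)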
|
15,335 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np
from matplotlib import pyplot as plt
# Hyper Parameters
batch_size = 32
epochs = 10
num_classes=10
# Standard ConvNet
reg_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(6, 5, activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(16, 5, activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(120, activation='relu'),
tf.keras.layers.Dense(84, activation='relu'),
tf.keras.layers.Dense(num_classes, activation='softmax'),
])
# WeightNorm ConvNet
wn_model = tf.keras.Sequential([
tfa.layers.WeightNormalization(tf.keras.layers.Conv2D(6, 5, activation='relu')),
tf.keras.layers.MaxPooling2D(2, 2),
tfa.layers.WeightNormalization(tf.keras.layers.Conv2D(16, 5, activation='relu')),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tfa.layers.WeightNormalization(tf.keras.layers.Dense(120, activation='relu')),
tfa.layers.WeightNormalization(tf.keras.layers.Dense(84, activation='relu')),
tfa.layers.WeightNormalization(tf.keras.layers.Dense(num_classes, activation='softmax')),
])
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
# Convert class vectors to binary class matrices.
y_train = tf.keras.utils.to_categorical(y_train, num_classes)
y_test = tf.keras.utils.to_categorical(y_test, num_classes)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
reg_model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
reg_history = reg_model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
wn_model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
wn_history = wn_model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
reg_accuracy = reg_history.history['accuracy']
wn_accuracy = wn_history.history['accuracy']
plt.plot(np.linspace(0, epochs, epochs), reg_accuracy,
color='red', label='Regular ConvNet')
plt.plot(np.linspace(0, epochs, epochs), wn_accuracy,
color='blue', label='WeightNorm ConvNet')
plt.title('WeightNorm Accuracy Comparison')
plt.legend()
plt.grid(True)
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TensorFlow Addons Layers
Step2: Build Models
Step3: Load Data
Step4: Train Models
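A matching comparison sketch for the held-out data (reusing the two history objects above; assumes the TF2-style 'val_accuracy' key, which is recorded because validation_data was passed):
reg_val = reg_history.history['val_accuracy']
wn_val = wn_history.history['val_accuracy']
print('final val accuracy - regular: %.3f, weightnorm: %.3f' % (reg_val[-1], wn_val[-1]))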
|
15,336 | <ASSISTANT_TASK:>
Python Code:
dec_hidden_state = [5,1,20]
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Let's visualize our decoder hidden state
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(dec_hidden_state)), annot=True, cmap=sns.light_palette("purple", as_cmap=True), linewidths=1)
annotation = [3,12,45] #e.g. Encoder hidden state
# Let's visualize the single annotation
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(annotation)), annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
def single_dot_attention_score(dec_hidden_state, enc_hidden_state):
# TODO: return the dot product of the two vectors
return np.dot(dec_hidden_state, enc_hidden_state)
single_dot_attention_score(dec_hidden_state, annotation)
annotations = np.transpose([[3,12,45], [59,2,5], [1,43,5], [4,3,45.3]])
# Let's visualize our annotation (each column is an annotation)
ax = sns.heatmap(annotations, annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
def dot_attention_score(dec_hidden_state, annotations):
# TODO: return the product of dec_hidden_state transpose and enc_hidden_states
return np.matmul(np.transpose(dec_hidden_state), annotations)
attention_weights_raw = dot_attention_score(dec_hidden_state, annotations)
attention_weights_raw
def softmax(x):
x = np.array(x, dtype=np.float128)
e_x = np.exp(x)
return e_x / e_x.sum(axis=0)
attention_weights = softmax(attention_weights_raw)
attention_weights
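# Side note (a sketch, not required by the exercise): a numerically stabler softmax
# subtracts the maximum score before exponentiating, which avoids float64 overflow
# for large dot-product scores such as the ones above, while giving the same weights.
def softmax_stable(x):
    x = np.array(x, dtype=np.float64)
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=0)

softmax_stable(attention_weights_raw)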
def apply_attention_scores(attention_weights, annotations):
    # TODO: Multiply the annotations by their weights
return attention_weights * annotations
applied_attention = apply_attention_scores(attention_weights, annotations)
applied_attention
# Let's visualize our annotations after applying attention to them
ax = sns.heatmap(applied_attention, annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
def calculate_attention_vector(applied_attention):
return np.sum(applied_attention, axis=1)
attention_vector = calculate_attention_vector(applied_attention)
attention_vector
# Let's visualize the attention context vector
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(attention_vector)), annot=True, cmap=sns.light_palette("Blue", as_cmap=True), linewidths=1)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's visualize this vector
Step2: Our first scoring function will score a single annotation (encoder hidden state), which looks like this
Step3: IMPLEMENT
Step4: Annotations Matrix
Step5: And it can be visualized like this (each column is a hidden state of an encoder time step)
Step6: IMPLEMENT
Step7: Looking at these scores, can you guess which of the four vectors will get the most attention from the decoder at this time step?
Step8: Even when knowing which annotation will get the most focus, it's interesting to see how drastic softmax makes the end score become. The first and last annotation had the respective scores of 927 and 929. But after softmax, the attention they'll get is 0.119 and 0.880 respectively.
Step9: Let's visualize how the context vector looks now that we've applied the attention scores back on it
Step10: Contrast this with the raw annotations visualized earlier in the notebook, and we can see that the second and third annotations (columns) have been nearly wiped out. The first annotation maintains some of its value, and the fourth annotation is the most pronounced.
|
15,337 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
import ipywidgets
from IPython.display import Image
HTML('../style/code_toggle.html')
Image(filename='figures/hart_26m_15m_2012-09-11_08511.jpg')
Image(filename='figures/kaira_lba_element.jpg')
Image(filename='../5_Imaging/figures/2013_kat7_20.jpg')
def WhichDiameter(wavelength=1., angres=(15e-3/3600)):
    """Compute the diameter of an aperture as a function of angular resolution and observing wavelength."""
    c = 299792458. # speed of light, m/s
    freq = c/(wavelength)/1e6 # observing frequency in MHz
    D = 1.22 * wavelength/np.radians(angres) # assuming a circular aperture
    print('\n')
    print('At a frequency of %.3f MHz (Lambda = %.3f m)'%(freq, wavelength))
    print('the aperture diameter is D = %f m'%D)
    print('to achieve an angular resolution of %f degrees / %f arcmin / %f arcsec'%(angres, angres*60, angres*3600))
    print('\n')
w = ipywidgets.interact(WhichDiameter, angres=((15e-3/3600), 10, 1e-5), wavelength=(0.5e-6, 1, 1e-7))
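# A quick non-interactive sketch of the same relation D = 1.22 * lambda / theta,
# here for illustrative values: a 21 cm wavelength and 1 arcminute resolution.
wavelength_example = 0.21             # m
angres_example = 1.0/60.0             # 1 arcmin expressed in degrees
D_example = 1.22 * wavelength_example / np.radians(angres_example)
print('Required aperture diameter: %.1f m' % D_example)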
Image(filename='figures/gbt_300foot_telescope.jpg')
Image(filename='figures/gbt_300foot_collapse.jpg')
Image(filename='figures/arecibo_observatory.jpg')
Image(filename='figures/cartoon_1.png')
Image(filename='figures/cartoon_2.png')
Image(filename='figures/cartoon_3.png')
Image(filename='figures/cartoon_4.png')
Image(filename='figures/cartoon_5.png')
Image(filename='figures/cartoon_6.png')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import section specific modules
Step2: 1.10 The Limits of Single Dish Astronomy
Step3: Figure 1.10.1
Step4: Figure 1.10.2
Step6: Figure 1.10.3
Step7: 1.10.2 Physical limitations of single dishes
Step8: Figure 1.10.4a
Step9: Figure 1.10.4b
Step10: Figure 1.10.5
Step11: Figure 1.10.6a
Step12: Figure 1.10.6b
Step13: Figure 1.10.6c
Step14: Figure 1.10.6d
Step15: Figure 1.10.6e
|
15,338 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set_style('white')
from scipy.interpolate import griddata
x = np.hstack((np.linspace(-4,4,9), np.full(11, -5), np.linspace(-4,4,9), np.full(11, 5), [0]))
y = np.hstack((np.full(9,-5), np.linspace(-5, 5,11), np.full(9,5), np.linspace(-5,5,11), [0]))
f = np.hstack((np.zeros(20), np.zeros(20),[1.0]))
print(f)
plt.scatter(x, y);
assert x.shape==(41,)
assert y.shape==(41,)
assert f.shape==(41,)
assert np.count_nonzero(f)==1
xnew = np.linspace(-5, 5, 100)
ynew = np.linspace(-5, 5, 100)
Xnew, Ynew = np.meshgrid(xnew, ynew)
Fnew = griddata((x, y), f , (Xnew, Ynew), method='cubic')
plt.imshow(Fnew, extent=(-5,5,-5,5))
assert xnew.shape==(100,)
assert ynew.shape==(100,)
assert Xnew.shape==(100,100)
assert Ynew.shape==(100,100)
assert Fnew.shape==(100,100)
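# For comparison, a small sketch of the other interpolation methods supported by
# griddata ('nearest', 'linear', 'cubic') on the same scattered data.
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, method in zip(axes, ['nearest', 'linear', 'cubic']):
    F_method = griddata((x, y), f, (Xnew, Ynew), method=method)
    ax.imshow(F_method, extent=(-5, 5, -5, 5))
    ax.set_title(method)
plt.show()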
plt.contourf(Xnew, Ynew, Fnew, cmap='hot')
plt.colorbar(label='Z')
plt.box(False)
plt.title("The interpolated 2d grid of our data.")
plt.xlabel('X')
plt.ylabel('Y');
assert True # leave this to grade the plot
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sparse 2d interpolation
Step2: The following plot should show the points on the boundary and the single point in the interior
Step3: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain
Step4: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
|
15,339 | <ASSISTANT_TASK:>
Python Code:
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
train_X, train_Y, test_X, test_Y = load_2D_dataset()
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
    """Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
    parameters -- parameters learned by the model. They can then be used to predict.
    """
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
    """Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
    cost - value of the regularized loss function (formula (2))
    """
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = (lambd/(2*m))*(np.sum(np.square(W1))+np.sum(np.square(W2))+np.sum(np.square(W3)))
    ### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
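# A standalone numpy sketch (illustrative values only) showing how the L2 penalty
# term alone grows with lambd: larger lambd pushes the weights toward zero harder.
W_example = np.array([[1., -2.], [3., 0.5]])
m_example = 10
for lambd_example in [0.0, 0.1, 0.7]:
    penalty = (lambd_example / (2 * m_example)) * np.sum(np.square(W_example))
    print("lambd = {:.1f} -> L2 penalty = {:.4f}".format(lambd_example, penalty))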
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
    """Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
    gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
    """
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd/m)*W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd/m)*W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + (lambd/m)*W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
    """Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
    cache -- tuple, information stored for computing the backward propagation
    """
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0],A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = (D1<keep_prob) # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1*D1 # Step 3: shut down some neurons of A1
A1 = A1/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0],A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = (D2<keep_prob) # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = A2*D2 # Step 3: shut down some neurons of A2
A2 = A2/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
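# Quick sanity-check sketch (not part of the graded exercise): with keep_prob = 0.7,
# roughly 30% of the entries of a dropout mask should be zero on average.
mask_example = (np.random.rand(1000, 1000) < 0.7)
print("Fraction of units kept: {:.3f} (expected ~0.7)".format(mask_example.mean()))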
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
    """Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
    gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
    """
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2*D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1*D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem Statement
Step3: Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
Step4: Let's train the model without any regularization, and observe the accuracy on the train/test sets.
Step5: The train accuracy is 94.8% while the test accuracy is 91.5%. This is the baseline model (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
Step7: The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting.
Step9: Expected Output
Step10: Expected Output
Step11: Congrats, the test set accuracy increased to 93%. You have saved the French football team!
Step13: Observations
Step15: Expected Output
Step16: Expected Output
Step17: Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
|
15,340 | <ASSISTANT_TASK:>
Python Code:
# Put your code here!
# Put your code here!
# Put your code here!
# Put your code here!
# Put your code here!
from IPython.display import HTML
HTML("""
<iframe
src="https://goo.gl/forms/VwY5ods4ugnwidnG2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
""")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Task 2
Step2: Task 3
Step3: Task 4
Step4: Task 5
Step6: Assignment wrapup
|
15,341 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title Import and set ups{ display-mode: "form" }
import csv
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
dtype = tf.float64
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
plt.style.use('ggplot')
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
def load_insteval():
  """Loads the InstEval data set.
It contains 73,421 university lecture evaluations by students at ETH
Zurich with a total of 2,972 students, 2,160 professors and
lecturers, and several student, lecture, and lecturer attributes.
Implementation is built from the `observations` Python package.
Returns:
Tuple of np.ndarray `x_train` with 73,421 rows and 7 columns and
  dictionary `metadata` of column headers (feature names).
  """
url = ('https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/'
'lme4/InstEval.csv')
with requests.Session() as s:
download = s.get(url)
f = download.content.decode().splitlines()
iterator = csv.reader(f)
columns = next(iterator)[1:]
x_train = np.array([row[1:] for row in iterator], dtype=np.int)
metadata = {'columns': columns}
return x_train, metadata
data, metadata = load_insteval()
data = pd.DataFrame(data, columns=metadata['columns'])
data = data.rename(columns={'s': 'students',
'd': 'instructors',
'dept': 'departments',
'y': 'ratings'})
data['students'] -= 1 # start index by 0
# Remap categories to start from 0 and end at max(category).
data['instructors'] = data['instructors'].astype('category').cat.codes
data['departments'] = data['departments'].astype('category').cat.codes
train = data.sample(frac=0.8)
test = data.drop(train.index)
train.head()
get_value = lambda dataframe, key, dtype: dataframe[key].values.astype(dtype)
features_train = {
k: get_value(train, key=k, dtype=np.int32)
for k in ['students', 'instructors', 'departments', 'service']}
labels_train = get_value(train, key='ratings', dtype=np.float32)
features_test = {k: get_value(test, key=k, dtype=np.int32)
for k in ['students', 'instructors', 'departments', 'service']}
labels_test = get_value(test, key='ratings', dtype=np.float32)
num_students = max(features_train['students']) + 1
num_instructors = max(features_train['instructors']) + 1
num_departments = max(features_train['departments']) + 1
num_observations = train.shape[0]
print("Number of students:", num_students)
print("Number of instructors:", num_instructors)
print("Number of departments:", num_departments)
print("Number of observations:", num_observations)
class LinearMixedEffectModel(tf.Module):
def __init__(self):
# Set up fixed effects and other parameters.
# These are free parameters to be optimized in E-steps
self._intercept = tf.Variable(0., name="intercept") # alpha in eq
self._effect_service = tf.Variable(0., name="effect_service") # beta in eq
self._stddev_students = tfp.util.TransformedVariable(
1., bijector=tfb.Exp(), name="stddev_students") # sigma in eq
self._stddev_instructors = tfp.util.TransformedVariable(
1., bijector=tfb.Exp(), name="stddev_instructors") # sigma in eq
self._stddev_departments = tfp.util.TransformedVariable(
1., bijector=tfb.Exp(), name="stddev_departments") # sigma in eq
def __call__(self, features):
model = tfd.JointDistributionSequential([
# Set up random effects.
tfd.MultivariateNormalDiag(
loc=tf.zeros(num_students),
scale_identity_multiplier=self._stddev_students),
tfd.MultivariateNormalDiag(
loc=tf.zeros(num_instructors),
scale_identity_multiplier=self._stddev_instructors),
tfd.MultivariateNormalDiag(
loc=tf.zeros(num_departments),
scale_identity_multiplier=self._stddev_departments),
# This is the likelihood for the observed.
lambda effect_departments, effect_instructors, effect_students: tfd.Independent(
tfd.Normal(
loc=(self._effect_service * features["service"] +
tf.gather(effect_students, features["students"], axis=-1) +
tf.gather(effect_instructors, features["instructors"], axis=-1) +
tf.gather(effect_departments, features["departments"], axis=-1) +
self._intercept),
scale=1.),
reinterpreted_batch_ndims=1)
])
# To enable tracking of the trainable variables via the created distribution,
# we attach a reference to `self`. Since all TFP objects sub-class
# `tf.Module`, this means that the following is possible:
# LinearMixedEffectModel()(features_train).trainable_variables
# ==> tuple of all tf.Variables created by LinearMixedEffectModel.
model._to_track = self
return model
lmm_jointdist = LinearMixedEffectModel()
# Conditioned on feature/predictors from the training data
lmm_train = lmm_jointdist(features_train)
lmm_train.trainable_variables
lmm_train.resolve_graph()
target_log_prob_fn = lambda *x: lmm_train.log_prob(x + (labels_train,))
trainable_variables = lmm_train.trainable_variables
current_state = lmm_train.sample()[:-1]
# For debugging
target_log_prob_fn(*current_state)
# Set up E-step (MCMC).
hmc = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=target_log_prob_fn,
step_size=0.015,
num_leapfrog_steps=3)
kernel_results = hmc.bootstrap_results(current_state)
@tf.function(autograph=False, jit_compile=True)
def one_e_step(current_state, kernel_results):
next_state, next_kernel_results = hmc.one_step(
current_state=current_state,
previous_kernel_results=kernel_results)
return next_state, next_kernel_results
optimizer = tf.optimizers.Adam(learning_rate=.01)
# Set up M-step (gradient descent).
@tf.function(autograph=False, jit_compile=True)
def one_m_step(current_state):
with tf.GradientTape() as tape:
loss = -target_log_prob_fn(*current_state)
grads = tape.gradient(loss, trainable_variables)
optimizer.apply_gradients(zip(grads, trainable_variables))
return loss
num_warmup_iters = 1000
num_iters = 1500
num_accepted = 0
effect_students_samples = np.zeros([num_iters, num_students])
effect_instructors_samples = np.zeros([num_iters, num_instructors])
effect_departments_samples = np.zeros([num_iters, num_departments])
loss_history = np.zeros([num_iters])
# Run warm-up stage.
for t in range(num_warmup_iters):
current_state, kernel_results = one_e_step(current_state, kernel_results)
num_accepted += kernel_results.is_accepted.numpy()
if t % 500 == 0 or t == num_warmup_iters - 1:
print("Warm-Up Iteration: {:>3} Acceptance Rate: {:.3f}".format(
t, num_accepted / (t + 1)))
num_accepted = 0 # reset acceptance rate counter
# Run training.
for t in range(num_iters):
# run 5 MCMC iterations before every joint EM update
for _ in range(5):
current_state, kernel_results = one_e_step(current_state, kernel_results)
loss = one_m_step(current_state)
effect_students_samples[t, :] = current_state[0].numpy()
effect_instructors_samples[t, :] = current_state[1].numpy()
effect_departments_samples[t, :] = current_state[2].numpy()
num_accepted += kernel_results.is_accepted.numpy()
loss_history[t] = loss.numpy()
if t % 500 == 0 or t == num_iters - 1:
print("Iteration: {:>4} Acceptance Rate: {:.3f} Loss: {:.3f}".format(
t, num_accepted / (t + 1), loss_history[t]))
@tf.function(autograph=False, jit_compile=True)
def run_k_e_steps(k, current_state, kernel_results):
_, next_state, next_kernel_results = tf.while_loop(
cond=lambda i, state, pkr: i < k,
body=lambda i, state, pkr: (i+1, *one_e_step(state, pkr)),
loop_vars=(tf.constant(0), current_state, kernel_results)
)
return next_state, next_kernel_results
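# Illustrative usage of the helper above (a sketch; uncommenting it advances the
# same MCMC chain used by the training loop):
# current_state, kernel_results = run_k_e_steps(5, current_state, kernel_results)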
plt.plot(loss_history)
plt.ylabel(r'Loss $-\log$ $p(y\mid\mathbf{x})$')
plt.xlabel('Iteration')
plt.show()
for i in range(7):
plt.plot(effect_instructors_samples[:, i])
plt.legend([i for i in range(7)], loc='lower right')
plt.ylabel('Instructor Effects')
plt.xlabel('Iteration')
plt.show()
lmm_test = lmm_jointdist(features_test)
[
effect_students_mean,
effect_instructors_mean,
effect_departments_mean,
] = [
np.mean(x, axis=0).astype(np.float32) for x in [
effect_students_samples,
effect_instructors_samples,
effect_departments_samples
]
]
# Get the posterior predictive distribution
(*posterior_conditionals, ratings_posterior), _ = lmm_test.sample_distributions(
value=(
effect_students_mean,
effect_instructors_mean,
effect_departments_mean,
))
ratings_prediction = ratings_posterior.mean()
plt.title("Residuals for Predicted Ratings on Test Set")
plt.xlim(-4, 4)
plt.ylim(0, 800)
plt.hist(ratings_prediction - labels_test, 75)
plt.show()
plt.title("Histogram of Student Effects")
plt.hist(effect_students_mean, 75)
plt.show()
plt.title("Histogram of Instructor Effects")
plt.hist(effect_instructors_mean, 75)
plt.show()
plt.title("Histogram of Department Effects")
plt.hist(effect_departments_mean, 75)
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear Mixed Effects Models
Step2: Make things Fast!
Step4: Note
Step5: We load and preprocess the data set. We hold out 20% of the data so we can evaluate our fitted model on unseen data points. Below we visualize the first few rows.
Step6: We set up the data set in terms of a features dictionary of inputs and a labels output corresponding to the ratings. Each feature is encoded as an integer and each label (evaluation rating) is encoded as a floating point number.
Step7: Model
Step8: As a Probabilistic graphical program, we can also visualize the model's structure in terms of its computational graph. This graph encodes dataflow across the random variables in the program, making explicit their relationships in terms of a graphical model (Jordan, 2003).
Step9: Parameter Estimation
Step10: We perform a warm-up stage, which runs one MCMC chain for a number of iterations so that training may be initialized within the posterior's probability mass. We then run a training loop. It jointly runs the E and M-steps and records values during training.
Step11: You can also write the warmup for-loop into a tf.while_loop, and the training step into a tf.scan or tf.while_loop for even faster inference. For example
Step12: Above, we did not run the algorithm until a convergence threshold was detected. To check whether training was sensible, we verify that the loss function indeed tends to converge over training iterations.
Step13: We also use a trace plot, which shows the Markov chain Monte Carlo algorithm's trajectory across specific latent dimensions. Below we see that specific instructor effects indeed meaningfully transition away from their initial state and explore the state space. The trace plot also indicates that the effects differ across instructors but with similar mixing behavior.
Step14: Criticism
Step15: Upon visual inspection, the residuals look somewhat standard-normally distributed. However, the fit is not perfect
Step16: To explore how the model makes individual predictions, we look at the histogram of effects for students, instructors, and departments. This lets us understand how individual elements in a data point's feature vector tends to influence the outcome.
|
15,342 | <ASSISTANT_TASK:>
Python Code:
!pip install -U --pre tensorflow=="2.*"
!pip install tf_slim
!pip install pycocotools
import os
import pathlib
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
!git clone --depth 1 https://github.com/tensorflow/models
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
%%bash
cd models/research
pip install .
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1
# Patch the location of gfile
tf.gfile = tf.io.gfile
def load_model(model_name):
base_url = 'http://download.tensorflow.org/models/object_detection/'
model_file = model_name + '.tar.gz'
model_dir = tf.keras.utils.get_file(
fname=model_name,
origin=base_url + model_file,
untar=True)
model_dir = pathlib.Path(model_dir)/"saved_model"
model = tf.saved_model.load(str(model_dir))
return model
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = 'models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg")))
TEST_IMAGE_PATHS
model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
detection_model = load_model(model_name)
print(detection_model.signatures['serving_default'].inputs)
detection_model.signatures['serving_default'].output_dtypes
detection_model.signatures['serving_default'].output_shapes
def run_inference_for_single_image(model, image):
image = np.asarray(image)
# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis,...]
# Run inference
model_fn = model.signatures['serving_default']
output_dict = model_fn(input_tensor)
# All outputs are batches tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(output_dict.pop('num_detections'))
output_dict = {key:value[0, :num_detections].numpy()
for key,value in output_dict.items()}
output_dict['num_detections'] = num_detections
# detection_classes should be ints.
output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)
# Handle models with masks:
if 'detection_masks' in output_dict:
    # Reframe the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
output_dict['detection_masks'], output_dict['detection_boxes'],
image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
return output_dict
def show_inference(model, image_path):
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = np.array(Image.open(image_path))
# Actual detection.
output_dict = run_inference_for_single_image(model, image_np)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks_reframed', None),
use_normalized_coordinates=True,
line_thickness=8)
display(Image.fromarray(image_np))
for image_path in TEST_IMAGE_PATHS:
show_inference(detection_model, image_path)
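# A small sketch of filtering the raw detections by score before visualization,
# using the standard output_dict keys produced by run_inference_for_single_image.
image_np = np.array(Image.open(TEST_IMAGE_PATHS[0]))
output_dict = run_inference_for_single_image(detection_model, image_np)
keep = output_dict['detection_scores'] > 0.5
print('Detections above 0.5:', int(keep.sum()))
print('Class ids:', output_dict['detection_classes'][keep])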
model_name = "mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28"
masking_model = load_model(model_name)
masking_model.output_shapes
for image_path in TEST_IMAGE_PATHS:
show_inference(masking_model, image_path)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make sure you have pycocotools installed
Step2: Get tensorflow/models or cd to parent directory of the repository.
Step3: Compile protobufs and install the object_detection package
Step4: Imports
Step5: Import the object detection module.
Step6: Patches
Step7: Model preparation
Step8: Loading label map
Step9: For the sake of simplicity we will test on 2 images
Step10: Detection
Step11: Check the model's input signature, it expects a batch of 3-color images of type uint8
Step12: And returns several outputs
Step13: Add a wrapper function to call the model, and cleanup the outputs
Step14: Run it on each test image and show the results
Step15: Instance Segmentation
Step16: The instance segmentation model includes a detection_masks output
|
15,343 | <ASSISTANT_TASK:>
Python Code:
# Import the Python libraries we need
import pandas as pd
# Define a variable for the accidents data file
f = './data/accidents1k.csv'
# Use read_csv() to import the data
accidents = pd.read_csv(f,
sep=',',
header=0,
index_col=False,
parse_dates=True,
tupleize_cols=False,
error_bad_lines=False,
warn_bad_lines=True,
skip_blank_lines=True,
low_memory=False
)
# Run the head() command to see the top 5 rows of the data
accidents.head()
# Fill in the NaN values and check the DataFrame
accidents.fillna(value=0).head()
accidents.head()
# Fill the NaN values and ensure the DataFrame is indeed updated.
accidents.fillna(value=0,
inplace=True)
accidents.head()
# Let's take a look at the Date column
accidents['Date'].head()
# Define a function to convert a string to a date.
def convert_string_to_date(s):
    """Given a string, use the to_datetime function of Pandas to convert
    it to a datetime, and then return it.
    """
return pd.to_datetime(s)
# Apply the function to the Date column using the apply() function.
# Note: we do not have to explicitly pass in the value in the row being processed.
accidents['Date'] = accidents['Date'].apply(convert_string_to_date)
# Let's check it out.
accidents['Date'].head()
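# Equivalent alternative (a sketch): pd.to_datetime accepts a whole Series, so the
# same conversion can be done without apply(), which is usually faster.
pd.to_datetime(accidents['Date']).head()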
# Create a few dicts and a DataFrame to hold the mappings for the accident data
# Accident severity
severity = {
1: 'fatal',
2: 'serious',
3: 'fairly serious'
}
# Day of Week
days_of_week = {
1: 'Sunday',
2: 'Monday',
3: 'Tuesday',
4: 'Wednesday',
5: 'Thursday',
6: 'Friday',
7: 'Saturday',
0: 'Earlier this week'
}
# Road surfaces, updated to fit the sensationalism of a news broadcast
road_surfaces = {
1: 'dry',
2: 'wet',
3: 'snow-covered',
4: 'frosty',
5: 'flooded',
6: 'oily',
7: 'muddy',
-1: 'Data missing or out of range',
}
# Local Authority (District) - create a DataFrame from the CSV file
f = './data/accidents1k.csv'
# Use read_csv() to create a DataFrame from the local_authority_district mapping tab of the data dictionary.
# There are almost 1000 districts, hence I put them into a CSV file.
districts = pd.read_csv('./data/local_authority_district.csv',
sep=',',
header=0,
index_col=0,
parse_dates=False,
tupleize_cols=False,
error_bad_lines=False,
warn_bad_lines=True,
skip_blank_lines=True,
low_memory=False
)
# Define a function to create a one-sentence summary of the record.
def create_summary(day_of_week, accident_severity, road_surface, local_authority_district):
    """Create a one-sentence summary of the record.
    Parameters: integer values for the Day_of_Week, Accident_Severity,
    Road_Surface_Conditions and Local_Authority_(District) columns
    """
# Perform the value lookups in the dicts and DataFrame
dow = days_of_week[day_of_week]
sev = severity[accident_severity]
road = road_surfaces[road_surface]
lad = districts.loc[local_authority_district].label
# If the day of week was specified use the first sentence variation, otherwise use the second
# Yes, this is redundant and we could optimize it. I leave that to you!
if day_of_week != 0:
return "On {} a {} accident occured on a {} road in {}".format(dow, sev, road, lad)
else:
return "{} a {} accident occured on a {} road in {}".format(dow, sev, road, lad)
# Create a new column in the DataFrame and fill it with the summary produced by the create_summary function
# Pass in the parameters needed to create the summary
accidents['summary'] = accidents.apply(lambda x: create_summary(x['Day_of_Week'],
x['Accident_Severity'],
x['Road_Surface_Conditions'],
x['Local_Authority_(District)']), axis=1)
# Let's see some results!
accidents['summary'].head()
# Let's view an entire summary
accidents['summary'][0]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Ensuring Your Changes Stick
Step2: Hrm, it looks like the DataFrame is updated, but is it? I think not!
Step3: What the heck?! The missing values haven't actually been updated. So how do we make the change stick? Using the inplace=True argument like so...
Step4: Success! The DataFrame has now been updated.
Step6: According to Pandas, the Date is an object, meaning it doesn't actually see it as a date. Let's change that.
Step8: Voila! Our Date column is now a datetime.
|
15,344 | <ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger('error')
b = phoebe.default_binary()
b.add_dataset('lc', times=phoebe.linspace(-0.05, 0.05, 51), dataset='lc01')
print(b['ltte@compute'])
b['sma@binary'] = 100
b['q'] = 0.1
b.set_value_all('atm', 'blackbody')
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.run_compute(irrad_method='none', ltte=False, model='ltte_off')
b.run_compute(irrad_method='none', ltte=True, model='ltte_on')
afig, mplfig = b.plot(show=True)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Now let's add a light curve dataset to see how ltte affects the timings of eclipses.
Step3: Relevant Parameters
Step4: Comparing with and without ltte
Step5: We'll just ignore the fact that this will be a completely unphysical system since we'll leave the radii and temperatures alone despite somewhat ridiculous masses - but since the masses and radii disagree so much, we'll have to abandon atmospheres and use blackbody.
|
15,345 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import, division, print_function
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
x = tf.matmul([[1]], [[2, 3]])
print(x)
print(x.shape)
print(x.dtype)
import numpy as np
ndarray = np.ones([3, 3])
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
x = tf.random.uniform([3, 3])
print("Is there a GPU available: "),
print(tf.config.experimental.list_physical_devices("GPU"))
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
import time
def time_matmul(x):
start = time.time()
for loop in range(10):
tf.matmul(x, x)
result = time.time()-start
print("10 loops: {:0.2f}ms".format(1000*result))
# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
x = tf.random.uniform([1000, 1000])
assert x.device.endswith("CPU:0")
time_matmul(x)
# Force execution on GPU #0 if available
if tf.config.experimental.list_physical_devices("GPU"):
print("On GPU:")
with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
x = tf.random.uniform([1000, 1000])
assert x.device.endswith("GPU:0")
time_matmul(x)
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
# Create a CSV file
import tempfile
_, filename = tempfile.mkstemp()
with open(filename, 'w') as f:
  f.write("""Line 1
Line 2
Line 3
""")
ds_file = tf.data.TextLineDataset(filename)
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
ds_file = ds_file.batch(2)
print('Elements of ds_tensors:')
for x in ds_tensors:
print(x)
print('\nElements in ds_file:')
for x in ds_file:
print(x)
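# Optional sketch: tf.data transformations compose, e.g. prefetching prepares the
# next batch while the current one is being consumed.
ds_prefetched = ds_tensors.prefetch(tf.data.experimental.AUTOTUNE)
for x in ds_prefetched:
  print(x)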
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Customization basics
Step2: Import TensorFlow
Step3: Tensors
Step4: Each tf.Tensor has a shape and a datatype
Step5: The most obvious differences between NumPy arrays and tf.Tensors are
Step6: GPU acceleration
Step7: Device Names
Step9: Datasets
Step10: Apply transformations
Step11: Iterate
|
15,346 | <ASSISTANT_TASK:>
Python Code:
prop={'dfunc':dist_metric, 'outfile':"gaussian_example.txt", 'verbose':1, 'adapt_t': True, 'mpi': True}
prop={'dfunc':dist_metric, 'outfile':"gaussian_example.txt", 'verbose':1, 'adapt_t': True, 'pert_kernel':2,\
'mpi':True,'mpi_splitcomm': True, 'num_abc': 4}
#to run on 4 threads
prop={'dfunc':dist_metric, 'outfile':"gaussian_example.txt", 'verbose':1, 'adapt_t': True, 'mp': True, 'num_proc':4}
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can then run the sample script in the examples folder to run the gaussian example on e.g. 16 processors using
Step2: An additional flag which needs to be set is 'num_abc', which specifies how many processors are to be allocated to the abc sampler. The rest of the processors are divided evenly amongst these processors to use in running the simulation. Note as processor 0 controls many of the communications it is not involved in the sampling.
|
15,347 | <ASSISTANT_TASK:>
Python Code:
MY_VERSION = 1,0
import sys
import datetime
import numpy as np
import pandas as pd
import pandas_datareader as pdr
import pandas_datareader.data as pdr_web
import quandl as ql
from matplotlib import __version__ as matplotlib_version
from seaborn import __version__ as seaborn_version
# Load Quandl API key
import json
with open('quandl_key.json','r') as f:
quandl_api_key = json.load(f)
ql.ApiConfig.api_key = quandl_api_key['API-key']
print('Notebook version:', '.'.join(map(str, MY_VERSION)))
print('Python version:', ".".join(map(str, sys.version_info[0:3])))
print('---')
print('NumPy:', np.__version__)
print('Pandas:', pd.__version__)
print('pandas-datareader:', pdr.__version__)
print('Quandl:', ql.version.VERSION)
print('Matplotlib:', matplotlib_version)
print('Seaborn:', seaborn_version)
start_date = datetime.datetime(2005, 1, 1)
end_date = datetime.datetime(2008, 6, 1)
data = pdr_web.DataReader("NYSEARCA:USO", 'google', start=start_date, end=end_date)
data.head()
import matplotlib.pyplot as plt
#data['Close'].plot()
data['Close'].plot(figsize=(18, 7), grid=True)
plt.show()
daily_pct_change = data['Close'].pct_change()
# replace NA values with zero
daily_pct_change.fillna(0, inplace=True)
daily_pct_change.head()
#daily_log_returns = np.log(data['Close'].pct_change()+1)
daily_log_returns = np.log(daily_pct_change+1)
daily_log_returns.head()
monthly = data.resample('BM').apply(lambda x: x[-1])
monthly_pct = monthly['Close'].pct_change()
print(monthly_pct.head())
quarter = data.resample("4M").mean()
quarter_pct = quarter.pct_change()
quarter_pct.head()
daily_pct_change = data['Close'] / data['Close'].shift(1) - 1
print(daily_pct_change.head())
daily_log_returns_shift = np.log(data['Close'] / data['Close'].shift(1))
daily_log_returns_shift.head()
# bins = the number of displayed values/bars, which determines how fine-grained the chart is
daily_pct_change.hist(bins=50)
plt.show()
print(daily_pct_change.describe())
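# A short extra sketch: annualized volatility estimated from the daily percentage
# changes, assuming roughly 252 trading days per year.
annualized_volatility = daily_pct_change.std() * np.sqrt(252)
print(annualized_volatility)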
cum_daily_return = (1 + daily_pct_change).cumprod()
print(cum_daily_return.tail())
cum_daily_return.plot(figsize=(12,6))
plt.show()
cum_monthly_return = cum_daily_return.resample("M").mean()
cum_monthly_return.plot(figsize=(12,6))
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: List of sources
Step2: Basic data visualization
Step3: The seaborn library is a layer on top of matplotlib that provides a visually cleaner style for statistical data. See the Seaborn project website.
Step4: For a better overview of returns over time, growth on a logarithmic scale comes in handy. The NumPy library and its log function are used here.
Step5: Percentage change over other periods
Step6: An alternative to pct_change
Step7: Distribution plot of the daily percentage returns
Step8: Cumulative daily rate of return
Step9: Or resampled to a monthly cumulative return
|
15,348 | <ASSISTANT_TASK:>
Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
!pip install -q apache_beam
!pip install -q 'scikit_learn~=0.23.0' # For gaussian_random_matrix.
!pip install -q annoy
import os
import sys
import pathlib
import pickle
from collections import namedtuple
from datetime import datetime
import numpy as np
import apache_beam as beam
import annoy
from sklearn.random_projection import gaussian_random_matrix
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub
# TFT needs to be installed afterwards
!pip install -q tensorflow_transform==0.24
import tensorflow_transform as tft
import tensorflow_transform.beam as tft_beam
print('TF version: {}'.format(tf.__version__))
print('TF-Hub version: {}'.format(hub.__version__))
print('TF-Transform version: {}'.format(tft.__version__))
print('Apache Beam version: {}'.format(beam.__version__))
!wget 'https://dataverse.harvard.edu/api/access/datafile/3450625?format=tab&gbrecs=true' -O raw.tsv
!wc -l raw.tsv
!head raw.tsv
!rm -r corpus
!mkdir corpus
with open('corpus/text.txt', 'w') as out_file:
with open('raw.tsv', 'r') as in_file:
for line in in_file:
headline = line.split('\t')[1].strip().strip('"')
out_file.write(headline+"\n")
!tail corpus/text.txt
def load_module(module_url):
embed_module = hub.Module(module_url)
placeholder = tf.placeholder(dtype=tf.string)
embed = embed_module(placeholder)
session = tf.Session()
session.run([tf.global_variables_initializer(), tf.tables_initializer()])
print('TF-Hub module is loaded.')
def _embeddings_fn(sentences):
computed_embeddings = session.run(
embed, feed_dict={placeholder: sentences})
return computed_embeddings
return _embeddings_fn
encoder = None
def embed_text(text, module_url, random_projection_matrix):
# Beam will run this function in different processes that need to
# import hub and load embed_fn (if not previously loaded)
global encoder
if not encoder:
encoder = hub.Module(module_url)
embedding = encoder(text)
if random_projection_matrix is not None:
# Perform random projection for the embedding
embedding = tf.matmul(
embedding, tf.cast(random_projection_matrix, embedding.dtype))
return embedding
def make_preprocess_fn(module_url, random_projection_matrix=None):
'''Makes a tft preprocess_fn'''
def _preprocess_fn(input_features):
'''tft preprocess_fn'''
text = input_features['text']
# Generate the embedding for the input text
embedding = embed_text(text, module_url, random_projection_matrix)
output_features = {
'text': text,
'embedding': embedding
}
return output_features
return _preprocess_fn
def create_metadata():
'''Creates metadata for the raw data'''
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import schema_utils
feature_spec = {'text': tf.FixedLenFeature([], dtype=tf.string)}
schema = schema_utils.schema_from_feature_spec(feature_spec)
metadata = dataset_metadata.DatasetMetadata(schema)
return metadata
def run_hub2emb(args):
'''Runs the embedding generation pipeline'''
options = beam.options.pipeline_options.PipelineOptions(**args)
args = namedtuple("options", args.keys())(*args.values())
raw_metadata = create_metadata()
converter = tft.coders.CsvCoder(
column_names=['text'], schema=raw_metadata.schema)
with beam.Pipeline(args.runner, options=options) as pipeline:
with tft_beam.Context(args.temporary_dir):
# Read the sentences from the input file
sentences = (
pipeline
| 'Read sentences from files' >> beam.io.ReadFromText(
file_pattern=args.data_dir)
| 'Convert to dictionary' >> beam.Map(converter.decode)
)
sentences_dataset = (sentences, raw_metadata)
preprocess_fn = make_preprocess_fn(args.module_url, args.random_projection_matrix)
# Generate the embeddings for the sentence using the TF-Hub module
embeddings_dataset, _ = (
sentences_dataset
| 'Extract embeddings' >> tft_beam.AnalyzeAndTransformDataset(preprocess_fn)
)
embeddings, transformed_metadata = embeddings_dataset
# Write the embeddings to TFRecords files
embeddings | 'Write embeddings to TFRecords' >> beam.io.tfrecordio.WriteToTFRecord(
file_path_prefix='{}/emb'.format(args.output_dir),
file_name_suffix='.tfrecords',
coder=tft.coders.ExampleProtoCoder(transformed_metadata.schema))
def generate_random_projection_weights(original_dim, projected_dim):
random_projection_matrix = None
if projected_dim and original_dim > projected_dim:
random_projection_matrix = gaussian_random_matrix(
n_components=projected_dim, n_features=original_dim).T
    print("A Gaussian random weight matrix was created with shape {}".format(random_projection_matrix.shape))
print('Storing random projection matrix to disk...')
with open('random_projection_matrix', 'wb') as handle:
pickle.dump(random_projection_matrix,
handle, protocol=pickle.HIGHEST_PROTOCOL)
return random_projection_matrix
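# A rough, illustrative way to sanity-check the choice of projected_dim is the
# Johnson-Lindenstrauss bound from scikit-learn (below, projected_dim is simply set to 64).
from sklearn.random_projection import johnson_lindenstrauss_min_dim
print(johnson_lindenstrauss_min_dim(n_samples=1000000, eps=0.3))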
module_url = 'https://tfhub.dev/google/universal-sentence-encoder/2' #@param {type:"string"}
projected_dim = 64 #@param {type:"number"}
import tempfile
output_dir = pathlib.Path(tempfile.mkdtemp())
temporary_dir = pathlib.Path(tempfile.mkdtemp())
g = tf.Graph()
with g.as_default():
original_dim = load_module(module_url)(['']).shape[1]
random_projection_matrix = None
if projected_dim:
random_projection_matrix = generate_random_projection_weights(
original_dim, projected_dim)
args = {
'job_name': 'hub2emb-{}'.format(datetime.utcnow().strftime('%y%m%d-%H%M%S')),
'runner': 'DirectRunner',
'batch_size': 1024,
'data_dir': 'corpus/*.txt',
'output_dir': output_dir,
'temporary_dir': temporary_dir,
'module_url': module_url,
'random_projection_matrix': random_projection_matrix,
}
print("Pipeline args are set.")
args
!rm -r {output_dir}
!rm -r {temporary_dir}
print("Running pipeline...")
%time run_hub2emb(args)
print("Pipeline is done.")
!ls {output_dir}
import itertools
embed_file = os.path.join(output_dir, 'emb-00000-of-00001.tfrecords')
sample = 5
record_iterator = tf.io.tf_record_iterator(path=embed_file)
for string_record in itertools.islice(record_iterator, sample):
example = tf.train.Example()
example.ParseFromString(string_record)
text = example.features.feature['text'].bytes_list.value
embedding = np.array(example.features.feature['embedding'].float_list.value)
print("Embedding dimensions: {}".format(embedding.shape[0]))
print("{}: {}".format(text, embedding[:10]))
def build_index(embedding_files_pattern, index_filename, vector_length,
metric='angular', num_trees=100):
'''Builds an ANNOY index'''
annoy_index = annoy.AnnoyIndex(vector_length, metric=metric)
# Mapping between the item and its identifier in the index
mapping = {}
embed_files = tf.gfile.Glob(embedding_files_pattern)
print('Found {} embedding file(s).'.format(len(embed_files)))
item_counter = 0
for f, embed_file in enumerate(embed_files):
print('Loading embeddings in file {} of {}...'.format(
f+1, len(embed_files)))
record_iterator = tf.io.tf_record_iterator(
path=embed_file)
for string_record in record_iterator:
example = tf.train.Example()
example.ParseFromString(string_record)
text = example.features.feature['text'].bytes_list.value[0].decode("utf-8")
mapping[item_counter] = text
embedding = np.array(
example.features.feature['embedding'].float_list.value)
annoy_index.add_item(item_counter, embedding)
item_counter += 1
if item_counter % 100000 == 0:
print('{} items loaded to the index'.format(item_counter))
print('A total of {} items added to the index'.format(item_counter))
print('Building the index with {} trees...'.format(num_trees))
annoy_index.build(n_trees=num_trees)
print('Index is successfully built.')
print('Saving index to disk...')
annoy_index.save(index_filename)
print('Index is saved to disk.')
print("Index file size: {} GB".format(
round(os.path.getsize(index_filename) / float(1024 ** 3), 2)))
annoy_index.unload()
print('Saving mapping to disk...')
with open(index_filename + '.mapping', 'wb') as handle:
pickle.dump(mapping, handle, protocol=pickle.HIGHEST_PROTOCOL)
print('Mapping is saved to disk.')
print("Mapping file size: {} MB".format(
round(os.path.getsize(index_filename + '.mapping') / float(1024 ** 2), 2)))
embedding_files = "{}/emb-*.tfrecords".format(output_dir)
embedding_dimension = projected_dim
index_filename = "index"
!rm {index_filename}
!rm {index_filename}.mapping
%time build_index(embedding_files, index_filename, embedding_dimension)
!ls
index = annoy.AnnoyIndex(embedding_dimension)
index.load(index_filename, prefault=True)
print('Annoy index is loaded.')
with open(index_filename + '.mapping', 'rb') as handle:
mapping = pickle.load(handle)
print('Mapping file is loaded.')
def find_similar_items(embedding, num_matches=5):
'''Finds similar items to a given embedding in the ANN index'''
ids = index.get_nns_by_vector(
embedding, num_matches, search_k=-1, include_distances=False)
items = [mapping[i] for i in ids]
return items
# Load the TF-Hub module
print("Loading the TF-Hub module...")
g = tf.Graph()
with g.as_default():
embed_fn = load_module(module_url)
print("TF-Hub module is loaded.")
random_projection_matrix = None
if os.path.exists('random_projection_matrix'):
print("Loading random projection matrix...")
with open('random_projection_matrix', 'rb') as handle:
random_projection_matrix = pickle.load(handle)
print('random projection matrix is loaded.')
def extract_embeddings(query):
'''Generates the embedding for the query'''
query_embedding = embed_fn([query])[0]
if random_projection_matrix is not None:
query_embedding = query_embedding.dot(random_projection_matrix)
return query_embedding
extract_embeddings("Hello Machine Learning!")[:10]
#@title { run: "auto" }
query = "confronting global challenges" #@param {type:"string"}
print("Generating embedding for the query...")
%time query_embedding = extract_embeddings(query)
print("")
print("Finding relevant items in the index...")
%time items = find_similar_items(query_embedding, 10)
print("")
print("Results:")
print("=========")
for item in items:
print(item)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Semantic search with nearest neighbors and text embeddings
Step2: Import the required libraries.
Step3: 1. Download the sample data
Step4: For simplicity, keep only the headline text and drop the publication date.
Step5: Helper function for loading the TF-Hub module
Step6: 2. Generate embeddings for the data
Step7: Create the TFT preprocess_fn method
Step8: Create the dataset metadata
Step9: Beam pipeline
Step10: Generate the random projection weight matrix
Step11: Set the parameters
Step12: Run the pipeline
Step13: Read a few of the generated embeddings.
Step14: 3. Build the ANN index for the embeddings
Step15: 4. Use the index for similarity matching
Step16: Similarity matching method
Step17: Extract the embedding for a given query
Step18: Enter a query and search for the most similar items
|
15,349 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import random
import numpy as np
import matplotlib.pyplot as plt
import scipy
import scipy.io
from scipy import sparse
import uncurl
data_z = scipy.io.loadmat('../data/GSE60361_dat.mat')
data_10x = scipy.io.loadmat('../data/10x_pooled_400.mat')
data_z_mat = data_z['Dat']
genes_z = uncurl.max_variance_genes(data_z_mat, 1, 0.2)
data_z_mat = data_z_mat[genes_z, :]
data_10x_mat = data_10x['data']
genes_10x = uncurl.max_variance_genes(data_10x_mat, 1, 0.2)
data_10x_mat = data_10x_mat[genes_10x, :]
data_z_mat.shape
data_10x_mat.shape
p_impute = 0.2
data_z_mat = sparse.coo_matrix(data_z_mat)
data_10x_mat = sparse.coo_matrix(data_10x_mat)
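# Randomly pick a fraction p_impute of the nonzero entries in each matrix; these
# entries are zeroed out below and kept aside as ground truth for the imputation test.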
indices_z = random.sample(range(len(data_z_mat.data)), int(p_impute*len(data_z_mat.data)))
indices_10x = random.sample(range(len(data_10x_mat.data)), int(p_impute*len(data_10x_mat.data)))
# save deleted data for comparison
true_values_z = data_z_mat.data[indices_z]
sampled_rows_z = data_z_mat.row[indices_z]
sampled_cols_z = data_z_mat.col[indices_z]
true_values_10x = data_10x_mat.data[indices_10x]
sampled_rows_10x = data_10x_mat.row[indices_10x]
sampled_cols_10x = data_10x_mat.col[indices_10x]
# do the data deletion
data_z_mat.data[indices_z] = 0
data_z_mat.eliminate_zeros()
data_z_mat = sparse.csc_matrix(data_z_mat)
data_10x_mat.data[indices_10x] = 0
data_10x_mat.eliminate_zeros()
data_10x_mat = sparse.csc_matrix(data_10x_mat)
(data_z_mat.max(1).toarray() == 0).sum()
(data_10x_mat.max(1).toarray() == 0).sum()
k_z = 7
%time M_z, W_z, ll = uncurl.poisson_estimate_state(data_z_mat, k_z, threads=4, disp=False)
k_10x = 8
%time M_10x, W_10x, ll = uncurl.poisson_estimate_state(data_10x_mat, k_10x, threads=4, disp=False)
def calculate_imputation_error(sampled_rows, sampled_cols, indices, true_values, new_matrix):
"""Returns imputed values, rmse, correlation, spearman."""
imputed_values = []
for i in range(len(indices)):
imputed_value = new_matrix[sampled_rows[i], sampled_cols[i]]
imputed_values.append(imputed_value)
imputed_values = np.array(imputed_values)
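# Note: the errors are divided by the true values, so this is a relative RMSE
# rather than an absolute one.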
rmse = np.sqrt(np.mean(((imputed_values - true_values)/true_values)**2))
print('rmse:', rmse)
correlation = np.corrcoef(imputed_values, true_values)[0,1]
print('correlation:', correlation)
spearman = scipy.stats.spearmanr(imputed_values, true_values).correlation
print('spearman:', spearman)
return imputed_values, rmse, correlation, spearman
print('Results for 10x imputation with UNCURL:')
MW = M_10x.dot(W_10x)
imputed_vals, rmse, corr, sp = calculate_imputation_error(sampled_rows_10x,
sampled_cols_10x,
indices_10x,
true_values_10x, MW)
print()
print('Results for Zeisel imputation with UNCURL:')
MW = M_z.dot(W_z)
imputed_vals, rmse, corr, sp = calculate_imputation_error(sampled_rows_z,
sampled_cols_z,
indices_z,
true_values_z, MW)
import magic
import pandas as pd
table_z = pd.DataFrame(data_z_mat.toarray().T)
table_z.columns = table_z.columns.astype(str)
scdata = magic.mg.SCData(table_z, data_type='sc-seq')
scdata = scdata.normalize_scseq_data()
scdata.run_magic(n_pca_components=k_z, random_pca=True,
t=6, k=30, ka=10, epsilon=1, rescale_percent=99)
magic_output_z = scdata.magic.data.as_matrix().T
table_10x = pd.DataFrame(data_10x_mat.toarray().T)
table_10x.columns = table_10x.columns.astype(str)
scdata = magic.mg.SCData(table_10x, data_type='sc-seq')
scdata = scdata.normalize_scseq_data()
scdata.run_magic(n_pca_components=k_10x, random_pca=True,
t=6, k=30, ka=10, epsilon=1, rescale_percent=99)
magic_output_10x = scdata.magic.data.as_matrix().T
print('Results for 10x imputation with Magic:')
MW = magic_output_10x
imputed_vals, rmse, corr, sp = calculate_imputation_error(sampled_rows_10x,
sampled_cols_10x,
indices_10x,
true_values_10x, MW)
print('Results for Zeisel imputation with Magic:')
imputed_values = []
MW = magic_output_z
imputed_vals, rmse, corr, sp = calculate_imputation_error(sampled_rows_z,
sampled_cols_z,
indices_z,
true_values_z, MW)
p_impute_vals = [0.2, 0.1, 0.05, 0.01, 0.005, 0.001]
results = pd.DataFrame(columns=['dataset', 'method', 'metric', 'p_impute'])
data_points = []
for p_impute in p_impute_vals:
print()
print('p_impute:', str(p_impute))
data_z_mat = data_z['Dat']
genes_z = uncurl.max_variance_genes(data_z_mat, 1, 0.2)
data_z_mat = data_z_mat[genes_z, :]
data_10x_mat = data_10x['data']
genes_10x = uncurl.max_variance_genes(data_10x_mat, 1, 0.2)
data_10x_mat = data_10x_mat[genes_10x, :]
data_z_mat = sparse.coo_matrix(data_z_mat)
data_10x_mat = sparse.coo_matrix(data_10x_mat)
indices_z = random.sample(range(len(data_z_mat.data)), int(p_impute*len(data_z_mat.data)))
indices_10x = random.sample(range(len(data_10x_mat.data)), int(p_impute*len(data_10x_mat.data)))
# save deleted data for comparison
true_values_z = data_z_mat.data[indices_z]
sampled_rows_z = data_z_mat.row[indices_z]
sampled_cols_z = data_z_mat.col[indices_z]
true_values_10x = data_10x_mat.data[indices_10x]
sampled_rows_10x = data_10x_mat.row[indices_10x]
sampled_cols_10x = data_10x_mat.col[indices_10x]
# do the data deletion
data_z_mat.data[indices_z] = 0
data_z_mat.eliminate_zeros()
data_z_mat = sparse.csc_matrix(data_z_mat)
data_10x_mat.data[indices_10x] = 0
data_10x_mat.eliminate_zeros()
data_10x_mat = sparse.csc_matrix(data_10x_mat)
# run uncurl
M_10x, W_10x, ll = uncurl.poisson_estimate_state(data_10x_mat, k_10x, threads=4, disp=False)
M_z, W_z, ll = uncurl.poisson_estimate_state(data_z_mat, k_z, threads=4, disp=False)
print('Results for 10x imputation with UNCURL:')
MW = M_10x.dot(W_10x)
imputed_vals, rmse, corr, sp = calculate_imputation_error(sampled_rows_10x,
sampled_cols_10x,
indices_10x,
true_values_10x, MW)
data_points.append(['10x', 'uncurl', 'rmse', rmse])
data_points.append(['10x', 'uncurl', 'corr', corr])
data_points.append(['10x', 'uncurl', 'sp', sp])
print()
print('Results for Zeisel imputation with UNCURL:')
MW = M_z.dot(W_z)
imputed_vals, rmse, corr, sp = calculate_imputation_error(sampled_rows_z,
sampled_cols_z,
indices_z,
true_values_z, MW)
data_points.append(['z', 'uncurl', 'rmse', rmse])
data_points.append(['z', 'uncurl', 'corr', corr])
data_points.append(['z', 'uncurl', 'sp', sp])
print()
# run MAGIC
table_z = pd.DataFrame(data_z_mat.toarray().T)
table_z.columns = table_z.columns.astype(str)
scdata = magic.mg.SCData(table_z, data_type='sc-seq')
scdata = scdata.normalize_scseq_data()
scdata.run_magic(n_pca_components=k_z, random_pca=True,
t=6, k=30, ka=10, epsilon=1, rescale_percent=99)
magic_output_z = scdata.magic.data.as_matrix().T
table_10x = pd.DataFrame(data_10x_mat.toarray().T)
table_10x.columns = table_10x.columns.astype(str)
scdata = magic.mg.SCData(table_10x, data_type='sc-seq')
scdata = scdata.normalize_scseq_data()
scdata.run_magic(n_pca_components=k_10x, random_pca=True,
t=6, k=30, ka=10, epsilon=1, rescale_percent=99)
magic_output_10x = scdata.magic.data.as_matrix().T
print()
print('Results for 10x imputation with Magic:')
MW = magic_output_10x
imputed_vals, rmse, corr, sp = calculate_imputation_error(sampled_rows_10x,
sampled_cols_10x,
indices_10x,
true_values_10x, MW)
data_points.append(['10x', 'magic', 'rmse', rmse])
data_points.append(['10x', 'magic', 'corr', corr])
data_points.append(['10x', 'magic', 'sp', sp])
print()
print('Results for Zeisel imputation with Magic:')
imputed_values = []
MW = magic_output_z
imputed_vals, rmse, corr, sp = calculate_imputation_error(sampled_rows_z,
sampled_cols_z,
indices_z,
true_values_z, MW)
data_points.append(['z', 'magic', 'rmse', rmse])
data_points.append(['z', 'magic', 'corr', corr])
data_points.append(['z', 'magic', 'sp', sp])
print()
results = pd.DataFrame(data_points, columns=['dataset', 'method', 'metric', 'p_impute'])
results.head()
# okay so this was an error... we have to correct the results.
results['value'] = results['p_impute']
results['p_impute'] = [x for y in [[p]*12 for p in p_impute_vals] for x in y]
results.head()
uncurl_rmse_10x = results[(results.method=='uncurl') &\
(results.dataset=='10x') &\
(results.metric=='rmse')]
uncurl_rmse_10x = uncurl_rmse_10x[['p_impute', 'value']]
uncurl_corr_10x = results[(results.method=='uncurl') &\
(results.dataset=='10x') &\
(results.metric=='corr')]
uncurl_corr_10x = uncurl_corr_10x[['p_impute', 'value']]
uncurl_sp_10x = results[(results.method=='uncurl') &\
(results.dataset=='10x') &\
(results.metric=='sp')]
uncurl_sp_10x = uncurl_sp_10x[['p_impute', 'value']]
uncurl_rmse_z = results[(results.method=='uncurl') &\
(results.dataset=='z') &\
(results.metric=='rmse')]
uncurl_rmse_z = uncurl_rmse_z[['p_impute', 'value']]
uncurl_corr_z = results[(results.method=='uncurl') &\
(results.dataset=='z') &\
(results.metric=='corr')]
uncurl_corr_z = uncurl_corr_z[['p_impute', 'value']]
uncurl_sp_z = results[(results.method=='uncurl') &\
(results.dataset=='z') &\
(results.metric=='sp')]
uncurl_sp_z = uncurl_sp_z[['p_impute', 'value']]
magic_rmse_10x = results[(results.method=='magic') &\
(results.dataset=='10x') &\
(results.metric=='rmse')]
magic_rmse_10x = magic_rmse_10x[['p_impute', 'value']]
magic_corr_10x = results[(results.method=='magic') &\
(results.dataset=='10x') &\
(results.metric=='corr')]
magic_corr_10x = magic_corr_10x[['p_impute', 'value']]
magic_sp_10x = results[(results.method=='magic') &\
(results.dataset=='10x') &\
(results.metric=='sp')]
magic_sp_10x = magic_sp_10x[['p_impute', 'value']]
magic_rmse_z = results[(results.method=='magic') &\
(results.dataset=='z') &\
(results.metric=='rmse')]
magic_rmse_z = magic_rmse_z[['p_impute', 'value']]
magic_corr_z = results[(results.method=='magic') &\
(results.dataset=='z') &\
(results.metric=='corr')]
magic_corr_z = magic_corr_z[['p_impute', 'value']]
magic_sp_z = results[(results.method=='magic') &\
(results.dataset=='z') &\
(results.metric=='sp')]
magic_sp_z = magic_sp_z[['p_impute', 'value']]
plt.figure(dpi=100)
plt.semilogx(uncurl_rmse_10x.p_impute, uncurl_rmse_10x.value, '--o', label='uncurl')
plt.semilogx(magic_rmse_10x.p_impute, magic_rmse_10x.value, '--o', label='magic')
plt.legend()
plt.title('RMSE of imputed values for UNCURL and Magic')
plt.xlabel('Fraction removed')
plt.ylabel('RMSE')
plt.figure(dpi=100)
plt.semilogx(uncurl_corr_10x.p_impute, uncurl_corr_10x.value, '--o', label='uncurl')
plt.semilogx(magic_corr_10x.p_impute, magic_corr_10x.value, '--o', label='magic')
plt.legend()
plt.title('Correlation of imputed values vs actual values for UNCURL and Magic')
plt.xlabel('Fraction removed')
plt.ylabel('Correlation')
plt.figure(dpi=100)
plt.semilogx(uncurl_sp_10x.p_impute, uncurl_sp_10x.value, '--o', label='uncurl')
plt.semilogx(magic_sp_10x.p_impute, magic_sp_10x.value, '--o', label='magic')
plt.legend()
plt.title('Spearman Correlation of imputed values vs actual values for UNCURL and Magic')
plt.xlabel('Fraction removed')
plt.ylabel('Spearman Correlation')
plt.figure(dpi=100)
plt.semilogx(uncurl_rmse_z.p_impute, uncurl_rmse_z.value, '--o', label='uncurl')
plt.semilogx(magic_rmse_z.p_impute, magic_rmse_z.value, '--o', label='magic')
plt.legend()
plt.title('RMSE of imputed values for UNCURL and Magic')
plt.xlabel('Fraction removed')
plt.ylabel('RMSE')
plt.figure(dpi=100)
plt.semilogx(uncurl_corr_z.p_impute, uncurl_corr_z.value, '--o', label='uncurl')
plt.semilogx(magic_corr_z.p_impute, magic_corr_z.value, '--o', label='magic')
plt.legend()
plt.title('Correlation of imputed values vs actual values for UNCURL and Magic')
plt.xlabel('Fraction removed')
plt.ylabel('Correlation')
plt.figure(dpi=100)
plt.semilogx(uncurl_sp_z.p_impute, uncurl_sp_z.value, '--o', label='uncurl')
plt.semilogx(magic_sp_z.p_impute, magic_sp_z.value, '--o', label='magic')
plt.legend()
plt.title('Spearman Correlation of imputed values vs actual values for UNCURL and Magic')
plt.xlabel('Fraction removed')
plt.ylabel('Spearman Correlation')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading data
Step2: Artificially inducing missing values
Step3: Perform UNCURL for imputation
Step5: Imputation with MW
Step6: Imputation test with MAGIC
Step7: Try with varying missing value fractions
|
15,350 | <ASSISTANT_TASK:>
Python Code:
# Load libraries
import numpy as np
from sklearn import linear_model, datasets
from sklearn.model_selection import GridSearchCV
# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Create logistic regression
logistic = linear_model.LogisticRegression()
# Create regularization penalty space
penalty = ['l1', 'l2']
# Create regularization hyperparameter space
C = np.logspace(0, 4, 10)
# Create hyperparameter options
hyperparameters = dict(C=C, penalty=penalty)
# Create grid search using 5-fold cross validation
clf = GridSearchCV(logistic, hyperparameters, cv=5, verbose=0)
# Fit grid search
best_model = clf.fit(X, y)
# View best hyperparameters
print('Best Penalty:', best_model.best_estimator_.get_params()['penalty'])
print('Best C:', best_model.best_estimator_.get_params()['C'])
# Predict target vector
best_model.predict(X)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Iris Dataset
Step2: Create Logistic Regression
Step3: Create Hyperparameter Search Space
Step4: Create Grid Search
Step5: Conduct Grid Search
Step6: View Hyperparameter Values Of Best Model
Step7: Predict Using Best Model
|
15,351 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pylab as plt
plt.rcParams['figure.figsize'] = (15.0, 10.0)
import pandas as pd
import seaborn as sns
data = pd.read_csv("978-3-319-12065-2/chapter-4/teams.csv")
data.head()
data.plot(kind='scatter',x="payroll",y="wins")
sns.regplot("payroll","wins",data,fit_reg=False)
def label_point_orig(x, y, val, ax):
a = pd.concat({'x': x, 'y': y, 'val': val}, axis=1)
for i, point in a.iterrows():
ax.text(point['x'], point['y'], str(point['val']))
ax = data.plot(kind='scatter',x="payroll",y="wins")
label_point_orig(data.payroll,data.wins,data.code,ax)
ax1 = sns.regplot("payroll","wins",data,fit_reg=False)
label_point_orig(data.payroll,data.wins,data.code,ax1)
data_al = data[data.league == "AL"]
data_nl = data[data.league == "NL"]
fig, ax = plt.subplots()
ax.plot(data_al.payroll, data_al.wins, marker='o',linestyle='',label="AL")
ax.plot(data_nl.payroll, data_nl.wins, marker='s',linestyle='',label="NL")
ax.legend()
ax.set_xlabel("payroll")
ax.set_ylabel("wins")
data_groups = data.groupby('league')
markers = ["o","s"]
mindx = 0
fig, ax = plt.subplots()
for name, group in data_groups:
ax.plot(group.payroll, group.wins, marker=markers[mindx], linestyle='', ms=12, label=name)
mindx += 1
ax.legend()
ax.set_xlabel("payroll")
ax.set_ylabel("wins")
data_gf_al = data[data.pct >= 0.5 ]
data_gf_nl = data[data.pct <= 0.5]
fig, ax = plt.subplots()
ax.plot(data_gf_al.payroll,data_gf_al.wins,marker='o',linestyle='')
ax.plot(data_gf_nl.payroll,data_gf_nl.wins,marker='s',linestyle='')
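# 81 wins is the .500 mark of a 162-game MLB season; the horizontal line below
# separates winning teams from losing teams.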
ax.plot([data.payroll.min(),data.payroll.max()],[81,81])
data_groups = data.groupby('league')
markers = ["o","^"]
mindx = 0
fig, ax = plt.subplots()
for name, group in data_groups:
ax.plot(group.payroll, group.wins, marker=markers[mindx], linestyle='', ms=12, label=name)
mindx += 1
ax.legend()
ax.set_xlabel("payroll")
ax.set_ylabel("wins")
data[data.league == "NL"].payroll.sum()
data.groupby("league").payroll.sum()
data.groupby(["league","division"]).payroll.sum()
data.groupby("league").payroll.sum().plot(kind='bar')
data.groupby(["league","division"]).payroll.sum().plot(kind='bar',stacked=True)
data.groupby(["league","division"]).payroll.sum().unstack("division").plot(kind='bar',stacked=True)
data.groupby(["league","division"]).payroll.sum().unstack("division").plot(kind='bar')
data.groupby(["league"]).payroll.sum().plot(kind='pie')
data.groupby(["league","division"]).payroll.sum().plot(kind='pie')
data_al = data[data.league == "AL"]
data_nl = data[data.league == "NL"]
fig, ax = plt.subplots(2)
ax[0].plot(data_al.payroll, data_al.wins, marker='o', linestyle='', ms=12, label="AL")
ax[1].plot(data_nl.payroll, data_nl.wins, marker='o', linestyle='', ms=12, label="NL")
from bokeh.sampledata.iris import flowers
from bokeh.plotting import figure, show, output_file,output_notebook
output_notebook()
colormap = {'AL': 'red', 'NL': 'green'}
data['color'] = data['league'].map(lambda x: colormap[x])
p = figure(title = "Payroll vs Wins")
p.xaxis.axis_label = 'Wins'
p.yaxis.axis_label = 'Payroll'
p.circle(data.payroll,data.wins,color=data["color"],size=10)
show(p)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 4.1 Introduction
Step2: 4.2 Basic Visualizations
Step3: 4.2.1.1 Labeling Data Points
Step4: 4.2.1.2 Points and Lines
Step5: 4.2.2 Visualizing Aggregate Values with Bar plots and Pie charts
Step6: 4.2.2.2 Bar Plots
Step7: 4.2.3 Common Plotting Tasks
Step8: 4.4 Interactive Visualizations Using Bokeh
|
15,352 | <ASSISTANT_TASK:>
Python Code:
x = np.array([1, 2, 3, 4])
x, np.shape(x)
x = np.array([[1], [2], [3], [4]])
x, np.shape(x)
X = np.array([[11,12,13],[21,22,23]])
X
np.diag([3, 4, 1])
np.identity(3)
np.eye(5)
X = np.array([[11,12,13],[21,22,23]])
X
X.T
x = np.array([10, 11, 12, 13, 14])
x
y = np.array([0, 1, 2, 3, 4])
y
x + y
x - y
x = np.array([1,2,3])
y = np.array([4,5,6])
np.dot(x,y)
x = np.array([[1], [2], [3]])
y = np.array([[4], [5], [6]])
np.dot(x.T, y)
x, y, x.T
A = np.array([[1, 2, 3], [4, 5, 6]])
B = np.array([[1, 2], [3, 4], [5, 6]])
C = np.dot(A, B)
A
B
C
from sklearn.datasets import make_regression
X, y = make_regression(4, 3)
X
y
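# Least-squares fit: w minimizes ||y - Xw||^2 for the overdetermined system Xw = y.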
w = np.linalg.lstsq(X, y)[0]
w
e = y - np.dot(X, w)
e
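# Residual sum of squares, written as the quadratic form e^T e.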
np.dot(e.T,e)
x = np.array([1,2,3])
x
A = np.arange(1, 10).reshape(3,3)
A
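# Quadratic form: np.dot(x, A) gives the row vector x^T A, and dotting that with x
# again yields the scalar x^T A x.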
np.dot(x, A)
np.dot(np.dot(x, A), x)
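# Quick sanity check (w_ne below is an illustrative new variable, not part of the
# original example): the normal-equations formula (X^T X)^{-1} X^T y should agree
# with the np.linalg.lstsq solution w computed above.
w_ne = np.dot(np.linalg.inv(np.dot(X.T, X)), np.dot(X.T, y))
np.allclose(w, w_ne)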
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Matrices
Step2: Special matrices
Step3: A diagonal matrix whose diagonal entries are all 1 is called the identity matrix. The identity matrix is usually written with the capital letter $I$.
Step4: Operations
Step5: Row and column notation for matrices
Step6: Vector multiplication
Step7: Sum of squares
Step8: So where is this kind of matrix multiplication used in data analysis? Let's look at a few examples.
Step9: Residual sum of squares
Step10: Quadratic form
|
15,353 | <ASSISTANT_TASK:>
Python Code:
from pathlib import Path
from freqtrade.configuration import Configuration
# Customize these according to your needs.
# Initialize empty configuration object
config = Configuration.from_files([])
# Optionally, use existing configuration file
# config = Configuration.from_files(["config.json"])
# Define some constants
config["timeframe"] = "5m"
# Name of the strategy class
config["strategy"] = "SampleStrategy"
# Location of the data
data_location = Path(config['user_data_dir'], 'data', 'binance')
# Pair to analyze - Only use one pair here
pair = "BTC_USDT"
# Load data using values set above
from freqtrade.data.history import load_pair_history
candles = load_pair_history(datadir=data_location,
timeframe=config["timeframe"],
pair=pair)
# Confirm success
print("Loaded " + str(len(candles)) + f" rows of data for {pair} from {data_location}")
candles.head()
# Load strategy using values set above
from freqtrade.resolvers import StrategyResolver
strategy = StrategyResolver.load_strategy(config)
# Generate buy/sell signals using strategy
df = strategy.analyze_ticker(candles, {'pair': pair})
df.tail()
# Report results
print(f"Generated {df['buy'].sum()} buy signals")
data = df.set_index('date', drop=False)
data.tail()
from freqtrade.data.btanalysis import load_backtest_data, load_backtest_stats
# if backtest_dir points to a directory, it'll automatically load the last backtest file.
backtest_dir = config["user_data_dir"] / "backtest_results"
# backtest_dir can also point to a specific file
# backtest_dir = config["user_data_dir"] / "backtest_results/backtest-result-2020-07-01_20-04-22.json"
# You can get the full backtest statistics by using the following command.
# This contains all information used to generate the backtest result.
stats = load_backtest_stats(backtest_dir)
strategy = 'SampleStrategy'
# All statistics are available per strategy, so if `--strategy-list` was used during backtest, this will be reflected here as well.
# Example usages:
print(stats['strategy'][strategy]['results_per_pair'])
# Get pairlist used for this backtest
print(stats['strategy'][strategy]['pairlist'])
# Get market change (average change of all pairs from start to end of the backtest period)
print(stats['strategy'][strategy]['market_change'])
# Maximum drawdown ()
print(stats['strategy'][strategy]['max_drawdown'])
# Maximum drawdown start and end
print(stats['strategy'][strategy]['drawdown_start'])
print(stats['strategy'][strategy]['drawdown_end'])
# Get strategy comparison (only relevant if multiple strategies were compared)
print(stats['strategy_comparison'])
# Load backtested trades as dataframe
trades = load_backtest_data(backtest_dir)
# Show value-counts per pair
trades.groupby("pair")["sell_reason"].value_counts()
from freqtrade.data.btanalysis import load_trades_from_db
# Fetch trades from database
trades = load_trades_from_db("sqlite:///tradesv3.sqlite")
# Display results
trades.groupby("pair")["sell_reason"].value_counts()
from freqtrade.data.btanalysis import analyze_trade_parallelism
# Analyze the above
parallel_trades = analyze_trade_parallelism(trades, '5m')
parallel_trades.plot()
from freqtrade.plot.plotting import generate_candlestick_graph
# Limit graph period to keep plotly quick and reactive
# Filter trades to one pair
trades_red = trades.loc[trades['pair'] == pair]
data_red = data['2019-06-01':'2019-06-10']
# Generate candlestick graph
graph = generate_candlestick_graph(pair=pair,
data=data_red,
trades=trades_red,
indicators1=['sma20', 'ema50', 'ema55'],
indicators2=['rsi', 'macd', 'macdsignal', 'macdhist']
)
# Show graph inline
# graph.show()
# Render graph in a separate window
graph.show(renderer="browser")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and run strategy
Step2: Display the trade details
Step3: Load existing objects into a Jupyter notebook
Step4: Load live trading results into a pandas dataframe
Step5: Analyze the loaded trades for trade parallelism
Step6: Plot results
|
15,354 | <ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in list(data.items()):
print(('%s: ' % k, v.shape))
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print('Testing affine_forward function:')
print('difference: ', rel_error(out, correct_out))
# Test the affine_backward function
np.random.seed(231)
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print('Testing affine_backward function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 5e-8
print('Testing relu_forward function:')
print('difference: ', rel_error(out, correct_out))
np.random.seed(231)
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 3e-12
print('Testing relu_backward function:')
print('dx error: ', rel_error(dx_num, dx))
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print('Testing affine_relu_forward:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
np.random.seed(231)
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print('Testing svm_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print('\nTesting softmax_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
np.random.seed(231)
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-3
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print('Testing initialization ... ')
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print('Testing test-time forward pass ... ')
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print('Testing training loss (no regularization)')
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
print('Running numeric gradient check with reg = ', reg)
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
model = TwoLayerNet()
solver = None
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
solver = Solver(model, data,
update_rule='sgd',
optim_config={
'learning_rate': 1e-3,
},
lr_decay=0.95,
num_epochs=10, batch_size=100,
print_every=100)
solver.train()
##############################################################################
# END OF YOUR CODE #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 1e-2
learning_rate = 1e-2
model = FullyConnectedNet([100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
# TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
learning_rate = 1e-3
weight_scale = 1e-1
model = FullyConnectedNet([100, 100, 100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
from cs231n.optim import sgd_momentum
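# sgd_momentum is expected to implement classic momentum:
# v = momentum * v - learning_rate * dw; next_w = w + v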
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print('next_w error: ', rel_error(next_w, expected_next_w))
print('velocity error: ', rel_error(expected_velocity, config['velocity']))
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('cache error: ', rel_error(expected_cache, config['cache']))
# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam
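# Adam keeps exponentially decaying averages of the gradient (m) and of its square (v),
# applies bias correction, and updates with m_hat / (sqrt(v_hat) + eps).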
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('v error: ', rel_error(expected_v, config['v']))
print('m error: ', rel_error(expected_m, config['m']))
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #
# find batch normalization and dropout useful. Store your best model in the #
# best_model variable. #
################################################################################
model = FullyConnectedNet([100, 100, 100, 100], weight_scale=1e-2)
solver = Solver(model, data,
num_epochs=10, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 1e-3
},
print_every=100,
verbose=True)
solver.train()
best_model = model
################################################################################
# END OF YOUR CODE #
################################################################################
y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)
y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)
print('Validation set accuracy: ', (y_val_pred == data['y_val']).mean())
print('Test set accuracy: ', (y_test_pred == data['y_test']).mean())
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fully-Connected Neural Nets
Step4: Affine layer
Step5: Affine layer
Step6: ReLU layer
Step7: ReLU layer
Step8: "Sandwich" layers
Step9: Loss layers
Step10: Two-layer network
Step11: Solver
Step12: Multilayer network
Step13: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
Step14: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
Step15: Inline question
Step16: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
Step17: RMSProp and Adam
Step18: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules
Step19: Train a good model!
Step20: Test your model
|
15,355 | <ASSISTANT_TASK:>
Python Code:
# Import the required packages
import pandas as pd
import seaborn as sns
# Fetch daily data for 600196.XSHG through January 2015, keeping only the open and close fields
df = get_price('600196.XSHG', start_date='2014-01-01', end_date='2015-01-31', frequency='daily', fields=['open','close'])
returns = df.pct_change().dropna()
sns.distplot(returns.iloc[:,0:1])
pingan = get_price('000001.XSHE', start_date='2014-01-01', end_date='2015-02-01', frequency='daily', fields=['open'])
fuxing = get_price('600196.XSHG', start_date='2014-01-01', end_date='2015-02-01', frequency='daily', fields=['open'])
returns_pingan = pingan.pct_change().dropna()
returns_fuxing = fuxing.pct_change().dropna()
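# jointplot shows the scatter of the two daily return series with their marginal
# distributions; kind='reg' overlays a linear regression fit.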
sns.jointplot(returns_pingan['open'], returns_fuxing['open'], kind='reg', size=12)
df = get_price(get_industry_stocks('A01'), fields=('close',))['close']
df
df1 = df.iloc[:,0:5]
df1
returns = df1.pct_change().dropna()
sns.distplot(returns.iloc[:,0:1])
sns.violinplot(returns,size=24)
sns.pairplot(returns, diag_kind='kde', size=2.4)
sns.heatmap(returns.corr())
sns.clustermap(returns.corr())
sns.clustermap?
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example stock return curve
Step2: Stock correlation analysis
Step3: From the plots above you can judge whether a potential investment opportunity exists between the two stocks; if the scatter points are evenly distributed around the line, the two stocks are strongly correlated.
Step4: Use pct_change() to compute returns, dropna to remove missing values, and distplot to draw the histogram.
Step5: If several groups of data need to be compared through their distributions, violinplot can be used to draw a violin plot.
Step6: Use pairplot to draw scatter plots and analyze the pairwise relationships.
Step7: More correlation analysis methods
Step8: You can also use clustermap to draw a clustering plot that groups similar series together; consult machine learning references to learn more about clustering.
|
15,356 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import datetime as dt
from operator import itemgetter
import math
%matplotlib inline
f= open('recs2009_public.csv','r')
datanames = np.genfromtxt(f,delimiter=',', names=True,dtype=None)
data1 = np.genfromtxt('recs2009_public.csv',delimiter=',', skip_header=1)
midatlantic = datanames[np.where(datanames['DIVISION']==2)]
# print midatlantic[0]
print midatlantic.shape
wesouthcen = datanames[np.where(datanames['DIVISION']==7)]
# wesouthcen[0]
print wesouthcen.shape
plt.plot(midatlantic['TOTALBTU'], 'rd')
plt.plot(wesouthcen['TOTALBTU'], 'bd')
plt.hist(midatlantic['TOTALBTU'],bins=100)
plt.plot(newdata['TOTALBTUSPH'],newdata['TOTALDOLSPH'], 'rd')
plt.xlabel('Space Heating Energy consumption (BTU)')
plt.ylabel('Total cost for space heating ($)')
xi = np.arange(0,1328)
A = np.array([ xi, np.ones(1328)])
# linearly generated sequence
y = midatlantic['TOTALBTU']
# obtaining the parameters
w = np.linalg.lstsq(A.T,y)[0]
xa = np.arange(0,1328,5)
y = y[0:-1:5]
# plotting the regression line
line = w[0]*xa+w[1]
plt.plot(xa,line,'ro',xa,y)
plt.title('Linear least squares fit line')
plt.ylabel('Total energy usage (BTU)')
plt.show()
print "Average value of energy consumption (BTU):"
print np.average(y)
names = np.genfromtxt('public_layout.csv', delimiter=',',skip_header=1,dtype=None,usecols=[1])
print names
np.corrcoef(midatlantic['WINDOWS'],midatlantic['TOTALBTU'])[1,0]
np.corrcoef(midatlantic['TOTSQFT_EN'],midatlantic['TOTALBTU'])[1,0]
np.corrcoef(midatlantic['TEMPHOME'],midatlantic['TOTALBTU'])[1,0]
np.corrcoef(midatlantic['NWEIGHT'],midatlantic['TOTALBTU'])[1,0]
years = lambda d : ((dt.datetime.now()).year - d)
yearsold = np.array(list(map(years, midatlantic['YEARMADE'])))
midatlantic['YEARMADE']
print yearsold
np.corrcoef(midatlantic['YEARMADE'],midatlantic['TOTALBTU'])[1,0]
np.corrcoef(midatlantic['TOTROOMS'],midatlantic['TOTALBTU'])[1,0]
np.corrcoef(midatlantic['NHSLDMEM'],midatlantic['TOTALBTU'])[1,0]
np.corrcoef(midatlantic['MONEYPY'],midatlantic['TOTALBTU'])[1,0]
np.corrcoef(midatlantic['STORIES'],midatlantic['TOTALBTU'])[1,0]
np.corrcoef(midatlantic['WASHTEMP'],midatlantic['TOTALBTU'])[1,0]
data1_ma = data1[(np.where(data1[:,2]==2))]
def bestcorrelation(X):
vector = np.zeros((len(X.T), 2))
for i in range(len(X.T)):
vector[i,0] = int(i)
vector[i,1] = np.corrcoef(X[:,i],X[:,907])[1,0]
return vector
v = bestcorrelation(data1_ma)
plt.plot(v[:,1])
highcorr = v[(np.where(v[:,1]>=0.47))]
print "Variable with correlation values greater than 0.53: "
print highcorr
fig = plt.figure(1)
fig.set_size_inches(15, 4)
ax1 = fig.add_subplot(1,3,1)
ax1.plot((data[:,0]),(data[:,3]),'ro')
ax1.set_title("Total sqft")
ax1.set_ylabel("Energy consumption (BTU)")
ax2 = fig.add_subplot(1,3,2)
ax2.plot((data[:,1]),(data[:,3]),'bo')
ax2.set_title("Total rooms")
ax2.set_ylabel("Energy consumption (BTU)")
ax3 = fig.add_subplot(1,3,3)
ax3.plot((data[:,2]),(data[:,3]),'ro')
ax3.set_title("Total windows")
ax3.set_ylabel("Energy consumption (BTU)")
plt.show()
def designmatrix(var1, var2, var3):
designmatrix = np.vstack((var1, var2, var3))
designmatrix = designmatrix.T
return designmatrix
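# Ordinary least squares via the normal equations: beta_hat = (X^T X)^{-1} X^T Y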
def beta_hat(X,Y):
dotp = np.dot(X.T,X)
Ainv = np.linalg.inv(dotp)
final = np.dot(Ainv,X.T)
final = np.dot(final,Y)
return final
def R2(X,Y,beta_hat):
m2 = Y-np.dot(X,beta_hat)
m1 = m2.T
y_avg =np.mean(Y)
n2 = Y - y_avg
n1 = n2.T
R2_value = 1 - ((np.dot(m1,m2))/(np.dot(n1,n2)))
return R2_value
R2_max = 0
for k in range(150000,400000,10000):
newdata = midatlantic[np.where(midatlantic['TOTALBTU']<k)]
data = newdata['TOTSQFT_EN'],newdata['TOTROOMS'],newdata['WINDOWS'],newdata['TOTALBTU']
data = np.transpose(data)
data_sorted = sorted(data, key=itemgetter(1))
#Divide
data = data[0:-1]
data_train = data[::2]
data_test = data[1::2]
#Train dataset
area_train = data_train[:,0]
rooms_train = data_train[:,1]
windows_train = data_train[:,2]
btu_train = data_train[:,3]
dmx1 = designmatrix(area_train,rooms_train,windows_train)
beta_hat1 = beta_hat(dmx1,btu_train)
#Test dataset
area_test = data_test[:,0]
rooms_test = data_test[:,1]
windows_test = data_test[:,2]
btu_test = data_test[:,3]
dmx2 = designmatrix(area_test,rooms_test,windows_test)
btu_pre = np.dot(dmx2,beta_hat1)
R2_val = R2(dmx2,btu_test,beta_hat1)
plt.plot(k,R2_val,'ro')
plt.title('Distribution of R2 values')
plt.xlabel('Cutoff values of outlier (k)')
plt.ylabel('R2 value')
if R2_max < R2_val:
R2_max = R2_val
k_max = k
else:
R2_max = R2_max
k_max = k_max
print "Maximum value of R2: ",R2_max
print "At k value (k_max): ",k_max
btu_test.shape
newdata = midatlantic[np.where(midatlantic['TOTALBTU']<k_max)]
data = newdata['TOTSQFT_EN'],newdata['TOTROOMS'],newdata['WINDOWS'],newdata['TOTALBTU']
data = np.transpose(data)
# Data is sorted on number of total rooms
data_sorted = sorted(data, key=itemgetter(1))
# Divide alternative values are taken henceforth for train and test dataset
data_sorted = np.array(data_sorted[0:-1])
data_train1 = np.array(data_sorted[::2])
data_test1 = np.array(data_sorted[1::2])
data_sorted
def validation(data_train,data_test):
#Train dataset
btu_train = data_train[:,3]
dmx1 = designmatrix(data_train[:,0],data_train[:,1],data_train[:,2])
beta_hat1 = beta_hat(dmx1,btu_train)
#Test dataset
btu_test = data_test[:,3]
dmx2 = designmatrix(data_test[:,0],data_test[:,1],data_test[:,2])
btu_pre = np.dot(dmx2,beta_hat1)
R2_val = R2(dmx2,btu_test,beta_hat1)
print "R2 value is: ",R2_val
plt.plot(data_test[:,0],btu_test,'.b')
plt.plot(data_test[:,0],btu_pre,'.r')
plt.legend(['Actual data','Predicted data'])
plt.title('Validation of model')
print "Beta matrix:",beta_hat1
return (beta_hat1, R2_val)
beta1, R2_1 = validation(data_train1,data_test1)
print np.mean(data_test[:,0])
print np.mean(data_train[:,0])
print np.mean(data_test[:,1])
print np.mean(data_train[:,1])
print data_sorted
first = np.array(data_sorted[::3])
second = np.array(data_sorted[1::3])
third = np.array(data_sorted[2::3])
print "First dataset[0]:",first[0]
print "Second dataset[0]:",second[0]
print "Third dataset[0]:",third[0]
data_train2 = np.vstack((first,second))
data_test2 = np.array(third)
print "Second split of datasets"
print data_train2.shape
print data_test2.shape
data_train3 = np.vstack((first,third))
data_test3 = np.array(second)
print "Third split of datasets"
print data_train3.shape
print data_test3.shape
data_train4 = np.vstack((third,second))
data_test4 = np.array(first)
print "Fourth split of datasets"
print data_train4.shape
print data_test4.shape
beta2, R2_2 = validation(data_train2,data_test2)
beta3, R2_3 = validation(data_train3,data_test3)
beta4, R2_4 = validation(data_train4,data_test4)
l = [R2_1,R2_2,R2_3,R2_4]
R2_avg = np.mean(l)
print "Mean R2 value: ",R2_avg
beta_avg = np.mean([beta1,beta2,beta3,beta4],axis=0)
print "Mean Beta_hat matrix: ",beta_avg
# calculating error matrix: (Y-XB)
btu_test = data_test1[:,3]
dmx2 = designmatrix(data_test1[:,0],data_test1[:,1],data_test1[:,2])
error = btu_test - np.dot(dmx2,beta_avg)
# defining N for the number of data points in the test dataset
N = error.size
# defining the number of co-efficients in the beta_hat matrix
p = beta_avg.size
X = dmx2
print "N=",N
print "p=",p
#squaring of error matrix is calculated by multiplying by its transpose
errormatrix = (np.dot(error,error.T))/(N-p-1)
# print "Standard mean error:",errormatrix
s_var = errormatrix*(np.linalg.inv(np.dot(X.T,X)))
# print s_var
import math
sqrt = lambda d: (math.sqrt(d))
s_dev = map(sqrt,np.diag(s_var))
# s_dev
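# 95% confidence interval for each coefficient: beta_hat_j +/- t_{0.975, N-p-1} * s_dev_j,
# where s_dev_j is the square root of the j-th diagonal entry of sigma^2 (X^T X)^{-1}.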
from scipy.stats import t
T_val = t.isf((1-0.95)/2,(N-p-1))
max_val = beta_avg + np.dot(T_val,s_dev)
min_val = beta_avg - np.dot(T_val,s_dev)
print "Base value: "+str(np.round(beta_avg, decimals=1))
print "Maximum value: "+str(np.round(max_val, decimals=1))
print "Minimum value: "+str(np.round(min_val, decimals=1))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the RECS dataset into memory.
Step2: Preliminary analysis of dataset
Step3: 'TOTALBTU' column represents the total energy consumption including electricity and other fuels like natural gas. Each regional dataset is plotted to observe the individual trends and to get a comparative picture.
Step4: The individual trends are similar and show an almost linear horizontal line.
Step5: Space heating energy consumption is analyzed against the dollar cost for space heating use to observe the correlation and check if it can be used for regression modeling.
Step6: Plotting a linear least squares fit line.
Step7: The least-squares fit line is almost horizontal, suggesting the data are spread fairly uniformly around the mean value of 104,896 BTU.
Step8: Different variables are checked for their correlation with total energy consumption (TOTALBTU), chosen from a manual understanding of the variables, as shown below.
Step9: Result
Step10: Multivariable regression modeling for midatlantic residential energy consumption
Step11: Base functions for building the design matrix, computing beta_hat, and evaluating the R2 coefficient are defined for multi-variable regression modeling.
Step12: To remove the outliers, 'k' is defined as the cutoff above which the data will be trimmed. A 'for' loop is run below to optimize the 'k' value to obtain the maximum value of the R2 coefficient.
Step13: Using the results from above, the final dataset is created by keeping only records with total consumption below k_max, i.e., removing the outliers above it.
Step14: Split the final dataset into train and test data
Step15: Validation
Step16: Mean of one variable is compared for both test and train dataset to check for significant difference between them.
Step17: Cross-validation
Step18: Three pairs of train and test datasets are created from the three splits for cross-validation.
Step19: Final Result
Step20: Calculate uncertainties using 95% confidence intervals corresponding to t-distribution
|
15,357 | <ASSISTANT_TASK:>
Python Code:
def sinusoid(t, amp, period, phase):
"""
A generic sinusoidal curve. 'period' and 't' should have
the same units (e.g., days), and phase should be in
radians.

Parameters
----------
t : array_like
    Array of times.
amp : numeric
    Amplitude of the sinusoid.
period : numeric
    Period of the sinusoid.
phase : numeric
    Phase of the sinusoid.
"""
return amp*np.sin(2*np.pi*t/period + phase)
def light_curve_model(p, t):
"""
Our model for the variable star light curve will be a
pure sinusoid plus some constant offset (the mean
magnitude of the star). The function takes a single array
of parameters, p, and an array of times, t. By structuring
the function input parameters this way, we can use this
function to both generate and later fit the data.

Parameters
----------
p : iterable
    A list, tuple, or array of model parameter values. For
    example, a tuple of (amplitude, period, phase, mean mag.).
t : array_like
    Array of times.
"""
amp, period, phase, const = p
return sinusoid(t, amp, period, phase) + const
ndata_points = 32 # number of data points
outlier_fraction = 0.1 # 10% of the points will be outliers
true_amplitude = 1.5 # mag
true_period = 112. # days
true_phase = 1.5 # radians
true_mean_mag = 14. # mag
# pack the true parameters into a single tuple
true_params = (true_amplitude, true_period, true_phase, true_mean_mag)
# generate an array of observation times
time = np.random.uniform(0., 365., size=ndata_points)
time.sort()
# generate magnitude values from the model at the observation times
mag = light_curve_model(true_params, time)
# each data point will have a different uncertainty, sampled from
# a uniform distribution between 0.2 and 0.4 magnitudes
mag_err = np.random.uniform(0.2, 0.4, size=ndata_points)
# pick outlier points based on the set outlier_fraction. we generate a
# boolean array (array of True's and False's) -- when a given index
# is True, that point will become an outlier
outlier_idx = np.random.uniform(size=ndata_points) < outlier_fraction
# for the outlier points, add large scatter
mag[outlier_idx] += np.random.normal(0., 5., size=sum(outlier_idx))
# for the non-outlier points, add scatter based on the uncertainty array (mag_err).
# the twiddle (~) means 'logical not' - (True becomes False, False becomes True)
mag[~outlier_idx] += np.random.normal(0., mag_err[~outlier_idx])
plt.figure(figsize=(12,4))
plt.errorbar(time, mag, mag_err, marker='o',
linestyle='none', ecolor='#aaaaaa')
plt.xlim(0,365)
plt.xlabel("Time [day]")
plt.ylabel("Magnitude")
def lnprior(p):
amp,period,phase,const = p
if amp < 1 or amp > 2:
return -np.inf
if period < 10 or period > 200:
return -np.inf
if phase < 0. or phase > 2*np.pi:
return -np.inf
if const < 12 or const > 16:
return -np.inf
return 0.
def lnlikelihood(p, t, data, err):
amp,period,phase,const = p
return -np.log(err) - 0.5*( (data - sinusoid(t, amp, period, phase) - const) / err )**2
def lnprob(p, t, data, err):
return lnprior(p) + np.sum(lnlikelihood(p, t, data, err))
ndim, nwalkers = 4, 32
p0 = np.zeros((nwalkers,ndim))
p0[:,0] = np.random.uniform(1, 2., size=nwalkers) # amp
p0[:,1] = np.random.uniform(10, 200., size=nwalkers) # period (days)
p0[:,2] = np.random.uniform(0., 2*np.pi, size=nwalkers) # phase (radians)
p0[:,3] = np.random.uniform(12., 16., size=nwalkers) # const. offset (mag)
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob,
args=(time,mag,mag_err))
pos, prob, state = sampler.run_mcmc(p0, 1000)
best_pos = sampler.flatchain[sampler.flatlnprobability.argmax()]
pos = emcee.utils.sample_ball(best_pos, best_pos/100., size=nwalkers)
sampler.reset()
pos, prob, state = sampler.run_mcmc(pos, 100)
sampler.reset()
pos, prob, state = sampler.run_mcmc(pos, 1000)
param_labels = ["Amp.", "Period [day]", "Phase [rad]", "Mean mag."]
extents = [(0.5*truth,1.5*truth) for truth in true_params]
fig = triangle.corner(sampler.flatchain,
labels=param_labels,
truths=true_params,
range=extents)
nsamples = sampler.flatchain.shape[0]
plt.figure(figsize=(12,4))
plt.errorbar(time, mag, mag_err, marker='o',
linestyle='none', ecolor='#aaaaaa')
t = np.linspace(min(time), max(time), 1000)
for ii in range(10):
idx = np.random.randint(0, nsamples)
params = sampler.flatchain[idx]
model_mag = light_curve_model(params, t)
plt.plot(t, model_mag,
marker=None, color='#f03b20', alpha=0.4)
plt.xlim(min(time), max(time))
def lnprior(p):
amp = p[0]
period = p[1]
phase = p[2]
const = p[3]
fout = p[4]
if amp < 1 or amp > 2:
return -np.inf
if period < 100 or period > 200:
return -np.inf
if phase < 0. or phase > 2*np.pi:
return -np.inf
if const < 12 or const > 16:
return -np.inf
if fout > 1. or fout < 0.:
return -np.inf
return 0.
def ln_model_likelihood(p, t, data, err):
amp, period, phase, const, outlier_prob = p
term = -np.log(err) - 0.5*( (data - sinusoid(t, amp, period, phase) - const) / err )**2
return term
def ln_outlier_likelihood(p, t, data, err):
amp, period, phase, const, outlier_prob = p
outlier_err = 10.*np.median(err)
term = -np.log(outlier_err) - 0.5*( (data - sinusoid(t, amp, period, phase) - const) / outlier_err )**2
return term
def lnlikelihood(p, t, data, err):
amp, period, phase, const, fout = p
term1 = ln_model_likelihood(p, t, data, err)
term2 = ln_outlier_likelihood(p, t, data, err)
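    # Mixture model: each point is an inlier with probability (1 - fout) and an
    # outlier with probability fout; logsumexp marginalizes the two cases per point.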
b = np.ones((2,len(t)))
b[0] = 1. - fout
b[1] = fout
return logsumexp(np.vstack((term1, term2)), b=b, axis=0)
def lnprob(p, t, data, err):
prior = lnprior(p)
if np.isinf(prior):
return -np.inf
return prior + np.sum(lnlikelihood(p, t, data, err))
ndim, nwalkers = 5, 64
p0 = np.zeros((nwalkers,ndim))
p0[:,0] = np.random.uniform(1, 2., size=nwalkers) # amp
p0[:,1] = np.random.uniform(100, 200., size=nwalkers) # period (days)
p0[:,2] = np.random.uniform(0., 2*np.pi, size=nwalkers) # phase (radians)
p0[:,3] = np.random.uniform(12., 16., size=nwalkers) # const. offset (mag)
p0[:,4] = np.random.normal(0.5, 0.05, size=(nwalkers)) # outlier probabilty
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob,
args=(time,mag,mag_err))
pos, prob, state = sampler.run_mcmc(p0, 1000)
best_pos = sampler.flatlnprobability.argmax()
new_pos = emcee.utils.sample_ball(sampler.flatchain[best_pos],
sampler.flatchain[best_pos]/100,
size=nwalkers)
sampler.reset()
pos, prob, state = sampler.run_mcmc(new_pos, 1000)
extents = [(0.5*truth,1.5*truth) for truth in true_params] + [(0,1)]
fig = triangle.corner(sampler.flatchain[:,:],
labels=param_labels + [""],
truths=list(true_params) + [0.1],
range=extents,
plot_datapoints=False)
nsamples = sampler.flatchain.shape[0]
plt.figure(figsize=(12,4))
plt.errorbar(time, mag, mag_err, marker='o',
linestyle='none', ecolor='#aaaaaa')
t = np.linspace(min(time), max(time), 1000)
for ii in range(10):
idx = np.random.randint(0, nsamples)
params = sampler.flatchain[idx,:4]
model_mag = light_curve_model(params, t)
plt.plot(t, model_mag,
marker=None, color='#f03b20', alpha=0.4)
plt.xlim(min(time), max(time))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Fitting a model to data with outliers using MCMC
Step3: Now we'll actually generate the fake data. The cell below contains tunable parameters -- if you want to change the number of data points, the fraction of points that are outliers, or the true model parameters, modify the variables in this cell (you'll have to download the full version of this notebook)
Step4:
Step5: Now we're armed with some fake data in the form of 3 arrays
Step6: Interesting -- we almost nailed the mean magnitude, but period, phase, and amplitude are biased (by many percent).
Step7: What can we do about this?
|
15,358 | <ASSISTANT_TASK:>
Python Code:
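# Assumed imports (not shown in the extracted cell): numpy and matplotlib.
import numpy as np
import matplotlib.pyplot as plt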
M = 600000
T = 0.6
z = 0.10
p0 = 400000
t0, t1, t2, t3 = 0.0, 1.0, 3.0, 6.0
wn = 2*np.pi/T
wd = wn*np.sqrt(1-z**2)
dt = 0.05
edt = np.exp(-z*wn*dt)
fac = dt/(2*M*wd)
t = dt*np.arange(1+int(t3/dt))
p = np.where(t<=t1, p0*(t-t0)/(t1-t0), np.where(t<t2, p0*(1-(t-t1)/(t2-t1)), 0))
s = np.sin(wd*t)
c = np.cos(wd*t)
sp = s*p
cp = c*p
plt.plot(t, p/1000)
plt.xlabel('Time/s')
plt.ylabel('Force/kN')
plt.xlim((t0,t3))
plt.grid();
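# Piecewise (trapezoidal-rule) Duhamel integral: A and B accumulate the cosine- and
# sine-weighted load history, damped by exp(-z*wn*dt) at each step; the response is
# then x = A*sin(wd*t) - B*cos(wd*t).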
A, B, x = 0, 0, [0]
for i, _ in enumerate(t[1:], 1):
A = A*edt+fac*(cp[i-1]*edt+cp[i])
B = B*edt+fac*(sp[i-1]*edt+sp[i])
x.append(A*s[i]-B*c[i])
x = np.array(x)
k = M*wn**2
Dst = p/k
plt.plot(t, x*1000)
plt.plot(t, Dst*1000)
plt.xlabel('Time/s')
plt.ylabel('Deflection/mm')
plt.xlim((t0,t3))
plt.grid()
plt.show();
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Natural Frequency, Damped Frequency
Step2: Computation
Step3: We initialize a time variable
Step4: We compute the load, the sines and the cosines of $\omega_D t$ and their products
Step5: Before the main (and only) loop in our code, we initialize A, B and a container for saving the deflections x; the loop then updates them step by step.
Step6: Finally, we plot the response.
|
15,359 | <ASSISTANT_TASK:>
Python Code:
from IPython.display import Javascript,display
from corticalmapping.ipython_lizard.html_widgets import raw_code_toggle
raw_code_toggle()
display(Javascript("""var nb = IPython.notebook;
        //var is_code_cell = (nb.get_selected_cell().cell_type == 'code')
        //var curr_idx = (nb.get_selected_index() == 3);
        nb.select(3);
        nb.execute_cell();
        """))
from IPython.display import Javascript
from corticalmapping.ipython_lizard.ipython_filedialog import IPythonTkinterFileDialog
initial_dir = r"C:"
tkinter_file_dialog = IPythonTkinterFileDialog(initial_dir)
tkinter_file_dialog.execute_below = True
tkinter_file_dialog.show()
import os
from PyQt4 import QtGui,QtCore
import matplotlib.pyplot as plt
import matplotlib as mpl
from warnings import warn
#mpl.rcParams['figure.figsize'] = 10, 10
from corticalmapping import ipython_lizard
from corticalmapping.ipython_lizard.wrapped_retinotopic_mapping import WrappedRetinotopicMapping
from corticalmapping.ipython_lizard.patchplot_ipywidgets import PatchPlotWidgets
from corticalmapping.ipython_lizard.html_widgets import getSignMapWidget,getRawPatchMapWidget,getRawPatchesWidget, \
splitPatchesWidget,mergePatchesWidget,getEccentricityMapWidget, \
saveFinalResultWidget,submitAndRunBelowButton
%matplotlib inline
#%load_ext autoreload
#%autoreload 2
pkl_path = tkinter_file_dialog.file_path
TEST_PKL_IDX = 0
TEST_PATH = ipython_lizard.TEST_PKLS[TEST_PKL_IDX] #there are like 6 different test pkls in this iterable
current_dir = os.getcwd()
adj_pkl_paths = [os.path.join(current_dir,f) for f in os.listdir(current_dir) if f.endswith("pkl")]
if adj_pkl_paths:
adj_pkl_path = adj_pkl_paths[0]
else:
adj_pkl_path = None
pkls = zip(["MANUAL","ADJACENT","TEST"],[pkl_path,adj_pkl_path,TEST_PATH])
for p_type,pkl in pkls:
try:
trial = WrappedRetinotopicMapping.load_from_pkl(pkl)
print "Successfully loaded from: {0}, {1}".format(p_type,pkl)
__pkl_path = pkl
break
except Exception as e:
#warn(str(e))
warn("Failed to load from: {0}, {1}".format(p_type,pkl))
phaseMapFilterSigma = 1.0
signMapFilterSigma = 9.0
getSignMapWidget(trial,
phaseMapFilterSigmaDefault=phaseMapFilterSigma,
signMapFilterSigmaDefault=signMapFilterSigma,
)
submitAndRunBelowButton()
signMapThr = 0.35
openIter = 3
closeIter = 3
getRawPatchMapWidget(trial,
signMapThrDefault=signMapThr,
openIterDefault=openIter,
closeIterDefault=closeIter,
)
submitAndRunBelowButton()
dilationIter = 15
borderWidth = 1
smallPatchThr = 100
getRawPatchesWidget(trial,
dilationIterDefault=dilationIter,
borderWidthDefault=borderWidth,
smallPatchThrDefault=smallPatchThr,
)
submitAndRunBelowButton()
trial.getDeterminantMap()
eccMapFilterSigma = 10.0
getEccentricityMapWidget(trial,eccMapFilterSigmaDefault=eccMapFilterSigma)
submitAndRunBelowButton()
visualSpacePixelSize = 0.5
visualSpaceCloseIter = 15
splitLocalMinCutStep = 5.0
splitOverlapThr = 1.2
splitPatchesWidget(trial,
visualSpacePixelSizeDefault=visualSpacePixelSize,
visualSpaceCloseIterDefault=visualSpaceCloseIter,
splitLocalMinCutStepDefault=splitLocalMinCutStep,
splitOverlapThrDefault=splitOverlapThr
)
submitAndRunBelowButton()
mergeOverlapThr = 0.1
mergePatchesWidget(trial,mergeOverlapThrDefault=mergeOverlapThr)
submitAndRunBelowButton()
patchplot_widgets = PatchPlotWidgets(trial,{},[],figsize=(5,5))
patchplot_widgets.plot_reference_img()
rename_patches_dict = dict(trial.finalPatches)
DESIRED_PATCH_NAMES = ['A','AL','AM','LI','LLA','LM','M','MMA','MMP','P','PM','POR','RL','RLL','RS','S1','V1']
for patch in rename_patches_dict.keys(): #replace 'patch01' with 01, etc
rename_patches_dict[patch.replace("patch","")] = rename_patches_dict.pop(patch)
patchplot_widgets = PatchPlotWidgets(trial,rename_patches_dict,DESIRED_PATCH_NAMES,figsize=(12,6))
patchplot_widgets.show()
submitAndRunBelowButton()
rename_patches_dict = patchplot_widgets.patches_dict
finalPatchBorder_figure = trial.plotFinalPatchBorders(rename_patches_dict,borderWidth=4)
pkl_save_path = None
#saveTrialDictPkl(trial,pkl_save_path)
saveFinalResultWidget(trial,finalPatchBorder_figure,__pkl_path,pkl_save_path)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <div style="background
Step2: <div style="background
Step3: <div style="background
Step4: <div style="background
Step5: <div style="background
Step6: <div style="background
Step7: <div style="background
Step8: <div style="background
Step9: <div style="background
Step10: <div style="background
Step11: <div style="background
|
15,360 | <ASSISTANT_TASK:>
Python Code:
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(min_df=1)
vectorizer.fit([
"The cat sat on the mat.",
])
vectorizer.vocabulary_
X = vectorizer.transform([
"The cat sat on the mat.",
"This cat is a nice cat.",
]).toarray()
print(len(vectorizer.vocabulary_))
print(vectorizer.get_feature_names())
print(X)
vectorizer = CountVectorizer(min_df=1)
vectorizer.fit([
"The cat sat on the mat.",
"The quick brown fox jumps over the lazy dog.",
])
vectorizer.vocabulary_
X = vectorizer.transform([
"The cat sat on the mat.",
"This cat is a nice cat.",
]).toarray()
print(len(vectorizer.vocabulary_))
print(vectorizer.get_feature_names())
print(X)
import os
train_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'train')
test_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'test')
from sklearn.datasets import load_files
train = load_files(container_path=(train_path),
categories=['pos', 'neg'])
test = load_files(container_path=(test_path),
categories=['pos', 'neg'])
train.keys()
import numpy as np
for label, data in zip(('ENTRENAMIENTO', 'TEST'), (train, test)):
print('\n\n%s' % label)
print('Número de documentos:', len(data['data']))
print('\n1er documento:\n', data['data'][0])
print('\n1era etiqueta:', data['target'][0])
print('\nNombre de las clases:', data['target_names'])
print('Conteo de las clases:',
np.unique(data['target']), ' -> ',
np.bincount(data['target']))
from sklearn.utils.murmurhash import murmurhash3_bytes_u32
# Encoded for Python 3 compatibility
for word in "the cat sat on the mat".encode("utf-8").split():
print("{0} => {1}".format(
word, murmurhash3_bytes_u32(word, 0) % 2 ** 20))
from sklearn.feature_extraction.text import HashingVectorizer
h_vectorizer = HashingVectorizer(encoding='latin-1')
h_vectorizer
analyzer = h_vectorizer.build_analyzer()
analyzer('Esta es una frase de prueba.')
docs_train, y_train = train['data'], train['target']
docs_valid, y_valid = test['data'][:12500], test['target'][:12500]
docs_test, y_test = test['data'][12500:], test['target'][12500:]
h_vectorizer.transform(docs_train)
h_vec = HashingVectorizer(encoding='latin-1')
%timeit -n 1 -r 3 h_vec.fit(docs_train, y_train)
count_vec = CountVectorizer(encoding='latin-1')
%timeit -n 1 -r 3 count_vec.fit(docs_train, y_train)
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
h_pipeline = Pipeline([
('vec', HashingVectorizer(encoding='latin-1')),
('clf', LogisticRegression(random_state=1)),
])
h_pipeline.fit(docs_train, y_train)
print('Accuracy de entrenamiento', h_pipeline.score(docs_train, y_train))
print('Accuracy de validación', h_pipeline.score(docs_valid, y_valid))
import gc
del count_vec
del h_pipeline
gc.collect()
train_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'train')
train_pos = os.path.join(train_path, 'pos')
train_neg = os.path.join(train_path, 'neg')
fnames = [os.path.join(train_pos, f) for f in os.listdir(train_pos)] +\
[os.path.join(train_neg, f) for f in os.listdir(train_neg)]
fnames[:3]
y_train = np.zeros((len(fnames), ), dtype=int)
y_train[:12500] = 1
np.bincount(y_train)
from sklearn.base import clone
def batch_train(clf, fnames, labels, iterations=25, batchsize=1000, random_seed=1):
vec = HashingVectorizer(encoding='latin-1')
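    # HashingVectorizer is stateless (no fit step, no vocabulary), so each mini-batch
    # can be transformed independently -- the key property for out-of-core learning.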
idx = np.arange(labels.shape[0])
c_clf = clone(clf)
rng = np.random.RandomState(seed=random_seed)
for i in range(iterations):
rnd_idx = rng.choice(idx, size=batchsize)
documents = []
for i in rnd_idx:
with open(fnames[i], 'r') as f:
documents.append(f.read())
X_batch = vec.transform(documents)
batch_labels = labels[rnd_idx]
c_clf.partial_fit(X=X_batch,
y=batch_labels,
classes=[0, 1])
return c_clf
from sklearn.linear_model import SGDClassifier
sgd = SGDClassifier(loss='log', random_state=1)
sgd = batch_train(clf=sgd,
fnames=fnames,
labels=y_train)
vec = HashingVectorizer(encoding='latin-1')
sgd.score(vec.transform(docs_test), y_test)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The vocabulary is used in the transform phase to build the occurrence matrix
Step2: Let's fit again with a somewhat larger corpus
Step3: The vocabulary_ attribute grows (on a logarithmic scale) with the size of the training set. Note that we cannot build the vocabularies in parallel for each text document, since some words are shared and we would need some shared structure or synchronization barrier (increasing the complexity of implementing the training, especially if we want to distribute it on a cluster).
Step4: The IMDb movie dataset
Step5: Now, let's load them into our active session using scikit-learn's load_files function
Step6: <div class="alert alert-warning">
Step7: In particular, we are only interested in the data and target arrays.
Step8: As you can see, the 'target' array consists of 0s and 1s, where 0 is a negative review and 1 represents a positive one.
Step9: The conversion is stateless and the dimensionality of the output space is fixed a priori (here we use modulo 2 ** 20, which means we have roughly one million dimensions, $2^{20}$). This makes it possible to avoid the limitations of the vocabulary vectorizer, both for parallelization and for enabling online learning.
Step10: It shares the same preprocessing, tokenization and analysis structure
Step11: We can vectorize our datasets into a scipy sparse matrix in the same way we would have done with CountVectorizer or TfidfVectorizer, except that we can call the transform method directly. There is no need to call fit, because the HashingVectorizer is not trained; its transformations are fixed in advance.
Step12: The output dimension is fixed in advance to n_features=2 ** 20 (the default value) to minimize the probability of collision in most classification problems (1M weights in the coef_ attribute)
Step13: Now let's compare the computational efficiency of HashingVectorizer with respect to CountVectorizer
Step14: As you can see, HashingVectorizer is much faster than CountVectorizer.
Step15: Out-of-Core Learning
Step16: Now let's create the label array
Step17: Now let's implement the batch_train function
Step18: Now we will use an SGDClassifier with a logistic loss instead of LogisticRegression. SGD stands for stochastic gradient descent, an optimization algorithm that updates the weights iteratively, example by example, which lets us feed it the data in batches.
Step19: When it finishes, let's evaluate the performance
|
15,361 | <ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('../..')
from bestPy import RecoBasedOn
from bestPy.datastructures import Transactions
file = '../tests/data/data50.csv' # Enter the path to and name of your data file here!
data = Transactions.from_csv(file)
customer = '7' # Specify the ID of the customer you want to make recommendations for here!
recommendation = RecoBasedOn(data)
top_five = recommendation.for_one(customer)
for article in top_five:
print(article)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Minimal imports
Step2: Read data
Step3: Recommend articles
|
15,362 | <ASSISTANT_TASK:>
Python Code:
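# Assumed imports (not shown in the extracted cells).
import numpy as np
from scipy import integrate
import matplotlib.pyplot as plt
from matplotlib import patches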
x, dx = np.linspace(-2, 2, 1000, retstep=True)
S = np.where(np.abs(x) < 0.5, 1, 0)
Sm = [S]
for i in range(1, 5):
Si = np.convolve(Sm[-1], Sm[0], mode='same').astype(float)
Si_value = integrate.trapz(Si, x)
Si /= Si_value
Sm.append(Si)
fig, ax = plt.subplots()
# ax.vlines([-1.5, -0.5, 0.5, 1.5], 0, 1, lw=4, alpha=0.3)
ax.plot(x, S, label="$S_0$")
height = 1.05
for xleft in [-1.5, +0.5]:
rekt = patches.Rectangle((xleft, 0), 1, height, fill=True, facecolor='0.85')
ax.add_patch(rekt)
for i in range(1, 5):
ax.plot(x, Sm[i], label=f"$S_{i}$")
ax.set_title("Funkcje kształtu")
ax.set_xlabel("Numer komórki")
ax.set_xticks([-1, 0, 1, 2])
ax.set_ylabel("Objętość makrocząstki")
ax.set_xlim(-2, 2)
ax.set_ylim(0,height)
ax.grid()
ax.legend(loc='upper right')
fig.savefig("/home/dominik/Inzynierka/ThesisText/Images/shapefunctions.eps")
fig, ax = plt.subplots()
# ax.vlines([-1.5, -0.5, 0.5, 1.5], 0, 1, lw=4, alpha=0.3)
# ax.plot(x, S, label="$S_0$")
height = 1.05
for xleft in [-1.5, +0.5]:
rekt = patches.Rectangle((xleft, 0), 1, height, fill=True, facecolor='0.85')
ax.add_patch(rekt)
# for i in range(1, 5):
# ax.plot(x, Sm[i], label=f"$S_{i}$")
# ax.set_title("")
ax.set_xlabel("Numer komórki")
ax.set_xticks([-1, 0, 1, 2])
ax.set_ylabel("Objętość makrocząstki")
ax.set_xlim(-2, 2)
ax.set_ylim(0,height)
# ax.grid()
right, left = -0.499, -1.499
for x_left in [left, right]:
points = [[x_left, 0], [x_left + 2, 0], [x_left + 1, 1]]
triangle = patches.Polygon(points, edgecolor='k')
ax.add_patch(triangle)
ax.arrow(left + 1, 0.5, abs(right-left), 0, head_width=0.05, head_length = 0.1, fc='k')
# ax.arrow(0, 0, 0.5, 0.5, head_width=0.05, head_length=0.1, fc='k', ec='k')
# ax.legend(loc='upper right')
fig.savefig("/home/dominik/Inzynierka/ThesisText/Images/deposition-movement.eps")
fig, ax = plt.subplots()
# ax.vlines([-1.5, -0.5, 0.5, 1.5], 0, 1, lw=4, alpha=0.3)
# ax.plot(x, S, label="$S_0$")
height = 1.05
for xleft in [-1.5, +0.5]:
rekt = patches.Rectangle((xleft, 0), 1, height, fill=True, facecolor='0.85')
ax.add_patch(rekt)
ax.set_xlabel("Numer komórki")
ax.set_xticks([-1, 0, 1, 2])
ax.set_ylabel("Objętość makrocząstki")
ax.set_xlim(-2, 2)
ax.set_ylim(0,height)
x_left = -0.4
def height(x, x1):
return abs(x-x1)#/(x1 - 1))
x1 = -0.3
points_left = [[x1, 0], [0.5, 0], [0.5, height(0.5, x1)]]
points_right = [[0.5, height(0.5, x1)], [x1+1, 1], [x1+2, 0], [0.5, 0],]
# points_right_2 = [[x1+1, 0], [x1+2, 0], [x1+1, 1]]
for points, color in zip([points_left, points_right], ['b', 'r']):
triangle = patches.Polygon(points, edgecolor='k', fill=True, facecolor=color)
ax.add_patch(triangle)
# ax.arrow(0, 0, 0.5, 0.5, head_width=0.05, head_length=0.1, fc='k', ec='k')
# ax.legend(loc='upper right')
fig.savefig("/home/dominik/Inzynierka/ThesisText/Images/charge-deposition.eps")
for i, S in enumerate(Sm):
fit = np.polyfit(x, S, i)
print(fit)
def Sk(k, m, dx):
sinarg = 0.5 * k * dx
return np.sinc(sinarg)**(2*(m+1))
freq = np.fft.fftshift(np.fft.fftfreq(len(x), dx))
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(14, 6))
for i, S in enumerate(Sm):
Sfft = np.fft.fftshift(np.abs(np.fft.fft(S)/len(x)))
ax1.plot(freq, Sfft, "o-", label=f"$S_{i}(k)$")
ax1.set_xlim(-5, 5)
ax2.plot(freq, Sk(freq, i+1, dx)/len(x), label=f"$S_{i}(k)$")
ax2.set_xlim(-5, 5)
ax1.legend()
ax2.legend()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Higher order shape functions result from the (renormalized to 1) iterated convolution of the top hat shape function with itself
Step2: They're piecewise polynomials, so simply doing the following won't work
Step3: In Fourier Space
|
15,363 | <ASSISTANT_TASK:>
Python Code:
PROJECT = <YOUR PROJECT>
BUCKET = <YOUR BUCKET>
REGION = <YOUR REGION>
TFVERSION = "2.1" # TF version for AI Platform to use
import os
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
!ls -la taxifare/trainer
%%writefile ./taxifare/trainer/model.py
import datetime
import hypertune
import logging
import os
import shutil
import numpy as np
import tensorflow as tf
from tensorflow.keras import activations
from tensorflow.keras import callbacks
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow import feature_column as fc
logging.info(tf.version.VERSION)
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key',
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
def load_dataset(pattern, batch_size, num_repeat):
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=num_repeat,
)
return dataset.map(features_and_labels)
def create_train_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=None)
return dataset.prefetch(1)
def create_eval_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=1)
return dataset.prefetch(1)
def parse_datetime(s):
if type(s) is not str:
s = s.numpy().decode('utf-8')
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),
ts_in
)
@tf.function
def fare_thresh(x):
return 60 * activations.relu(x)
def transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets):
# Pass-through columns
transformed = inputs.copy()
del transformed['pickup_datetime']
feature_columns = {
colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS
}
# Scaling longitude from range [-70, -78] to [0, 1]
for lon_col in ['pickup_longitude', 'dropoff_longitude']:
transformed[lon_col] = layers.Lambda(
lambda x: (x + 78)/8.0,
name='scale_{}'.format(lon_col)
)(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ['pickup_latitude', 'dropoff_latitude']:
transformed[lat_col] = layers.Lambda(
lambda x: (x - 37)/8.0,
name='scale_{}'.format(lat_col)
)(inputs[lat_col])
# Adding Euclidean dist (no need to be accurate: NN will calibrate it)
transformed['euclidean'] = layers.Lambda(euclidean, name='euclidean')([
inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']
])
feature_columns['euclidean'] = fc.numeric_column('euclidean')
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
transformed['hourofday'] = layers.Lambda(
lambda x: tf.strings.to_number(
tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32),
name='hourofday'
)(inputs['pickup_datetime'])
feature_columns['hourofday'] = fc.indicator_column(
fc.categorical_column_with_identity(
'hourofday', num_buckets=24))
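    # Bucketize the scaled lat/lon into an nbuckets x nbuckets grid, cross the
    # pickup and dropoff cells, and learn an embedding for the crossed feature.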
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns['pickup_latitude'], latbuckets)
b_dlat = fc.bucketized_column(
feature_columns['dropoff_latitude'], latbuckets)
b_plon = fc.bucketized_column(
feature_columns['pickup_longitude'], lonbuckets)
b_dlon = fc.bucketized_column(
feature_columns['dropoff_longitude'], lonbuckets)
ploc = fc.crossed_column(
[b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column(
[b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
feature_columns['pickup_and_dropoff'] = fc.embedding_column(
pd_pair, 100)
return transformed, feature_columns
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model(nbuckets, nnsize, lr):
# input layer is all float except for pickup_datetime which is a string
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = (
set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS)
)
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname: layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(
inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
x = dnn_inputs
for layer, nodes in enumerate(nnsize):
x = layers.Dense(nodes, activation='relu', name='h{}'.format(layer))(x)
output = layers.Dense(1, name='fare')(x)
model = models.Model(inputs, output)
lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=lr_optimizer, loss='mse', metrics=[rmse, 'mse'])
return model
def train_and_evaluate(hparams):
batch_size = hparams['batch_size']
eval_data_path = hparams['eval_data_path']
nnsize = hparams['nnsize']
nbuckets = hparams['nbuckets']
lr = hparams['lr']
num_evals = hparams['num_evals']
num_examples_to_train_on = hparams['num_examples_to_train_on']
output_dir = hparams['output_dir']
train_data_path = hparams['train_data_path']
timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
savedmodel_dir = os.path.join(output_dir, 'export/savedmodel')
model_export_path = os.path.join(savedmodel_dir, timestamp)
checkpoint_path = os.path.join(output_dir, 'checkpoints')
tensorboard_path = os.path.join(output_dir, 'tensorboard')
if tf.io.gfile.exists(output_dir):
tf.io.gfile.rmtree(output_dir)
dnn_model = build_dnn_model(nbuckets, nnsize, lr)
logging.info(dnn_model.summary())
trainds = create_train_dataset(train_data_path, batch_size)
evalds = create_eval_dataset(eval_data_path, batch_size)
steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)
checkpoint_cb = callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=True,
verbose=1)
tensorboard_cb = callbacks.TensorBoard(tensorboard_path,
histogram_freq=1)
history = dnn_model.fit(
trainds,
validation_data=evalds,
epochs=num_evals,
steps_per_epoch=max(1, steps_per_epoch),
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[checkpoint_cb, tensorboard_cb]
)
# Exporting the model with default serving function.
tf.saved_model.save(dnn_model, model_export_path)
    # TODO 1 -- one possible completion (an assumption, not the original lab solution):
    # report the final validation RMSE to the hyperparameter tuning service.
    hp_metric = history.history['val_rmse'][num_evals-1]

    # TODO 1
    hpt = hypertune.HyperTune()
    hpt.report_hyperparameter_tuning_metric(
        hyperparameter_metric_tag='rmse',
        metric_value=hp_metric,
        global_step=num_evals)
return history
%%writefile taxifare/trainer/task.py
import argparse
import json
import os
from trainer import model
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
"--batch_size",
help = "Batch size for training steps",
type = int,
default = 32
)
parser.add_argument(
"--eval_data_path",
help = "GCS location pattern of eval files",
required = True
)
parser.add_argument(
"--nnsize",
help = "Hidden layer sizes (provide space-separated sizes)",
nargs = "+",
type = int,
default=[32, 8]
)
parser.add_argument(
"--nbuckets",
help = "Number of buckets to divide lat and lon with",
type = int,
default = 10
)
parser.add_argument(
"--lr",
help = "learning rate for optimizer",
type = float,
default = 0.001
)
parser.add_argument(
"--num_evals",
help = "Number of times to evaluate model on eval data training.",
type = int,
default = 5
)
parser.add_argument(
"--num_examples_to_train_on",
help = "Number of examples to train on.",
type = int,
default = 100
)
parser.add_argument(
"--output_dir",
help = "GCS location to write checkpoints and export models",
required = True
)
parser.add_argument(
"--train_data_path",
help = "GCS location pattern of train files containing eval URLs",
required = True
)
parser.add_argument(
"--job-dir",
help = "this model ignores this field, but it is required by gcloud",
default = "junk"
)
args, _ = parser.parse_known_args()
hparams = args.__dict__
model.train_and_evaluate(hparams)
%%writefile hptuning_config.yaml
trainingInput:
scaleTier: BASIC
hyperparameters:
goal: MINIMIZE
        maxTrials: 10  # example budget -- the values below are an assumption, not the original solution
        maxParallelTrials: 2
        hyperparameterMetricTag: rmse  # must match the tag reported from model.py
        enableTrialEarlyStopping: True
        params:
        - parameterName: lr
          type: DOUBLE
          minValue: 0.0001
          maxValue: 0.1
          scaleType: UNIT_LOG_SCALE
        - parameterName: nbuckets
          type: INTEGER
          minValue: 10
          maxValue: 25
          scaleType: UNIT_LINEAR_SCALE
        - parameterName: batch_size
          type: INTEGER
          minValue: 15
          maxValue: 512
          scaleType: UNIT_LOG_SCALE
!python3 -m pip install cloudml-hypertune
%%bash
EVAL_DATA_PATH=./taxifare/tests/data/taxi-valid*
TRAIN_DATA_PATH=./taxifare/tests/data/taxi-train*
OUTPUT_DIR=./taxifare-model
rm -rf ${OUTPUT_DIR}
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python3 -m trainer.task \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTPUT_DIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size 5 \
--num_examples_to_train_on 100 \
--num_evals 1 \
--nbuckets 10 \
--lr 0.001 \
--nnsize 32 8
%%bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
BUCKET=$PROJECT_ID
REGION="us-central1"
TFVERSION="2.1"
# Output directory and jobID
OUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)
JOBID=taxifare_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}
# Model and training hyperparameters
BATCH_SIZE=15
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=10
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
# TODO -- one possible completion; the package path and runtime/python versions are assumptions
gcloud ai-platform jobs submit training $JOBID \
    --module-name=trainer.task \
    --package-path=taxifare/trainer \
    --job-dir=$OUTDIR \
    --python-version=3.7 \
    --runtime-version=$TFVERSION \
    --region=$REGION \
    --config=hptuning_config.yaml \
-- \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTDIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size $BATCH_SIZE \
--num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \
--num_evals $NUM_EVALS \
--nbuckets $NBUCKETS \
--lr $LR \
--nnsize $NNSIZE
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Move code into python package
Step2: To use hyperparameter tuning in your training job you must perform the following steps
Step3: Modify task.py
Step4: Create config.yaml file
Step5: Report your hyperparameter metric to AI Platform Training
|
15,364 | <ASSISTANT_TASK:>
Python Code:
from pybotics.robot import Robot
from pybotics.predefined_models import ur10
nominal_robot = Robot.from_parameters(ur10())
import pandas as pd
def display_robot_kinematics(robot: Robot):
df = pd.DataFrame(robot.kinematic_chain.matrix)
df.columns = ["alpha", "a", "theta", "d"]
display(df)
display_robot_kinematics(nominal_robot)
import numpy as np
from copy import deepcopy
real_robot = deepcopy(nominal_robot)
# let's pretend our real robot has small joint offsets
# in real life, this would be a joint mastering issue (level-1 calibration)
# https://en.wikipedia.org/wiki/Robot_calibration
for link in real_robot.kinematic_chain.links:
link.theta += np.random.uniform(
low=np.deg2rad(-0.1),
high=np.deg2rad(0.1)
)
display_robot_kinematics(real_robot)
joints = []
positions = []
for i in range(1000):
q = real_robot.random_joints()
pose = real_robot.fk(q)
joints.append(q)
positions.append(pose[:-1,-1])
pd.DataFrame(joints).describe()
pd.DataFrame(positions, columns=['x','y','z']).describe()
from sklearn.model_selection import train_test_split
split = train_test_split(joints, positions, test_size=0.3)
train_joints = split[0]
test_joints = split[1]
train_positions = split[2]
test_positions = split[3]
from pybotics.optimization import compute_absolute_errors
nominal_errors = compute_absolute_errors(
qs=test_joints,
positions=test_positions,
robot=nominal_robot
)
display(pd.Series(nominal_errors).describe())
from pybotics.optimization import OptimizationHandler
# init calibration handler
handler = OptimizationHandler(nominal_robot)
# set handler to solve for theta parameters
kc_mask_matrix = np.zeros_like(nominal_robot.kinematic_chain.matrix, dtype=bool)
kc_mask_matrix[:,2] = True
display(kc_mask_matrix)
handler.kinematic_chain_mask = kc_mask_matrix.ravel()
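# The boolean mask is flattened to match the handler's parameter vector; only the
# theta entries (column 2 of the kinematic table) are free calibration parameters.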
from scipy.optimize import least_squares
from pybotics.optimization import optimize_accuracy
# run optimization
result = least_squares(
fun=optimize_accuracy,
x0=handler.generate_optimization_vector(),
args=(handler, train_joints, train_positions),
verbose=2
) # type: scipy.optimize.OptimizeResult
calibrated_robot = handler.robot
calibrated_errors = compute_absolute_errors(
qs=test_joints,
positions=test_positions,
robot=calibrated_robot
)
display(pd.Series(calibrated_errors).describe())
import matplotlib.pyplot as plt
%matplotlib inline
plt.xscale("log")
plt.hist(nominal_errors, color="C0", label="Nominal")
plt.hist(calibrated_errors, color="C1", label="Calibrated")
plt.legend()
plt.xlabel("Absolute Error [mm]")
plt.ylabel("Frequency")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Real Robots
Step2: Get Real (aka Measured) Poses
Step3: Split Calibration and Validation Measures
Step4: Get Nominal Position Errors
Step5: Calibration
Step6: Results
|
15,365 | <ASSISTANT_TASK:>
Python Code:
import csv
sports = [] # This is a python "list" data structure (it is "mutable")
# The file has a list of sports, one per line.
# There are spaces in some names, but no commas or weird punctuation
with open('../data/SportsDataset_ListOfSports.csv','r') as csvfile:
myreader = csv.reader(csvfile)
for index, row in enumerate( myreader ):
sports.append(' '.join(row) ) # the join() call merges all fields
# Make a look-up table: if you input the name of the sport, it tells you the index
# Also, print out a list of all the sports, to make sure it looks OK
Sport2Index = {}
for ind, sprt in enumerate( sports ):
Sport2Index[sprt] = ind
print('Sport #', ind,'is',sprt)
# And example usage of the index lookup:
#print('The sport "', sports[7],'" has 0-based index', Sport2Index[sports[7]])
# -- And read in the list of questions --
# this csv file has only a single row
questions = []
with open('../data/SportsDataset_ListOfAttributes.csv','r') as csvfile:
myreader = csv.reader( csvfile )
for row in myreader:
questions = row
Question2Index = {}
for ind, quest in enumerate( questions ):
Question2Index[quest] = ind
#print('Question #', ind,': ',quest)
# And example usage of the index lookup:
#print('The question "', questions[10],'" has 0-based index', Question2Index[questions[10]])
# -- And read in the training data --
YesNoDict = { "Yes": 1, "No": -1, "Unsure": 0, "": 0 }
# Load from the csv file.
# Note: the file only has "1"s, because blanks mean "No"
X = []
with open('../data/SportsDataset_DataAttributes.csv','r') as csvfile:
myreader = csv.reader(csvfile)
for row in myreader:
data = [];
for col in row:
data.append( col or "-1")
X.append( list(map(int,data)) ) # integers, not strings
# This data file is listed in the same order as the sports
# The variable "y" contains the index of the sport
y = range(len(sports)) # this doesn't work
y = list( map(int,y) ) # Instead, we need to ask python to really enumerate it!
from sklearn import tree
from sklearn.ensemble import RandomForestClassifier
#clf = tree.DecisionTreeClassifier(max_depth=8,min_samples_leaf=2)
clf = tree.DecisionTreeClassifier(max_depth=13,min_samples_leaf=1)
clf.fit(X,y)
# Try changing the training data, so that we don't get 100% accuracy:
#X2 = X.copy()
#X2[15][-1] = -1
#clf.fit(X2,y)
# -- Visualize the decision tree --
import graphviz
dot_data = tree.export_graphviz( clf, out_file='sportsTree.dot', feature_names = questions,impurity=False,
class_names = sports,filled=True, rounded=True,label=None,
proportion=True)
# export to out_file = 'sportsTree.dot', then in vim, use `%s/\\n\[.*\]\\n/\\n/g` to remove labels
#graph = graphviz.Source( dot_data )
#graph.render('sportsTree')
#graph
from IPython.display import Image
Image(url='sportsTree.png')
# let's see how well we do
# You can also use clf.score(X,y)
def correctPercentage( predictions, actual ):
correct = 0
for i,guess in enumerate(predictions):
if guess == actual[i]:
correct = correct + 1
return correct/len(predictions)
clf2 = RandomForestClassifier(max_depth=10,n_estimators=10)
clf2 = clf2.fit(X,y)
print(correctPercentage( clf.predict(X), y ))
print(correctPercentage( clf2.predict(X), y ))
clf.score(X,y)
# cross validate (hard to do, due to small amount of data)
clf3 = tree.DecisionTreeClassifier(random_state=0,max_depth=8)
from sklearn.model_selection import cross_val_score
cross_val_score(clf3, X, y)#, cv=2)
len(X)
tree_ = clf.tree_
from sklearn.tree import _tree
import numpy as np
#dir(_tree.Tree) # inspect what we have to work with
#dir(_tree)
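# In sklearn's internal Tree: tree_.feature[node] is the question (feature) asked at a
# node, tree_.threshold[node] is its split value, children_left/children_right are the
# subtrees for answers <= / > that threshold, and tree_.value holds class counts at leaves.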
def parseInput(str):
# first, ignore capitalization
str=str.lower()
if str[0] == 'y':
return 1
elif str[0] == 'n':
return -1
else:
return 0
def askQuestion(node=0):
Q = tree_.feature[node]
threshold = tree_.threshold[node]
if Q == _tree.TREE_UNDEFINED or Q == _tree.TREE_LEAF:
# at a leaf node, so make the prediction
vals = tree_.value[node][0] # size of all movies
ind = np.argmax( vals )
print('GUESS: ', sports[ind] )
else:
# ask a question and recurse
print(questions[Q])
ans = parseInput(input(" [Yes/no/unsure] "))
if ans <= threshold:
askQuestion(tree_.children_left[node])
else:
askQuestion(tree_.children_right[node])
# or maybe ask for all 13 questions
def fullSport():
x = [0]*len(questions)
for i,Q in enumerate( questions ):
print(Q)
x[i] = parseInput(input(" [Yes/no/unsure] "))
return x
# Play game!
askQuestion()
# Or get all 13 unique questions on one movie, and try random forests
x = fullSport()
print('PREDICTION (random forests): ', sports[ clf2.predict([x])[0] ] )
print('PREDICTION (decision tree ): ', sports[ clf.predict([x])[0] ] )
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Actually classify (here, we depart from public script)
Step2: Now, start the real hacking, so that it lets us interact
|
15,366 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
@tf.function(input_signature=[
tf.TensorSpec(shape=[None], dtype=tf.float32)
])
def f(x):
return tf.cosh(x)
# Evaluate the tf.function
result = f(tf.constant([0.0]))
print (f"result = {result}")
# Convert the tf.function
converter = tf.lite.TFLiteConverter.from_concrete_functions(
[f.get_concrete_function()], f)
try:
fb_model = converter.convert()
except Exception as e:
print(f"Got an exception: {e}")
@tf.lite.experimental.authoring.compatible
@tf.function(input_signature=[
tf.TensorSpec(shape=[None], dtype=tf.float32)
])
def f(x):
return tf.cosh(x)
# Evaluate the tf.function
result = f(tf.constant([0.0]))
print (f"result = {result}")
compatibility_log = '\n'.join(f.get_compatibility_log())
print (f"compatibility_log = {compatibility_log}")
@tf.lite.experimental.authoring.compatible(raise_exception=True)
@tf.function(input_signature=[
tf.TensorSpec(shape=[None], dtype=tf.float32)
])
def f(x):
return tf.cosh(x)
# Evaluate the tf.function
try:
result = f(tf.constant([0.0]))
print (f"result = {result}")
except Exception as e:
print(f"Got an exception: {e}")
target_spec = tf.lite.TargetSpec()
target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS,
tf.lite.OpsSet.SELECT_TF_OPS,
]
@tf.lite.experimental.authoring.compatible(converter_target_spec=target_spec, raise_exception=True)
@tf.function(input_signature=[
tf.TensorSpec(shape=[None], dtype=tf.float32)
])
def f(x):
return tf.cosh(x)
# Evaluate the tf.function
result = f(tf.constant([0.0]))
print (f"result = {result}")
target_spec = tf.lite.TargetSpec()
target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS,
tf.lite.OpsSet.SELECT_TF_OPS,
]
target_spec.experimental_supported_backends = ["GPU"]
@tf.lite.experimental.authoring.compatible(converter_target_spec=target_spec)
@tf.function(input_signature=[
tf.TensorSpec(shape=[4, 4], dtype=tf.float32)
])
def func(x):
y = tf.cosh(x)
return y + tf.slice(x, [1, 1], [1, 1])
result = func(tf.ones(shape=(4,4), dtype=tf.float32))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TFLite Authoring Tool
Step2: TensorFlow to TensorFlow Lite compatibility issue
Step3: Simple Target Aware Authoring usage
Step4: If any TensorFlow Lite compatibility issue is found, it will show COMPATIBILITY WARNING or COMPATIBILITY ERROR with the exact location of the problematic op. In this example, it shows the location of tf.Cosh op in your tf.function model.
Step5: Raise an exception for an incompatibility
Step6: Specifying "Select TF ops" usage
Step7: Checking GPU compatibility
|
15,367 | <ASSISTANT_TASK:>
Python Code:
parameters = ho.read_session_data(alias, 'parameters')
parameters.head(5)
trial_phases = ho.read_session_data(alias, 'trial_phases')
trial_phases.head(5)
trials = ho.read_session_data(alias, 'trials')
trials.head(5)
screen_dims = ho.screen_dimensions_during_trial(alias=alias,trial_nr=0)
pix_per_degrees = screen_dims[0] / 40.0 # the screen @ this distance is 40 degrees wide.
coherent_times = np.array(trial_phases[trial_phases['trial_phase_index'] == 2]['trial_phase_EL_timestamp'])
incoherent_times = np.array(trial_phases[trial_phases['trial_phase_index'] == 3]['trial_phase_EL_timestamp'])
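# Pull the left-eye gaze trace for every trial, padded by 250 EyeLink time units (ms)
# on either side of the coherent-motion interval.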
gaze_during_coherence = [
np.array(
ho.data_from_time_period(alias=alias, columns=['L_gaze_x_int','L_gaze_y_int'], time_period=(tp[0]-250,tp[1]+250))
) for tp in zip(coherent_times, incoherent_times)]
# just a handy function to make sure everything is scaled the same way.
def zscore(x):
return (x-x.mean()) / x.std()
# smoothing width
smw = 250
s2 = int(smw/2)
# which trials to plot
which_trials = np.random.choice(np.arange(len(parameters)), size=10, replace=False)
f = pl.figure(figsize = (16,24))
for i, trial in enumerate(which_trials):
s = f.add_subplot(len(which_trials),1,i+1)
gd = zscore(gaze_during_coherence[trial][:2000,0])
gd -= gd[250]
pl.plot(gd, 'k', label='gaze position')
pl.plot(np.diff(gd), 'r', label='velocity', alpha=0.5)
pl.plot(np.nan_to_num(rolling_mean(gd, smw))[s2:], 'k', label='gaze position, smooth')
pl.plot(zscore(np.nan_to_num(rolling_mean(np.diff(gd), smw)))[s2:], 'b', label='velocity, smooth', alpha=0.5)
s.axvline(x=parameters.coherent_dots_duration[trial]+s2, c='b', lw=3)
s.set_title('trial ' + str(trial) + ' answer: ' + str(parameters.answer[trial]))
s.set_ylim([-5,5])
sn.despine(ax=s, offset=10)
pl.legend()
pl.tight_layout()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, dividing everything up into trials.
Step2: Again, plot some single trials to see what's up.
|
15,368 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
# Use the Colab's preinstalled TensorFlow 2.x
%tensorflow_version 2.x
except:
pass
!pip install fastavro
!pip install tensorflow-io==0.9.0
!pip install google-cloud-bigquery-storage
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
PROJECT_ID = "<YOUR PROJECT>" #@param {type:"string"}
! gcloud config set project $PROJECT_ID
%env GCLOUD_PROJECT=$PROJECT_ID
from __future__ import absolute_import, division, print_function, unicode_literals
import os
from six.moves import urllib
import tempfile
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import bigquery
from google.api_core.exceptions import GoogleAPIError
LOCATION = 'us'
# Storage directory
DATA_DIR = os.path.join(tempfile.gettempdir(), 'census_data')
# Download options.
DATA_URL = 'https://storage.googleapis.com/cloud-samples-data/ml-engine/census/data'
TRAINING_FILE = 'adult.data.csv'
EVAL_FILE = 'adult.test.csv'
TRAINING_URL = '%s/%s' % (DATA_URL, TRAINING_FILE)
EVAL_URL = '%s/%s' % (DATA_URL, EVAL_FILE)
DATASET_ID = 'census_dataset'
TRAINING_TABLE_ID = 'census_training_table'
EVAL_TABLE_ID = 'census_eval_table'
CSV_SCHEMA = [
bigquery.SchemaField("age", "FLOAT64"),
bigquery.SchemaField("workclass", "STRING"),
bigquery.SchemaField("fnlwgt", "FLOAT64"),
bigquery.SchemaField("education", "STRING"),
bigquery.SchemaField("education_num", "FLOAT64"),
bigquery.SchemaField("marital_status", "STRING"),
bigquery.SchemaField("occupation", "STRING"),
bigquery.SchemaField("relationship", "STRING"),
bigquery.SchemaField("race", "STRING"),
bigquery.SchemaField("gender", "STRING"),
bigquery.SchemaField("capital_gain", "FLOAT64"),
bigquery.SchemaField("capital_loss", "FLOAT64"),
bigquery.SchemaField("hours_per_week", "FLOAT64"),
bigquery.SchemaField("native_country", "STRING"),
bigquery.SchemaField("income_bracket", "STRING"),
]
UNUSED_COLUMNS = ["fnlwgt", "education_num"]
def create_bigquery_dataset_if_necessary(dataset_id):
# Construct a full Dataset object to send to the API.
client = bigquery.Client(project=PROJECT_ID)
dataset = bigquery.Dataset(bigquery.dataset.DatasetReference(PROJECT_ID, dataset_id))
dataset.location = LOCATION
try:
dataset = client.create_dataset(dataset) # API request
return True
except GoogleAPIError as err:
if err.code != 409: # http_client.CONFLICT
raise
return False
def load_data_into_bigquery(url, table_id):
create_bigquery_dataset_if_necessary(DATASET_ID)
client = bigquery.Client(project=PROJECT_ID)
dataset_ref = client.dataset(DATASET_ID)
table_ref = dataset_ref.table(table_id)
job_config = bigquery.LoadJobConfig()
job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE
job_config.source_format = bigquery.SourceFormat.CSV
job_config.schema = CSV_SCHEMA
load_job = client.load_table_from_uri(
url, table_ref, job_config=job_config
)
print("Starting job {}".format(load_job.job_id))
load_job.result() # Waits for table load to complete.
print("Job finished.")
destination_table = client.get_table(table_ref)
print("Loaded {} rows.".format(destination_table.num_rows))
load_data_into_bigquery(TRAINING_URL, TRAINING_TABLE_ID)
load_data_into_bigquery(EVAL_URL, EVAL_TABLE_ID)
%%bigquery --use_bqstorage_api
SELECT * FROM `<YOUR PROJECT>.census_dataset.census_training_table` LIMIT 5
from tensorflow.python.framework import ops
from tensorflow.python.framework import dtypes
from tensorflow_io.bigquery import BigQueryClient
from tensorflow_io.bigquery import BigQueryReadSession
def transform_row(row_dict):
# Trim all string tensors
trimmed_dict = { column:
(tf.strings.strip(tensor) if tensor.dtype == 'string' else tensor)
for (column,tensor) in row_dict.items()
}
# Extract feature column
income_bracket = trimmed_dict.pop('income_bracket')
# Convert feature column to 0.0/1.0
income_bracket_float = tf.cond(tf.equal(tf.strings.strip(income_bracket), '>50K'),
lambda: tf.constant(1.0),
lambda: tf.constant(0.0))
return (trimmed_dict, income_bracket_float)
def read_bigquery(table_name):
tensorflow_io_bigquery_client = BigQueryClient()
read_session = tensorflow_io_bigquery_client.read_session(
"projects/" + PROJECT_ID,
PROJECT_ID, table_name, DATASET_ID,
list(field.name for field in CSV_SCHEMA
if not field.name in UNUSED_COLUMNS),
list(dtypes.double if field.field_type == 'FLOAT64'
else dtypes.string for field in CSV_SCHEMA
if not field.name in UNUSED_COLUMNS),
requested_streams=2)
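    # Read rows from BigQuery in parallel across the requested streams, then map
    # each row dict to a (features, label) pair.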
dataset = read_session.parallel_read_rows()
  transformed_ds = dataset.map(transform_row)
return transformed_ds
BATCH_SIZE = 32
training_ds = read_bigquery(TRAINING_TABLE_ID).shuffle(10000).batch(BATCH_SIZE)
eval_ds = read_bigquery(EVAL_TABLE_ID).batch(BATCH_SIZE)
def get_categorical_feature_values(column):
query = 'SELECT DISTINCT TRIM({}) FROM `{}`.{}.{}'.format(column, PROJECT_ID, DATASET_ID, TRAINING_TABLE_ID)
client = bigquery.Client(project=PROJECT_ID)
dataset_ref = client.dataset(DATASET_ID)
job_config = bigquery.QueryJobConfig()
query_job = client.query(query, job_config=job_config)
result = query_job.to_dataframe()
return result.values[:,0]
from tensorflow import feature_column
feature_columns = []
# numeric cols
for header in ['capital_gain', 'capital_loss', 'hours_per_week']:
feature_columns.append(feature_column.numeric_column(header))
# categorical cols
for header in ['workclass', 'marital_status', 'occupation', 'relationship',
'race', 'native_country', 'education']:
categorical_feature = feature_column.categorical_column_with_vocabulary_list(
header, get_categorical_feature_values(header))
categorical_feature_one_hot = feature_column.indicator_column(categorical_feature)
feature_columns.append(categorical_feature_one_hot)
# bucketized cols
age = feature_column.numeric_column('age')
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
Dense = tf.keras.layers.Dense
model = tf.keras.Sequential(
[
feature_layer,
Dense(100, activation=tf.nn.relu, kernel_initializer='uniform'),
Dense(75, activation=tf.nn.relu),
Dense(50, activation=tf.nn.relu),
Dense(25, activation=tf.nn.relu),
Dense(1, activation=tf.nn.sigmoid)
])
# Compile Keras model
model.compile(
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(training_ds, epochs=5)
loss, accuracy = model.evaluate(eval_ds)
print("Accuracy", accuracy)
sample_x = {
'age' : np.array([56, 36]),
'workclass': np.array(['Local-gov', 'Private']),
'education': np.array(['Bachelors', 'Bachelors']),
'marital_status': np.array(['Married-civ-spouse', 'Married-civ-spouse']),
'occupation': np.array(['Tech-support', 'Other-service']),
'relationship': np.array(['Husband', 'Husband']),
'race': np.array(['White', 'Black']),
'gender': np.array(['Male', 'Male']),
'capital_gain': np.array([0, 7298]),
'capital_loss': np.array([0, 0]),
'hours_per_week': np.array([40, 36]),
'native_country': np.array(['United-States', 'United-States'])
}
model.predict(sample_x)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: End-to-end example of the BigQuery TensorFlow reader
Step2: Authenticate.
Step3: Set the project ID.
Step4: Import the Python libraries and define constants.
Step5: Import the census data into BigQuery
Step6: Load the census data into BigQuery.
Step7: Verify the imported data.
Step8: Load the census data into a TensorFlow Dataset using the BigQuery reader
Step9: Define the feature columns
Step10: Build and train the model
Step11: Train the model.
Step12: Evaluate the model
Step13: Evaluate a few random samples.
|
15,369 | <ASSISTANT_TASK:>
Python Code:
from ipywidgets import interact, interact_manual
import ipywidgets
from matplotlib import pyplot as plt
%matplotlib inline
import cv2
import numpy as np
import math
%run 'Set-up.ipynb'
%run 'Loading scenes.ipynb'
loadSceneRelativeToClient('../scenes/Baxter_demo.ttt')
from pyrep.vrep.vrep import simxGetObjectOrientation, simxGetObjectHandle, simxGetFloatSignal
rclass='Baxter_base'
print('Loading class: {}'.format(rclass))
class Baxter_base:
def __init__(self, api: VRep):
self._api = api
self._joint1 = api.joint.with_position_control("Baxter_leftArm_joint4")
#self._sensor_ultrasonic_left = api.sensor.proximity("Pioneer_p3dx_ultrasonicSensor3")
#self._sensor_ultrasonic_right = api.sensor.proximity("Pioneer_p3dx_ultrasonicSensor6")
res, self._handle = simxGetObjectHandle(self.id, 'Baxter', vrep.simx_opmode_oneshot_wait)
self.joints= self._joints()
        self.sensors = self._sensors()
self.handles = self._introspect()
self.names_by_handles = {self.handles[k]:k for k in self.handles}
def _get_handle(self,name):
res, handle=vrep.simxGetObjectHandle(self.id,
name,
vrep.simx_opmode_blocking)
return handle
def _introspect(self):
#http://galvanicloop.com/blog/post/7/quadruped-robot-5-simulation-on-v-rep
errorCode, handles, intData, \
floatData, array = vrep.simxGetObjectGroupData(self.id,
vrep.sim_appobj_object_type,
0,
vrep.simx_opmode_oneshot_wait)
return dict(zip(array, handles))
def _joints(self):
j = self._introspect()
#Add arm joints
joints={k: j[k] for k in j if 'joint' in k}
#Add monitor joint
joints['Baxter_monitorJoint']=j['Baxter_monitorJoint']
return joints
def _sensors(self):
s = self._introspect()
        return {k: s[k] for k in s if 'ensor' in k}
def get_joint_angle(self,jointname, degrees=False):
handle=self.joints[jointname]
res, pos = vrep.simxGetJointPosition(self.id,
handle,
vrep.simx_opmode_blocking)
if degrees:
pos = pos * 180 / math.pi
return pos
def joint_angles(self, degrees=False):
ja = {}
for j in sorted(self.joints):
pos = self.get_joint_angle(j,degrees=degrees)
ja[j] = pos
return ja
'''
def get_orientation(self):
#http://www.coppeliarobotics.com/helpFiles/en/remoteApiFunctions.htm#simxGetObjectOrientation
#Returns a value between +/-pi
return simxGetObjectOrientation(self.id, self._handle, -1, v.simx_opmode_streaming)[1]
'''
print('This is a base class for the {} model\n'.format(eval(rclass).__name__ ))
rclass='Baxter'
print('Loading class: {}'.format(rclass))
class Baxter(Baxter_base):
def __init__(self, api: VRep):
self._api = api
self.id = api._id
tmp1,tmp2=self.get_coords_left_tip(True),self.get_coords_right_tip(True)
#Inherit init settings from parent class
super(Baxter, self).__init__(api)
def set_joint_angle(self, joint_name, angle):
''' Set the joint angle of a joint referred to by joint name '''
#The joint angle is set by reference to the joint handle
#Look-up the joint handle from the joint name
handle=self.joints[joint_name]
#res,handle = vrep.simxGetObjectHandle(self.id,'Baxter_rightArm_joint4',vrep.simx_opmode_oneshot_wait);
#Set the joint angle
vrep.simxSetJointTargetPosition(self.id,
handle,
angle,
vrep.simx_opmode_oneshot);
def _get_coords_tip(self,arm,init=False):
#simx_opmode_streaming (the first call) thence simx_opmode_buffer
handle=self._get_handle('Baxter_{}Arm_tip'.format(arm))
if init: mode= vrep.simx_opmode_buffer
else: mode =vrep.simx_opmode_streaming
res,pos=vrep.simxGetObjectPosition(self.id,handle,
-1, mode)
return pos
def get_coords_left_tip(self, init=False):
return self._get_coords_tip('left', init)
def get_coords_right_tip(self,init=False):
return self._get_coords_tip('right', init)
def get_vision_sensor_image(self, vision_sensor_name):
#http://www.forum.coppeliarobotics.com/viewtopic.php?f=9&t=7012&p=27786
res, v1 = vrep.simxGetObjectHandle(self.id, vision_sensor_name, vrep.simx_opmode_oneshot_wait)
err, resolution, image = vrep.simxGetVisionSensorImage(self.id, v1, 0, vrep.simx_opmode_streaming)
img=None
while err!=vrep.simx_return_ok:#(vrep.simxGetConnectionId(clientID) != -1):
err, resolution, image = vrep.simxGetVisionSensorImage(self.id, v1, 0, vrep.simx_opmode_buffer)
if err == vrep.simx_return_ok:
#print("image OK!!!")
img = np.array(image,dtype=np.uint8)
#
img.resize([resolution[1],resolution[0],3])
#For some reason the image is upside down unless we flip it?
img = cv2.flip(img,0)
ok=False
elif err == vrep.simx_return_novalue_flag:
#print("no image yet")
pass
else:
print(err)
return img
def get_ultrasonic_sensor_reading(self, ultrasonic_sensor_number):
if '{}'.format(ultrasonic_sensor_number).isdigit() and int(ultrasonic_sensor_number) > 0 and int(ultrasonic_sensor_number)<13:
ultrasonic_sensor_number=int(ultrasonic_sensor_number)
else: return "Not a valid input: expecting int in range 1..12"
handle = self._get_handle('Baxter_ultrasonic_sensor{}'.format(ultrasonic_sensor_number))
err, detectionState,detectedPoint,detectedObjectHandle,detectedSurfaceNormalVector=vrep.simxReadProximitySensor(self.id,handle,vrep.simx_opmode_streaming)
while err!=vrep.simx_return_ok:
err, detectionState,detectedPoint,detectedObjectHandle,detectedSurfaceNormalVector=vrep.simxReadProximitySensor(self.id,handle,vrep.simx_opmode_buffer)
if not detectionState:
return False
distance=math.sqrt(detectedPoint[0]*detectedPoint[0]+detectedPoint[1]*detectedPoint[1]+detectedPoint[2]*detectedPoint[2])
return distance, self.names_by_handles[detectedObjectHandle], detectedPoint, detectedSurfaceNormalVector
methods = [method for method in dir(eval(rclass)) if not method.startswith('_')]
print('Methods available in {}:\n\t{}\n'.format(eval(rclass).__name__ , '\n\t'.join(methods)))
from pyrep import VRep
from pyrep.vrep import vrep as vrep
#Ensure there are no outstanding simulations running
vrep.simxFinish(-1)
#Open connection to the simulator
api=VRep.connect("127.0.0.1", 19997)
#Start the simulation
api.simulation.start()
#Create a Python object to represent the simulated robot
r = Baxter(api)
plt.imshow( r.get_vision_sensor_image('Baxter_rightArm_camera') );
plt.imshow( r.get_vision_sensor_image('Baxter_leftArm_camera') );
r.joint_angles()
r.joint_angles(degrees=True)
def f(j, x):
r.set_joint_angle(j,x)
interact_manual(f, j=['Baxter_leftArm_joint1',
'Baxter_leftArm_joint2',
'Baxter_monitorJoint'],
x=(-2,2,0.2));
posDisplay = ipywidgets.Text()
for j in r.joints:
    exec("""
def {j}({js}):
    r.set_joint_angle('{j}',{js})
    posDisplay.value=','.join([str(x) for x in r.get_coords_left_tip()])+ ','.join([str(x) for x in r.get_coords_right_tip()])
interact({j}, {js}=(-3.5,3.5,0.2))
""".format(j=j, js='_'.join(j.split('_')[1:]).replace('Arm_joint','')))
posDisplay
','.join([str(x) for x in r.get_coords_left_tip()])+ ','.join([str(x) for x in r.get_coords_right_tip()])
r.get_coords_left_tip(), r.get_coords_right_tip()
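# Sweep each joint in small increments to find the limits it will actually move to,
# recording a (min, max) pair per joint.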
joints_range={}
for j in r.joints:
r.set_joint_angle(j,0)
for j in r.joints:
joint_min=999
joint_max=-999
joint_curr=0
print('Looking for max {}...'.format(j))
r.set_joint_angle(j,0)
if j.endswith('joint4'):
r.set_joint_angle(j.replace('4','2'),-1)
while True:
joint_curr=r.get_joint_angle(j,True)
if joint_curr>joint_max:
joint_max=joint_curr
r.set_joint_angle(j,joint_curr+0.1)
time.sleep(0.1)
else:
r.set_joint_angle(j,0)
break
print('Looking for min {}...'.format(j))
while True:
joint_curr=r.get_joint_angle(j,True)
if joint_curr<joint_min:
joint_min=joint_curr
r.set_joint_angle(j,joint_curr-0.1)
time.sleep(0.1)
else:
r.set_joint_angle(j,0)
break
joints_range[j]=(joint_min,joint_max)
joints_range
for j in joints_range:
min_joint,max_joint=joints_range[j]
print('{}: ({}, {})'.format(j, min_joint * 180 / math.pi,
max_joint * 180 / math.pi))
r.get_ultrasonic_sensor_reading(3)
#Stop the simulation
api.simulation.stop()
#Close the scene
err = vrep.simxCloseScene(api.simulation._id,vrep.simx_opmode_blocking)
if err == vrep.simx_return_ok: print('Scene closed ok...')
#Close the connection to the simulator
api.close_connection()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in the desired scene, in this case, containing an instance of the Baxter robot.
Step2: Cameras
Step3: Grab a Snapshot from the Cameras
Step4: Controlling Baxter's Joints
Step5: We can also get the joint angles in degrees
Step6: The joint angles in the model can be set with reference to the joint angle name.
Step8: We can iterate through the joint names to create a widget for each one
Step9: Find Joint Angle Range
Step10: Activity
Step11: Ultrasound Sensors
Step12: Stop the Simulation and Close the Simulator Connection
|
15,370 | <ASSISTANT_TASK:>
Python Code:
from pprint import pprint  # needed for pprint(am.specs) below
# `Hero` is assumed to be imported from the hero library this notebook demonstrates
am = Hero(1)
# You can use attributes to get some hero properties which depends on lvl, examples:
print('Anti-Mage stats on lvl {}'.format(am.lvl))
print('\tstrength = {}'.format(am.str))
print('\tagility = {}'.format(am.agi))
print('\tintellect = {}'.format(am.int))
print('\t...')
sf = Hero.from_name('Shadow Fiend')
sf.in_game_name
pprint(am.specs)
am.get_role()
print(am.get_description(include=['laning', 'role', 'name']))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Class attributes
Step2: Also you can get all the specs. This is the dictionary with a lot of parameters.
Step3: Getting info
Step4: If you want to combine few descriptions -- Hero.get_description() is the way to go.
|
15,371 | <ASSISTANT_TASK:>
Python Code:
import rebound
import reboundx
import numpy as np
sim = rebound.Simulation()
sim.G = 6.674e-11 # SI units
sim.dt = 1.e4 # Initial timestep in sec.
sim.N_active = 2 # Make it so dust particles don't interact with one another gravitationally
sim.add(m=1.99e30, hash="Sun") # add Sun with mass in kg
sim.add(m=5.68e26, a=1.43e12, e=0.056, pomega = 0., f=0., hash="Saturn") # Add Saturn at pericenter
ps = sim.particles
rebx = reboundx.Extras(sim)
rf = rebx.load_force("radiation_forces")
rebx.add_force(rf)
rf.params["c"] = 3.e8
ps["Sun"].params["radiation_source"] = 1
a = 1.3e10 # in meters
e = 0.16
inc = 175*np.pi/180.
Omega = 0. # longitude of node
omega = 0. # argument of pericenter
f = 0. # true anomaly
# Add two dust grains with the same orbit
sim.add(primary=ps["Saturn"], a=a, e=e, inc=inc, Omega=Omega, omega=omega, f=f, hash="p1")
sim.add(primary=ps["Saturn"], a=a, e=e, inc=inc, Omega=Omega, omega=omega, f=f, hash="p2")
ps["p1"].params["beta"] = 0.01
grain_radius = 1.e-5 # grain radius in m
density = 1000. # kg/m^3 = 1g/cc
Q_pr = 1.
luminosity = 3.85e26 # Watts
ps["p2"].params["beta"] = rebx.rad_calc_beta(sim.G, rf.params["c"], ps[0].m, luminosity, grain_radius, density, Q_pr)
print("Particle 2's beta parameter = {0}".format(ps["p2"].params["beta"]))
yr = 365*24*3600 # s
Noutput = 1000
times = np.linspace(0,100.*yr, Noutput)
e1, e2 = np.zeros(Noutput), np.zeros(Noutput)
sim.move_to_com() # move to center of mass frame first
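# Integrate for 100 years, recording each grain's orbital eccentricity relative to
# Saturn at every output time.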
for i, time in enumerate(times):
sim.integrate(time)
e1[i] = ps["p1"].calculate_orbit(primary=ps["Saturn"]).e
e2[i] = ps["p2"].calculate_orbit(primary=ps["Saturn"]).e
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(15,5))
ax.plot(times/yr, e1, label=r"$\beta$={0:.1e}".format(ps["p1"].params["beta"]))
ax.plot(times/yr, e2, label=r"$\beta$={0:.1e}".format(ps["p2"].params["beta"]))
ax.set_xlabel('Time (yrs)', fontsize=24)
ax.set_ylabel('Eccentricity', fontsize=24)
plt.legend(fontsize=24)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's set up REBOUNDx and add radiation_forces. We also have to set the speed of light in the units we want to use.
Step2: By default, the radiation_forces effect assumes the particle at index 0 is the source of the radiation. If you'd like to use a different one, or it's possible that the radiation source might move to a different index (e.g. with a custom merger routine), you can add a radiation_source flag to the appropriate particle like this
Step3: Here we show how to add two dust grains to the simulation in different ways. Let's first initialize their orbits. In both cases we use the orbital elements of Saturn's irregular satellite Phoebe, which the dust grains will inherit upon release (Tamayo et al. 2011). Since the dust grains don't interact with one another, putting them on top of each other is OK.
Step4: Now we add the grains' physical properties. In order for particles to feel radiation forces, we have to set their beta parameter. $\beta$ is the ratio of the radiation force to the gravitational force from the star (Burns et al. 1979). One can either set it directly
Step5: or we can calculate it from more fundamental parameters. REBOUNDx has a convenience function that takes the gravitational constant, speed of light, radiation source's mass and luminosity, and then the grain's physical radius, bulk density, and radiation pressure coefficient Q_pr (Burns et al. 1979, equals 1 in the limit that the grain size is >> the radiation's wavelength).
Step6: Now let's run for 100 years (about 3 Saturn orbits), and look at how the eccentricity varies over a Saturn year
|
15,372 | <ASSISTANT_TASK:>
Python Code:
!pip install -q git+https://github.com/pyro-ppl/numpyro.git
import jax
import jax.numpy as jnp
import matplotlib.pyplot as plt
import numpy as np
import numpyro
import numpyro.distributions as dist
from jax import lax, random
from jax.scipy.special import ndtr, ndtri
from jax.scipy.stats import poisson, norm
from numpyro.distributions import (
constraints,
Distribution,
FoldedDistribution,
SoftLaplace,
StudentT,
TruncatedDistribution,
TruncatedNormal,
)
from numpyro.distributions.util import promote_shapes
from numpyro.infer import DiscreteHMCGibbs, MCMC, NUTS, Predictive
from scipy.stats import poisson as sp_poisson
numpyro.enable_x64()
RNG = random.PRNGKey(0)
PRIOR_RNG, MCMC_RNG, PRED_RNG = random.split(RNG, 3)
MCMC_KWARGS = dict(
num_warmup=2000,
num_samples=2000,
num_chains=4,
chain_method="sequential",
)
def truncated_normal_model(num_observations, high, x=None):
loc = numpyro.sample("loc", dist.Normal())
scale = numpyro.sample("scale", dist.LogNormal())
with numpyro.plate("observations", num_observations):
numpyro.sample("x", TruncatedNormal(loc, scale, high=high), obs=x)
high = 1.2
num_observations = 250
num_prior_samples = 100
prior = Predictive(truncated_normal_model, num_samples=num_prior_samples)
prior_samples = prior(PRIOR_RNG, num_observations, high)
# -- select an arbitrary prior sample as true data
true_idx = 0
true_loc = prior_samples["loc"][true_idx]
true_scale = prior_samples["scale"][true_idx]
true_x = prior_samples["x"][true_idx]
plt.hist(true_x.copy(), bins=20)
plt.axvline(high, linestyle=":", color="k")
plt.xlabel("x")
plt.show()
# --- Run MCMC and check estimates and diagnostics
mcmc = MCMC(NUTS(truncated_normal_model), **MCMC_KWARGS)
mcmc.run(MCMC_RNG, num_observations, high, true_x)
mcmc.print_summary()
# --- Compare to ground truth
print(f"True loc : {true_loc:3.2}")
print(f"True scale: {true_scale:3.2}")
pred = Predictive(truncated_normal_model, posterior_samples=mcmc.get_samples())
pred_samples = pred(PRED_RNG, num_observations, high=float("inf"))
# thin the samples to not saturate matplotlib
samples_thinned = pred_samples["x"].ravel()[::1000]
f, axes = plt.subplots(1, 2, figsize=(15, 5), sharex=True)
axes[0].hist(
samples_thinned.copy(), label="Untruncated posterior", bins=20, density=True
)
axes[0].set_title("Untruncated posterior")
vals, bins, _ = axes[1].hist(
samples_thinned[samples_thinned < high].copy(),
label="Tail of untruncated posterior",
bins=10,
density=True,
)
axes[1].hist(
true_x.copy(), bins=bins, label="Observed, truncated data", density=True, alpha=0.5
)
axes[1].set_title("Comparison to observed data")
for ax in axes:
ax.axvline(high, linestyle=":", color="k", label="Truncation point")
ax.legend()
plt.show()
def TruncatedSoftLaplace(
loc=0.0, scale=1.0, *, low=None, high=None, validate_args=None
):
return TruncatedDistribution(
base_dist=SoftLaplace(loc, scale),
low=low,
high=high,
validate_args=validate_args,
)
def truncated_soft_laplace_model(num_observations, high, x=None):
loc = numpyro.sample("loc", dist.Normal())
scale = numpyro.sample("scale", dist.LogNormal())
with numpyro.plate("obs", num_observations):
numpyro.sample("x", TruncatedSoftLaplace(loc, scale, high=high), obs=x)
high = 2.3
num_observations = 200
num_prior_samples = 100
prior = Predictive(truncated_soft_laplace_model, num_samples=num_prior_samples)
prior_samples = prior(PRIOR_RNG, num_observations, high)
true_idx = 0
true_x = prior_samples["x"][true_idx]
true_loc = prior_samples["loc"][true_idx]
true_scale = prior_samples["scale"][true_idx]
mcmc = MCMC(
NUTS(truncated_soft_laplace_model),
**MCMC_KWARGS,
)
mcmc.run(
MCMC_RNG,
num_observations,
high,
true_x,
)
mcmc.print_summary()
print(f"True loc : {true_loc:3.2}")
print(f"True scale: {true_scale:3.2}")
def FoldedStudentT(df, loc=0.0, scale=1.0):
return FoldedDistribution(StudentT(df, loc=loc, scale=scale))
def folded_student_model(num_observations, x=None):
df = numpyro.sample("df", dist.Gamma(6, 2))
loc = numpyro.sample("loc", dist.Normal())
scale = numpyro.sample("scale", dist.LogNormal())
with numpyro.plate("obs", num_observations):
numpyro.sample("x", FoldedStudentT(df, loc, scale), obs=x)
# --- prior sampling
num_observations = 500
num_prior_samples = 100
prior = Predictive(folded_student_model, num_samples=num_prior_samples)
prior_samples = prior(PRIOR_RNG, num_observations)
# --- choose any prior sample as the ground truth
true_idx = 0
true_df = prior_samples["df"][true_idx]
true_loc = prior_samples["loc"][true_idx]
true_scale = prior_samples["scale"][true_idx]
true_x = prior_samples["x"][true_idx]
# --- do inference with MCMC
mcmc = MCMC(
NUTS(folded_student_model),
**MCMC_KWARGS,
)
mcmc.run(MCMC_RNG, num_observations, true_x)
# --- Check diagnostics
mcmc.print_summary()
# --- Compare to ground truth:
print(f"True df : {true_df:3.2f}")
print(f"True loc : {true_loc:3.2f}")
print(f"True scale: {true_scale:3.2f}")
class _RightExtendedReal(constraints.Constraint):
    """Any number in the interval (-inf, inf]."""
def __call__(self, x):
return (x == x) & (x != float("-inf"))
def feasible_like(self, prototype):
return jnp.zeros_like(prototype)
right_extended_real = _RightExtendedReal()
class RightTruncatedNormal(Distribution):
    """A truncated Normal distribution.

    :param numpy.ndarray loc: location parameter of the untruncated normal
    :param numpy.ndarray scale: scale parameter of the untruncated normal
    :param numpy.ndarray high: point at which the truncation happens
    """
arg_constraints = {
"loc": constraints.real,
"scale": constraints.positive,
"high": right_extended_real,
}
reparametrized_params = ["loc", "scale", "high"]
def __init__(self, loc=0.0, scale=1.0, high=float("inf"), validate_args=True):
batch_shape = lax.broadcast_shapes(
jnp.shape(loc),
jnp.shape(scale),
jnp.shape(high),
)
self.loc, self.scale, self.high = promote_shapes(loc, scale, high)
super().__init__(batch_shape, validate_args=validate_args)
def log_prob(self, value):
log_m = norm.logcdf(self.high, self.loc, self.scale)
log_p = norm.logpdf(value, self.loc, self.scale)
return jnp.where(value < self.high, log_p - log_m, -jnp.inf)
def sample(self, key, sample_shape=()):
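        # Inverse-transform sampling: draw u in (0, 1] (minval avoids exactly 0)
        # and map it through the truncated inverse CDF.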
shape = sample_shape + self.batch_shape
minval = jnp.finfo(jnp.result_type(float)).tiny
u = random.uniform(key, shape, minval=minval)
return self.icdf(u)
def icdf(self, u):
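        # Compress u by the CDF mass below `high`, then invert the standard normal
        # CDF so every sample falls below the truncation point.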
m = norm.cdf(self.high, self.loc, self.scale)
return self.loc + self.scale * ndtri(m * u)
@constraints.dependent_property
def support(self):
return constraints.less_than(self.high)
def truncated_normal_model(num_observations, x=None):
loc = numpyro.sample("loc", dist.Normal())
scale = numpyro.sample("scale", dist.LogNormal())
high = numpyro.sample("high", dist.Normal())
with numpyro.plate("observations", num_observations):
numpyro.sample("x", RightTruncatedNormal(loc, scale, high), obs=x)
num_observations = 1000
num_prior_samples = 100
prior = Predictive(truncated_normal_model, num_samples=num_prior_samples)
prior_samples = prior(PRIOR_RNG, num_observations)
true_idx = 0
true_loc = prior_samples["loc"][true_idx]
true_scale = prior_samples["scale"][true_idx]
true_high = prior_samples["high"][true_idx]
true_x = prior_samples["x"][true_idx]
plt.hist(true_x.copy())
plt.axvline(true_high, linestyle=":", color="k")
plt.xlabel("x")
plt.show()
mcmc = MCMC(NUTS(truncated_normal_model), **MCMC_KWARGS)
mcmc.run(MCMC_RNG, num_observations, true_x)
mcmc.print_summary()
print(f"True high : {true_high:3.2f}")
print(f"True loc : {true_loc:3.2f}")
print(f"True scale: {true_scale:3.2f}")
def truncated_normal_model_2(num_observations, x=None):
loc = numpyro.sample("loc", dist.Normal())
scale = numpyro.sample("scale", dist.LogNormal())
if x is None:
high = numpyro.sample("high", dist.Normal())
else:
# high is greater or equal to the max value in x:
delta = numpyro.sample("delta", dist.HalfNormal())
high = numpyro.deterministic("high", delta + x.max())
with numpyro.plate("observations", num_observations):
numpyro.sample("x", RightTruncatedNormal(loc, scale, high), obs=x)
mcmc = MCMC(NUTS(truncated_normal_model_2), **MCMC_KWARGS)
mcmc.run(MCMC_RNG, num_observations, true_x)
mcmc.print_summary(exclude_deterministic=False)
model_without_truncation = numpyro.handlers.condition(
truncated_normal_model,
{"high": float("inf")},
)
estimates = mcmc.get_samples().copy()
estimates.pop("high") # Drop to make sure these are not used
pred = Predictive(
model_without_truncation,
posterior_samples=estimates,
)
pred_samples = pred(PRED_RNG, num_observations=1000)
# thin the samples for a faster histogram
samples_thinned = pred_samples["x"].ravel()[::1000]
f, axes = plt.subplots(1, 2, figsize=(15, 5))
axes[0].hist(
samples_thinned.copy(), label="Untruncated posterior", bins=20, density=True
)
axes[0].axvline(true_high, linestyle=":", color="k", label="Truncation point")
axes[0].set_title("Untruncated posterior")
axes[0].legend()
axes[1].hist(
samples_thinned[samples_thinned < true_high].copy(),
label="Tail of untruncated posterior",
bins=20,
density=True,
)
axes[1].hist(true_x.copy(), label="Observed, truncated data", density=True, alpha=0.5)
axes[1].axvline(true_high, linestyle=":", color="k", label="Truncation point")
axes[1].set_title("Comparison to observed data")
axes[1].legend()
plt.show()
def scipy_truncated_poisson_icdf(args): # Note: all arguments are passed inside a tuple
rate, low, u = args
rate = np.asarray(rate)
low = np.asarray(low)
u = np.asarray(u)
density = sp_poisson(rate)
low_cdf = density.cdf(low - 1)
normalizer = 1.0 - low_cdf
x = normalizer * u + low_cdf
return density.ppf(x)
class LeftTruncatedPoisson(Distribution):
    """A truncated Poisson distribution.

    :param numpy.ndarray low: lower bound at which truncation happens
    :param numpy.ndarray rate: rate of the Poisson distribution.
    """
arg_constraints = {
"low": constraints.nonnegative_integer,
"rate": constraints.positive,
}
def __init__(self, rate=1.0, low=0, validate_args=None):
batch_shape = lax.broadcast_shapes(jnp.shape(low), jnp.shape(rate))
self.low, self.rate = promote_shapes(low, rate)
super().__init__(batch_shape, validate_args=validate_args)
def log_prob(self, value):
m = 1 - poisson.cdf(self.low - 1, self.rate)
log_p = poisson.logpmf(value, self.rate)
return jnp.where(value >= self.low, log_p - jnp.log(m), -jnp.inf)
def sample(self, key, sample_shape=()):
shape = sample_shape + self.batch_shape
float_type = jnp.result_type(float)
minval = jnp.finfo(float_type).tiny
u = random.uniform(key, shape, minval=minval)
# return self.icdf(u) # Brute force
# return self.icdf_faster(u) # For faster sampling.
return self.icdf_scipy(u) # Using `host_callback`
def icdf(self, u):
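        # Brute-force inverse CDF: start each sample at `low` and keep incrementing
        # it until its truncated CDF reaches the drawn uniform value.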
def cond_fn(val):
n, cdf = val
return jnp.any(cdf < u)
def body_fn(val):
n, cdf = val
n_new = jnp.where(cdf < u, n + 1, n)
return n_new, self.cdf(n_new)
low = self.low * jnp.ones_like(u)
cdf = self.cdf(low)
n, _ = lax.while_loop(cond_fn, body_fn, (low, cdf))
return n.astype(jnp.result_type(int))
def icdf_faster(self, u):
num_bins = 200 # Choose a reasonably large value
bins = jnp.arange(num_bins)
cdf = self.cdf(bins)
indices = jnp.searchsorted(cdf, u)
return bins[indices]
def icdf_scipy(self, u):
result_shape = jax.ShapeDtypeStruct(u.shape, jnp.result_type(float))
result = jax.experimental.host_callback.call(
scipy_truncated_poisson_icdf,
(self.rate, self.low, u),
result_shape=result_shape,
)
return result.astype(jnp.result_type(int))
def cdf(self, value):
m = 1 - poisson.cdf(self.low - 1, self.rate)
f = poisson.cdf(value, self.rate) - poisson.cdf(self.low - 1, self.rate)
return jnp.where(value >= self.low, f / m, 0)
@constraints.dependent_property(is_discrete=True)
def support(self):
return constraints.integer_greater_than(self.low - 1)
def discrete_distplot(samples, ax=None, **kwargs):
    """Utility function for plotting the samples as a barplot."""
x, y = np.unique(samples, return_counts=True)
y = y / sum(y)
if ax is None:
ax = plt.gca()
ax.bar(x, y, **kwargs)
return ax
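# Model: a uniform categorical prior over the integer truncation point (0..4) and a
# log-normal prior over the Poisson rate.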
def truncated_poisson_model(num_observations, x=None):
low = numpyro.sample("low", dist.Categorical(0.2 * jnp.ones((5,))))
rate = numpyro.sample("rate", dist.LogNormal(1, 1))
with numpyro.plate("observations", num_observations):
numpyro.sample("x", LeftTruncatedPoisson(rate, low), obs=x)
# -- prior samples
num_observations = 1000
num_prior_samples = 100
prior = Predictive(truncated_poisson_model, num_samples=num_prior_samples)
prior_samples = prior(PRIOR_RNG, num_observations)
def truncated_poisson_model(num_observations, x=None, k=5):
zeros = jnp.zeros((k,))
low = numpyro.sample("low", dist.Categorical(logits=zeros))
rate = numpyro.sample("rate", dist.LogNormal(1, 1))
with numpyro.plate("observations", num_observations):
numpyro.sample("x", LeftTruncatedPoisson(rate, low), obs=x)
# Take any prior sample as the true process.
true_idx = 6
true_low = prior_samples["low"][true_idx]
true_rate = prior_samples["rate"][true_idx]
true_x = prior_samples["x"][true_idx]
discrete_distplot(true_x.copy());
mcmc = MCMC(DiscreteHMCGibbs(NUTS(truncated_poisson_model)), **MCMC_KWARGS)
mcmc.run(MCMC_RNG, num_observations, true_x, k=true_x.min() + 1)
mcmc.print_summary()
true_rate
model_with_known_low = numpyro.handlers.condition(
truncated_poisson_model, {"low": true_low}
)
mcmc = MCMC(
NUTS(model_with_known_low),
**MCMC_KWARGS,
)
mcmc.run(MCMC_RNG, num_observations, true_x)
mcmc.print_summary()
model_without_truncation = numpyro.handlers.condition(
truncated_poisson_model,
{"low": 0},
)
pred = Predictive(model_without_truncation, posterior_samples=mcmc.get_samples())
pred_samples = pred(PRED_RNG, num_observations)
thinned_samples = pred_samples["x"][::500]
discrete_distplot(thinned_samples.copy());
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. What are truncated distributions?
Step2: Let's now check that we can use this model in a typical MCMC workflow.
Step3: Inference
Step4: Removing the truncation
Step5: Let's finally plot these samples and compare them to the original, observed data.
Step6: The plot on the left shows data simulated from the posterior distribution with the truncation removed, so we are able to see how the data would look like if it were not truncated. To sense check this, we discard the simulated samples that are above the truncation point and make histogram of those and compare it to a histogram of the true data (right plot).
Step7: And, as before, we check that we can use this model in the steps of a typical workflow
Step8: Important
Step9: And we check that we can use our distribution in a typical workflow
Step12: 5. Building your own truncated distribution <a class="anchor" id="5"></a>
Step14: Let's try it out!
Step15: As before, we run mcmc against some synthetic data.
Step16: Run MCMC and check the estimates
Step17: Compare estimates against the ground truth
Step18: Note that, even though we can recover good estimates for the true values,
Step19: And the divergences are gone.
Step21: 5.3 Example
Step23: Let's try it out!
Step24: Prior samples
Step25: Inference
Step26: To do inference, we set k = x.min() + 1. Note also the use of DiscreteHMCGibbs
Step27: As before, one needs to be extra careful when estimating the truncation point.
Step28: And note we can use NUTS directly because there's no need to infer any discrete parameters.
Step29: Removing the truncation
|
15,373 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
from collections import Counter
total_counts = Counter([i for idx, row in reviews.iterrows() for i in row[0].split(' ')])
print("Total words in data set: ", len(total_counts))
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
print(vocab[-1], ': ', total_counts[vocab[-1]])
word2idx = {}
for index, word in enumerate(vocab):
word2idx[word] = index
def text_to_vector(text):
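    # Build a bag-of-words count vector over the 10,000-word vocabulary; words
    # outside the vocabulary are ignored.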
    word_vectors = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(" "):
idx = word2idx.get(word, None)
if idx == None:
continue
else:
word_vectors[idx] += 1
return word_vectors
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, len(vocab)]) # Input
net = tflearn.fully_connected(net, 200, activation='ReLU') # Hidden1
net = tflearn.fully_connected(net, 50, activation='ReLU') # Hidden2
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=50)
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparing the data
Step2: Counting word frequency
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Step6: Text to vector function
Step7: If you do this right, the following code should return
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Step10: Building the network
Step11: Intializing the model
Step12: Training the network
Step13: Testing
Step14: Try out your own text!
|
15,374 | <ASSISTANT_TASK:>
Python Code:
import os
import sys
import pickle
import numpy as np
from scipy.optimize import curve_fit
import seaborn.apionly as sns
import matplotlib.pyplot as plt
from matplotlib import ticker
sys.path.append(os.path.join(os.environ['EXP_DIR'],'EBTEL_analysis/src'))
import em_binner as emb
%matplotlib inline
plt.rcParams.update({'figure.figsize' : [8,8]})
with open(__depends__[0],'rb') as f:
ebtel_results = pickle.load(f)
with open(__depends__[1],'rb') as f:
hydrad_results = pickle.load(f)
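# Fit a straight line to log10(EM) versus log10(T) (i.e. a power law) over the
# temperature range [Ta, Tb].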
Ta = np.log10(6e+6)
Tb = np.log10(10e+6)
def pl_func(x,a,b):
return a + b*x
tau = [20,40,200,500]
fig = plt.figure()
ax = fig.gca()
for i in range(len(ebtel_results)):
#EBTEL
binner = emb.EM_Binner(2.*ebtel_results[i]['loop_length'],time=ebtel_results[i]['t'],temp=ebtel_results[i]['T'],
density=ebtel_results[i]['n'])
binner.build_em_dist()
hist,bin_edges = np.histogram(binner.T_em_flat,bins=binner.T_em_histo_bins,weights=np.array(binner.em_flat))
ax.plot((bin_edges[:-1]+bin_edges[1:])/2,hist/10,color=sns.color_palette('deep')[i],
linestyle='solid',label=r'$\tau=%d$ $\mathrm{s}$'%tau[i])
#Curve Fitting
logT = np.log10((bin_edges[:-1]+bin_edges[1:])/2)
logem = np.log10(hist/10)
T_fit = logT[(logT>=Ta) & (logT<=Tb)]
em_fit = logem[(logT>=Ta) & (logT<=Tb)]
try:
popt,pcov = curve_fit(pl_func,T_fit,em_fit)
print('Value of the slope for %s is b=%f'%(r'$\tau=%d$ $\mathrm{s}$'%tau[i],popt[1]))
except ValueError:
print('Cannot find fit for %s'%(r'$\tau=%d$ $\mathrm{s}$'%tau[i]))
#HYDRAD
binner = emb.EM_Binner(2.*ebtel_results[i]['loop_length'],time=hydrad_results['time'],
temp=hydrad_results['single']['tau%ds'%tau[i]]['Te'],
density=hydrad_results['single']['tau%ds'%tau[i]]['n'])
binner.build_em_dist()
hist,bin_edges = np.histogram(binner.T_em_flat,bins=binner.T_em_histo_bins,weights=np.array(binner.em_flat))
ax.plot((bin_edges[:-1]+bin_edges[1:])/2,hist/10,color=sns.color_palette('deep')[i],linestyle='dotted')
#aesthetics
#scale
ax.set_yscale('log')
ax.set_xscale('log')
#limits
ax.set_ylim([1e+23,1e+28])
ax.set_xlim([10**5.5,10**7.5])
#ticks
#y
ax.yaxis.set_major_locator(ticker.LogLocator(numticks=5))
#labels
ax.set_xlabel(r'$T\,\,\mathrm{(K)}$')
ax.set_ylabel(r'$\mathrm{EM}\,\,(\mathrm{cm}^{-5})$')
#legend
ax.legend(loc=2)
#save
plt.savefig(__dest__[0])
plt.show()
fig = plt.figure()
ax = fig.gca()
for i in range(len(ebtel_results)):
#EBTEL
binner = emb.EM_Binner(2.*ebtel_results[i]['loop_length'],time=ebtel_results[i]['te'],temp=ebtel_results[i]['Tee'],
density=ebtel_results[i]['ne'])
binner.build_em_dist()
hist,bin_edges = np.histogram(binner.T_em_flat,bins=binner.T_em_histo_bins,weights=np.array(binner.em_flat))
ax.plot((bin_edges[:-1]+bin_edges[1:])/2,hist/10,color=sns.color_palette('deep')[i],
linestyle='solid',label=r'$\tau=%d$ $\mathrm{s}$'%tau[i])
#HYDRAD
binner = emb.EM_Binner(2.*ebtel_results[i]['loop_length'],time=hydrad_results['time'],
temp=hydrad_results['electron']['tau%ds'%tau[i]]['Te'],
density=hydrad_results['electron']['tau%ds'%tau[i]]['n'])
binner.build_em_dist()
hist,bin_edges = np.histogram(binner.T_em_flat,bins=binner.T_em_histo_bins,weights=np.array(binner.em_flat))
ax.plot((bin_edges[:-1]+bin_edges[1:])/2,hist/10,color=sns.color_palette('deep')[i],linestyle='dotted')
#aesthetics
#scale
ax.set_yscale('log')
ax.set_xscale('log')
#limits
ax.set_ylim([1e+23,1e+28])
ax.set_xlim([10**5.5,10**7.5])
#ticks
#y
ax.yaxis.set_major_locator(ticker.LogLocator(numticks=5))
#labels
ax.set_xlabel(r'$T\,\,\mathrm{(K)}$')
ax.set_ylabel(r'$\mathrm{EM}\,\,(\mathrm{cm}^{-5})$')
#legend
ax.legend(loc=2)
#save
plt.savefig(__dest__[1])
plt.show()
fig = plt.figure()
ax = fig.gca()
for i in range(len(ebtel_results)):
#EBTEL
binner = emb.EM_Binner(2.*ebtel_results[i]['loop_length'],time=ebtel_results[i]['ti'],temp=ebtel_results[i]['Tie'],
density=ebtel_results[i]['ni'])
binner.build_em_dist()
hist,bin_edges = np.histogram(binner.T_em_flat,bins=binner.T_em_histo_bins,weights=np.array(binner.em_flat))
ax.plot((bin_edges[:-1]+bin_edges[1:])/2,hist/10,color=sns.color_palette('deep')[i],
linestyle='solid',label=r'$\tau=%d$ $\mathrm{s}$'%tau[i])
#HYDRAD
binner = emb.EM_Binner(2.*ebtel_results[i]['loop_length'],time=hydrad_results['time'],
temp=hydrad_results['ion']['tau%ds'%tau[i]]['Te'],
density=hydrad_results['ion']['tau%ds'%tau[i]]['n'])
binner.build_em_dist()
hist,bin_edges = np.histogram(binner.T_em_flat,bins=binner.T_em_histo_bins,weights=np.array(binner.em_flat))
ax.plot((bin_edges[:-1]+bin_edges[1:])/2,hist/10,color=sns.color_palette('deep')[i],linestyle='dotted')
#aesthetics
#scale
ax.set_yscale('log')
ax.set_xscale('log')
#limits
ax.set_ylim([1e+23,1e+28])
ax.set_xlim([10**5.5,10**7.5])
#ticks
#y
ax.yaxis.set_major_locator(ticker.LogLocator(numticks=5))
#labels
ax.set_xlabel(r'$T\,\,\mathrm{(K)}$')
ax.set_ylabel(r'$\mathrm{EM}\,\,(\mathrm{cm}^{-5})$')
#legend
ax.legend(loc=2)
#save
plt.savefig(__dest__[2])
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, load the data for the EBTEL and HYDRAD results.
Step2: We'll do some very basic curve fitting on a couple of our $\mathrm{EM}$ distributions, so set the parameters for that.
Step3: Define some parameters for labeling
Step4: Single-fluid, Figure 1(b)
Step5: Electron Heating, Figure 3(b)
Step6: Ion Heating, Figure 5(b)
|
15,375 | <ASSISTANT_TASK:>
Python Code:
# Setup the Sci-pype environment
import sys, os
# Only redis is needed for this notebook:
os.environ["ENV_DEPLOYMENT_TYPE"] = "JustRedis"
# Load the Sci-pype PyCore as a named-object called "core" and environment variables
from src.common.load_ipython_env import *
ds_name = "iris_classifier"
data_dir = str(os.getenv("ENV_DATA_DST_DIR", "/opt/work/data/dst"))
if not os.path.exists(data_dir):
os.mkdir(data_dir, 0777)
s3_bucket = "unique-bucket-name-for-datasets" # name this something under your AWS Account (This might be open to the public in the future...stay tuned)
s3_key = "dataset_" + core.to_upper(ds_name) + ".cache.pickle.zlib"
s3_loc = str(s3_bucket) + ":" + str(s3_key)
cache_req = {
"RAName" : "CACHE", # Redis instance name holding the models
"DSName" : str(ds_name), # Dataset name for pulling out of the cache
"S3Loc" : str(s3_loc), # S3 location to store the model file
"DeleteAfter" : False, # Optional delete after upload
"SaveDir" : data_dir, # Optional dir to save the model file - default is ENV_DATA_DST_DIR
"TrackingID" : "" # Future support for using the tracking id
}
upload_results = core.ml_upload_cached_dataset_to_s3(cache_req, core.get_rds(), core.get_dbs(), debug)
if upload_results["Status"] == "SUCCESS":
lg("Done Uploading Model and Analysis DSName(" + str(ds_name) + ") S3Loc(" + str(cache_req["S3Loc"]) + ")", 6)
else:
lg("", 6)
lg("ERROR: Failed Upload Model and Analysis Caches as file for DSName(" + str(ds_name) + ")", 6)
lg(upload_results["Error"], 6)
lg("", 6)
# end of if extract + upload worked
lg("", 6)
lg("Extract and Upload Completed", 5)
lg("", 6)
ds_name = "iris_regressor"
cache_req = {
"RAName" : "CACHE", # Redis instance name holding the models
"DSName" : str(ds_name), # Dataset name for pulling out of the cache
"S3Loc" : str(s3_loc), # S3 location to store the model file
"DeleteAfter" : False, # Optional delete after upload
"SaveDir" : data_dir, # Optional dir to save the model file - default is ENV_DATA_DST_DIR
"TrackingID" : "" # Future support for using the tracking id
}
upload_results = core.ml_upload_cached_dataset_to_s3(cache_req, core.get_rds(), core.get_dbs(), debug)
if upload_results["Status"] == "SUCCESS":
lg("Done Uploading Model and Analysis DSName(" + str(ds_name) + ") S3Loc(" + str(cache_req["S3Loc"]) + ")", 6)
else:
lg("", 6)
lg("ERROR: Failed Upload Model and Analysis Caches as file for DSName(" + str(ds_name) + ")", 6)
lg(upload_results["Error"], 6)
lg("", 6)
sys.exit(1)
# end of if extract + upload worked
lg("", 6)
lg("Extract and Upload Completed", 5)
lg("", 6)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) Setup the Request
Step2: Where is the downloaded file getting stored?
Step3: What's the S3 Location (Unique Bucket Name + Key)?
Step4: Build the full request and run it
Step5: 3) Setup the Extract and Upload for the IRIS Regressor Models and Analysis
Step6: 4) Build and Run the Extract + Upload Request
|
15,376 | <ASSISTANT_TASK:>
Python Code:
#Import necessary Python packages
#data analysis tools
import numpy as np
import pandas as pd
import datetime
from dateutil.relativedelta import relativedelta
#plotting tools
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import graphviz
#classification
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score, cross_val_predict, StratifiedShuffleSplit
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import make_scorer, precision_score
#Load cleaned data
data = pd.read_pickle('loans_7a_matured')
data.columns
data.drop(list(data.columns)[80:], axis = 1, inplace = True)
data.drop('ApprovalFiscalYear', axis = 1, inplace = True)
data['PIF'].value_counts()
# Percent of loans that defaulted out of paid in full + defaulted loans
print(str(np.round(100*(data['PIF']==0).sum() / len(data), 2)) + '% of matured loans defaulted')
# Select the features
X = data.drop(['PIF'], axis = 1)
# Select the target variable: switch class labels so that "defaulted" is the postive class
# since this is what we really care about
y = (1 - data['PIF'])
class_names = ['Paid in Full', 'Defaulted']
# Set aside a test set
# Random stratified 70-30 split: preserves the original proportion of positive and negative class
# examples in train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify = y,
test_size = 0.30, random_state = 101)
# Scale numerical features for logistic regression (with regularization)
from sklearn.preprocessing import StandardScaler
# Get scaling parameters from training data, then apply the scaler to testing data as well
std_scale = StandardScaler().fit(X_train[['TermInMonths', 'JobsSupported', 'SP_to2016', 'SBAGuaranteedApprovalAdj']])
X_train_std = std_scale.transform(X_train[['TermInMonths', 'JobsSupported', 'SP_to2016', 'SBAGuaranteedApprovalAdj']])
X_test_std = std_scale.transform(X_test[['TermInMonths', 'JobsSupported', 'SP_to2016', 'SBAGuaranteedApprovalAdj']])
# Define the model
def_logreg_model = LogisticRegression(class_weight = 'balanced', random_state = 101)
# Train the model on scaled training data
def_logreg_model.fit(X_train_std, y_train)
# Test the model: make predictions on testing data
def_logreg_pred = def_logreg_model.predict(X_test_std)
# Compare model outputs with actual outputs
print(classification_report(def_logreg_pred, y_test))
# Function to display the confusion matrix - original or normalized
import itertools
def plot_confusion_matrix(cm, classes, title = 'Confusion matrix', cmap = plt.cm.Blues, normalize = False):
    """This function prints and plots the confusion matrix."""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap = cmap)
plt.title(title, fontsize = 20)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, fontsize = 20)
plt.yticks(tick_marks, classes, fontsize = 20)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True Label', fontsize = 20)
plt.xlabel('Predicted Label', fontsize = 20)
plt.grid(False)
# Plot confusion matrix without normalization
def_logreg_cm = confusion_matrix(def_logreg_pred, y_test)
plt.figure(figsize = (8,4))
plot_confusion_matrix(def_logreg_cm, classes = class_names, normalize = False,
                      title = 'Confusion Matrix')
# Plot normalized confusion matrix
plt.figure(figsize = (8,4))
plot_confusion_matrix(def_logreg_cm, classes = class_names, normalize = True,
title = 'Confusion Matrix with Normalization')
# Tune the hyperparameters: vary the regularization paramter
# Try an exhaustive range of values
param_grid = {'C': [0.0001, 0.0005, 0.001, 0.005, 0.1, 0.5, 1, 5, 10, 50, 100, 500, 1000, 5000]}
# 10-fold cross validation on training data to tune C
grid_logmodel = GridSearchCV(LogisticRegression(class_weight = 'balanced'),
param_grid, refit = True, verbose = 1,
cv = StratifiedShuffleSplit(n_splits = 10, test_size = 0.3, random_state = 101))
grid_logmodel.fit(X_train_std, y_train)
# See the chosen optimal parameter
grid_logmodel.best_params_
# Train and test a simple decision tree with random stratified split and 10-fold cross validation.
# Better accuracy could be achieved without the max_depth and min_samples_leaf constraints,
# but we will aim for simplicity here (to avoid overfitting and it easier to visualize).
# Decision trees do not need features to be scaled. For easier interpretability, we will go back to original data.
dtree = DecisionTreeClassifier(max_depth = 4, min_samples_leaf = 5, class_weight = 'balanced')
# Fit/train the model
dtree.fit(X_train, y_train)
# Test the model
dtree_pred = dtree.predict(X_test)
# Display results
print(classification_report(dtree_pred, y_test))
#Plot normalized confusion matrix
dtree_cm = confusion_matrix(dtree_pred, y_test)
plt.figure(figsize = (8,4))
plot_confusion_matrix(dtree_cm, classes = class_names, normalize = True,
title = 'Normalized Confusion Matrix')
# Visualize the tree
dot_data = tree.export_graphviz(dtree, out_file = None,
feature_names = X_train.columns,
class_names = ['DEF', 'PIF'],
filled = True, rounded = True,
special_characters = True)
graph = graphviz.Source(dot_data)
graph.render("dec_tree_simple")
graph
# Term in months seems to be the most important from the Decision Tree
fig = plt.figure(figsize = (5,3), facecolor = 'gainsboro')
sns.set_context('poster', font_scale = 1.2)
g = sns.factorplot(x = 'PIF', y = 'TermInMonths',
kind = 'bar', data = data, estimator = np.mean, palette = 'Set1' )
g.set_xticklabels(['Defaulted', 'Repaid'])
sns.plt.xlabel('')
sns.plt.title('7A Matured Loans')
sns.plt.ylabel('Mean Term in Months')
g.savefig('Term.png', dpi = 300)
# Train and test a Random Forest classifier with default hyperparameters first (use 300 estimators to start with)
df_rf_n300 = RandomForestClassifier(n_estimators = 300, class_weight = 'balanced', n_jobs=-1)
# Fit/train the model
df_rf_n300.fit(X_train, y_train)
# Test the model: make predictions on the test set
df_rf_n300_pred = df_rf_n300.predict(X_test)
print(classification_report(y_test, df_rf_n300_pred))
# Train and test a Random Forest classifier with default hyperparameters first - use 100 estimators
df_rf_n100 = RandomForestClassifier(n_estimators = 100, class_weight = 'balanced', n_jobs=-1)
# Fit/train the model
df_rf_n100.fit(X_train, y_train)
# Test the model: make predictions on the test set
df_rf_n100_pred = df_rf_n100.predict(X_test)
print(classification_report(y_test, df_rf_n100_pred))
# GridSearch for RF
param_grid = {'max_features': [0.2, 'auto', 'log2'],
'n_estimators': [50, 100], 'min_samples_leaf': [1, 5, 10, 50, 100]}
grid_rf = GridSearchCV(RandomForestClassifier(class_weight = 'balanced', n_jobs = 4),
param_grid, cv = 10, refit = True, verbose = 3, scoring = 'f1')
grid_rf.fit(X_train, y_train)
print(grid_rf.best_params_)
grid_rf_pred = grid_rf.predict(X_test)
print(classification_report(grid_rf_pred, y_test))
opt_rf = RandomForestClassifier(n_estimators = 100, max_features = 0.2, min_samples_leaf = 1, class_weight='balanced')
opt_rf.fit(X_train, y_train)
opt_rf_pred = opt_rf.predict(X_test)
print(classification_report(opt_rf_pred, y_test))
#Feature ranking for random forest
fig = plt.figure(figsize = (10,5))
importances = opt_rf.feature_importances_
std = np.std([tree.feature_importances_ for tree in opt_rf.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
fts = list(X_train.columns)
# Print the feature ranking
print("Feature ranking:")
for f in range(X_train.shape[1]):
print("%d. feature %s (%f)" % (f + 1, fts[indices[f]], importances[indices[f]]))
# Plot the top ten feature importances of the optimized random forest method
importances = opt_rf.feature_importances_
std = np.std([tree.feature_importances_ for tree in opt_rf.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Plot the feature importances of the forest
fig = plt.figure(figsize = (10,5))
plt.title("Feature Importances")
plt.barh(range(10), importances[indices][0:10][::-1],
color="r", xerr = std[indices][0:10][::-1], align="center")
# If you want to define your own labels,
# change indices to a list of labels on the following line.
plt.yticks(range(10), ['Term in Months', 'Amount', 'S&P 1500', 'Jobs Supported', 'Revolver Status',
'Individual', 'CA', 'Retail Trade', 'Construction', 'FL'][::-1])
plt.yticks(range(10))
plt.ylim([-1,10])
plt.tight_layout()
fig.savefig('OPTIMAL_RF_FImportance.png', dpi = 300)
# Check if there is a difference in mean duration
import scipy
from scipy.stats import ttest_ind
t, prob = scipy.stats.ttest_ind(data[data['PIF']==0]['TermInMonths'], data[data['PIF']==1]['TermInMonths'] )
print(t, prob)
# Check if there is a difference in mean amount
t, prob = scipy.stats.ttest_ind(data[data['PIF']==0]['SBAGuaranteedApprovalAdj'], data[data['PIF']==1]['SBAGuaranteedApprovalAdj'] )
print(t, prob)
# Check if there is a difference in mean number of jobs
t, prob = scipy.stats.ttest_ind(data[data['PIF']==0]['JobsSupported'], data[data['PIF']==1]['JobsSupported'] )
print(t, prob)
#Examine loans with TermDuration = 84
data_84 = data[data['TermInMonths'] == 84]
data_84.reset_index(inplace=True, drop = True)
data_84['PIF'].value_counts()
# Consider only loans with TermDuration < 84
data_l84 = data[data['TermInMonths'] < 84]
data.reset_index(inplace = True, drop = True)
target_l84 = y = (1 - data[data['TermInMonths'] < 84]['PIF'])
# Split into training and testing data as before
X_train_l84, X_test_l84, y_train_l84, y_test_l84 = train_test_split(data_l84.drop(['TermInMonths', 'PIF'],axis = 1),
target_l84,
stratify = target_l84,
test_size = 0.30, random_state = 101)
# Logistic Regression Classification
logmodel_l84 = LogisticRegression(class_weight = 'balanced')
logmodel_l84.fit(X_train_l84, y_train_l84)
lm_l84_pred = logmodel_l84.predict(X_test_l84)
print(classification_report(y_test_l84, lm_l84_pred))
# GridSearch for Logistic Regression
param_grid = {'C': [0.001, 0.005, 0.1, 0.5, 1, 5, 10, 50, 100, 500, 1000, 5000, 10000]}
grid_l84 = GridSearchCV(LogisticRegression(class_weight = 'balanced'), param_grid, refit=True,verbose=1)
grid_l84.fit(X_train_l84, y_train_l84)
print(grid_l84.best_params_)
grid_l84_pred = grid_l84.predict(X_test_l84)
print(classification_report(y_test_l84, grid_l84_pred))
# GridSearch for Random Forest
param_grid = {'max_features': [0.2, 'auto', 'log2'],
'n_estimators': [50, 100], 'min_samples_leaf': [1, 5, 10, 50, 100]}
grid_rf_l84 = GridSearchCV(RandomForestClassifier(class_weight = 'balanced', n_jobs = 4),
param_grid, cv = 10, refit = True, verbose = 3, scoring = 'f1')
grid_rf_l84.fit(X_train_l84, y_train_l84)
print(grid_rf_l84.best_params_)
# Run RF with optimal parameters
opt_rf_l84 = RandomForestClassifier(class_weight = 'balanced', max_features = 0.2, min_samples_leaf = 5, n_estimators = 100)
opt_rf_l84.fit(X_train_l84, y_train_l84)
opt_rf_l84_pred = opt_rf_l84.predict(X_test_l84)
print(classification_report(y_test_l84, opt_rf_l84_pred))
# Build a random (control) model to compare Random Forest performance against
# proportion of positive class in training data
pos_prop = np.sum(y_train_l84)/len(y_train_l84)
expected_pos_in_test = np.round(pos_prop*(len(y_test_l84)))
#control predicts accroding to proportions of positive and negative examples in the training data
zs = np.zeros(len(y_test_l84) - int(expected_pos_in_test)) #zeros
os = np.ones((int(expected_pos_in_test)))
zo = np.concatenate((zs, os))
y_test_control = np.random.permutation(zo)
print(classification_report(y_test_l84, y_test_control))
# Feature ranking for random forest
fig = plt.figure(figsize = (15,5))
importances_rf_l84 = opt_rf_l84.feature_importances_
std = np.std([tree.feature_importances_ for tree in opt_rf_l84.estimators_],
axis=0)
indices_rf_l84 = np.argsort(importances_rf_l84)[::-1]
fts_rf_l84 = list(X_train_l84.columns)
# Print the feature ranking
print("Feature ranking:")
for f in range(X_train_l84.shape[1]):
print("%d. feature %s (%f)" % (f + 1, fts_rf_l84[indices_rf_l84[f]], importances_rf_l84[indices_rf_l84[f]]))
# Plot the feature importances of the forest
importances = opt_rf_l84.feature_importances_
std = np.std([tree.feature_importances_ for tree in opt_rf_l84.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Plot the feature importances of the forest
fig = plt.figure(figsize = (10,5))
plt.title("Feature importances")
plt.barh(range(10), importances[indices][0:10][::-1],
color="r", xerr=std[indices][0:10][::-1], align="center")
# If you want to define your own labels,
# change indices to a list of labels on the following line.
plt.yticks(range(10), ['S&P 1500', 'Amount', 'Jobs Supported', 'Revolver Status',
'CA', 'FL', 'Individual', 'Retail Trade', 'GA', 'Franchise'][::-1])
plt.ylim([-1,10])
plt.tight_layout()
fig.savefig('L84_RF_FImportance.png', dpi = 300)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will remove the approval year information in order to make the model useful for future years (not included in the data).
Step2: Target variable is Loan Status - Paid in Full (PIF) versus Defaulted. Right now
Step3: From here on
Step4: We will start with (L2-regularized) logistic regression with default hyperparameters. We will scale all numeric features prior to model training and testing.
Step5: We will first run logistic regression with default hyperparameters. We will use class_weight = 'balanced' to introduce a higher penalty for misclassifying the minority class.
Step7: We should aim to avoid predicting that a loan will be paid in full, when in fact it will default, i.e., we want to detect all of defaults (positive class). False negatives should be important. Therefore, we'll pay particular attention to recall (of the positive/Default class).
Step8: We see that many defaulted loans got labeled as paid in full (actually, as many as were classified correctly). We will try to improve the performance by optimizing hyperparameters using a Grid Search with 10-fold Cross Validation (stratified split) on training data, picking the best model (optimal hyperparameters), and then applying it to the test data. One hyperparameter that is of importance to logistic regression is C - the amount of regularization used.
Step9: Looks like the default parameter was the best choice!
Step10: We can see that, relative to logistic regression, this model is overall doing better - precision of the positive class has significantly improved (from 0.75 to 0.94). However, this is only slightly better in terms of recall (0.50 to 0.57). We would like to further improve recall as well. Let's see what an individual tree looks like.
Step11: From the tree, it looks like the most important feature is the loan term (TermInMonths), which we examine next.
Step12: It looks like paid in full loans have, on average, longer duration!
Step13: This is much better compared to both logistic regression and the single simple decision tree above. Let's tune the hyperparameters next. We will again use 10-fold cross validation with stratified splits on the training data, pick the optimal parameters, and then apply the selected model to test data.
Step14: A random forest with 100 estimators achieves the same precision and recall as when using 300 estimators, so let's use at most 100 estimators.
Step15: <a id="cell4"></a>
Step16: Since Random Forests are nonlinear classifiers and we cannot tell if these features are having a negative or positive effect, let's try to infer the direction from the original data.
Step17: Looks like duration is significantly longer for paid in full loans.
Step18: Average amount is lower for defaulted loans.
Step19: For loans of duration = 84 months, < 0.5% default!
Step20: Again, Logistic Regression does not seem to be sufficient.
Step21: Random Forest, again, performs much better than Logistic Regression.
Step22: Random (control) model has much lower precision and recall than random forest for loans with duration < 84 months.
|
15,377 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
from numpy.linalg import inv
%matplotlib inline
data = loadmat('data_files/Tut7_file1.mat')
locals().update(data)
data.keys()
p, T = z.shape
mu = np.zeros(z.shape)
K = np.zeros((4, 4, T))
V = np.zeros((4, 4, T))
L = np.zeros((4, 4, T))
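# Forward (Kalman filter) pass. Variable roles, as inferred from the update equations below:
# A = state transition, B = observation matrix, C = control-input matrix, u = inputs,
# Sigma = process noise covariance, Gamma = observation noise covariance,
# K = Kalman gain, mu = filtered mean, V = filtered covariance, L = one-step-ahead (predicted) covariance.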
K[...,0] = L0.dot(B.T.dot(inv(B.dot(L0.dot(B.T)) + Gamma)))
mu[..., [0]] = A.dot(mu0) + K[..., 0].dot(x[:, [0]] - B.dot(A.dot(mu0))) + C.dot(u[..., [0]])
V[..., 0] = (np.eye(4) - K[..., 0].dot(B)).dot(L0)
L[..., 0] = A.dot(V[..., 0].dot(A.T)) + Sigma
for t in range(1, T):
K[...,t] = L[..., t - 1].dot(B.T.dot(inv(B.dot(L[..., t - 1].dot(B.T)) + Gamma)))
mu[..., [t]] = A.dot(mu[..., [t-1]]) + K[..., t].dot(x[:, [t]] - B.dot(A.dot(mu[..., [t-1]]))) + C.dot(u[..., [t]])
V[..., t] = (np.eye(4) - K[..., t].dot(B)).dot(L[..., t-1])
L[..., t] = A.dot(V[..., t].dot(A.T)) + Sigma
plt.plot(mu.T)
plt.plot(z.T, color='red')
V_tilde = np.zeros(V.shape)
mu_tilde = np.zeros(mu.shape)
V_tilde[..., -1] = V[..., -1]
mu_tilde[..., [-1]] = mu[..., [-1]]
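# Backward (Rauch-Tung-Striebel) smoothing pass: W is the smoother gain that carries
# information from later time steps back to earlier ones.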
for t in range(T - 2, -1, -1):
#print(t)
W = V[..., t].dot(A.T.dot(inv(L[..., t])))
V_tilde[..., t] = V[..., t] + W.dot(V_tilde[..., t+1] - L[..., t]).dot(W.T)
mu_tilde[..., [t]] = mu[..., [t]] + W.dot(mu_tilde[..., [t+1]] - A.dot(mu[..., [t]]))
plt.plot(mu_tilde.T)
plt.plot(z.T, color='red')
print ('Non smoothed result:', np.sum((mu - z).T ** 2))
print('Smoothed result:', np.sum((mu_tilde - z).T ** 2))
print('Ratio, \n', np.sum((mu_tilde - z).T ** 2) / np.sum((mu - z).T ** 2))
plt.plot(x.T)
#plt.plot(x.T[:4, :])
plt.plot(np.diff(x[..., :10]).T)
np.diff(x[..., :4])
T = 99
z = z[:, :-1]
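# Re-run the filter with the observations shifted by one step (note x[:, [t + 1]] in the
# loop below); the last latent state is dropped above so the array shapes still match.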
mu = np.zeros(z.shape)
K = np.zeros((4, 4, T))
V = np.zeros((4, 4, T))
L = np.zeros((4, 4, T))
K[...,0] = L0.dot(B.T.dot(inv(B.dot(L0.dot(B.T)) + Gamma)))
mu[..., [0]] = mu0
V[..., 0] = 0
L[..., 0] = L0
for t in range(1, T):
#print(t)
K[...,t] = L[..., t - 1].dot(B.T.dot(inv(B.dot(L[..., t - 1].dot(B.T)) + Gamma)))
mu[..., [t]] = A.dot(mu[..., [t-1]]) + K[..., t].dot(x[:, [t + 1]] - B.dot(A.dot(mu[..., [t-1]]))) + C.dot(u[..., [t]])
V[..., t] = (np.eye(4) - K[..., t].dot(B)).dot(L[..., t-1])
L[..., t] = A.dot(V[..., t].dot(A.T)) + Sigma
plt.plot(mu.T)
plt.plot(z.T, color='red')
np.sum((mu - z)**2)
A.dot(mu[..., [t-1]]) + K[..., t].dot(x[:, [t + 1]] - B.dot(A.dot(mu[..., [t-1]]))) + C.dot(u[..., [t]])
V_tilde = np.zeros(V.shape)
mu_tilde = np.zeros(mu.shape)
V_tilde[..., -1] = V[..., -1]
mu_tilde[..., [-1]] = mu[..., [-1]]
for t in range(T - 2, -1, -1):
W = V[..., t].dot(A.T.dot(inv(L[..., t])))
V_tilde[..., t] = V[..., t] + W.dot(V_tilde[..., t+1] - L[..., t]).dot(W.T)
mu_tilde[..., [t]] = mu[..., [t]] + W.dot(mu_tilde[..., [t+1]] - A.dot(mu[..., [t]]))
plt.plot(mu_tilde.T)
plt.plot(z.T)
print ('Non smoothed result:', np.sum((mu - z).T ** 2))
print('Smoothed result:', np.sum((mu_tilde - z).T ** 2))
print('Ratio, \n', np.sum((mu_tilde - z).T ** 2) / np.sum((mu - z).T ** 2))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We load the variables and initialize the parameters we need
Step2: We run the filter
Step3: We can see a slight offset; we would expect that to be solved by the smoother step
Step4: We can see that the offset is still present and slightly worse
Step5: The prediction is clearly following the data more or less correctly, but there is a problem with the offset that makes $\tilde{\mu}$ worse than our $\mu$. This should not happen; we would rather expect the opposite.
Step6: After checking the algorithm many times, I decided to look at our x to see if there was anything strange. And if you look closely at the first time steps, there is some oddity.
Step7: If you look at how x varies at the first time steps, you will see that it is almost constant and then it starts changing. This could explain the offset in our predictions.
Step8: To test my hunch I decided to remove one time step from the data, to make sure that $x_1$ was not used in the prediction.
|
15,378 | <ASSISTANT_TASK:>
Python Code:
import os
import random
from tqdm import tqdm
import matplotlib.pyplot as plt
from skimage import io  # to read the images
from skimage import img_as_float
%matplotlib inline
ROOT_DIR = os.getcwd()
DATA_DIR = os.path.join(ROOT_DIR, 'german-traffic-signs')
TRAIN_DIR = os.path.join(DATA_DIR, 'training-set')
TEST_DIR = os.path.join(DATA_DIR, 'test-set')
os.path.split(ROOT_DIR)
assert os.path.exists(DATA_DIR)
assert os.path.exists(TRAIN_DIR)
assert os.path.exists(TEST_DIR)
def plot_images_per_class(path):
lista = [(int(folder), len(os.listdir(os.path.join(path,folder))) - 1) for folder in os.listdir(path)]
lista = sorted(lista, key = lambda x: x[0])
xVals = [clase for clase, val in lista]
yVals = [val for clase, val in lista]
fig = plt.figure(figsize=[15,5])
ax = fig.add_subplot(111)
plt.bar(xVals,yVals,width = 0.5)
ax.set_title('Number of pictures per class', color = 'k', alpha = 0.6)
ax.set_xlabel('Class', fontsize = 14)
ax.set_ylabel('Number of pictures', fontsize = 14)
ax.set_axisbelow(True)
plot_images_per_class(TRAIN_DIR)
plot_images_per_class(TEST_DIR)
def plot_random_sample(path,n_fos = 5,n_im = 3, scale = True, values = False):
hor = n_fos
ver = min(n_fos,n_im)
fos = random.sample(os.listdir(path), n_fos)
lista = []
for fo in tqdm(fos):
lista.append([os.path.join(fo,fi) for fi in random.sample(os.listdir(os.path.join(path,fo)),n_im) if fi.endswith('.jpg')])
new_style = {'grid': False}
plt.rc('axes', **new_style)
_, ax = plt.subplots(n_fos, n_im, sharex='col', sharey='row', figsize=(2*n_im, 2*n_fos))
for i, files in enumerate(lista):
for j, file in enumerate(files):
if scale:
img = img_as_float(io.imread(os.path.join(path,file)))
else:
img = io.imread(os.path.join(path,file))
if values:
print(img)
ax[i, j].imshow(img)
ax[i, j].set_title(file)
plt.show()
plot_random_sample(TRAIN_DIR)
plot_random_sample(TEST_DIR)
from sklearn.decomposition import PCA, IncrementalPCA
import numpy as np
def load_data(files, scale = True):
X, y = [], []
for file in tqdm(files):
if file.endswith(".jpg"):
y.append(int(os.path.split(os.path.split(file)[0])[-1]))
if scale:
X.append(img_as_float(io.imread(file)))
else:
X.append(io.imread(file))
return np.array(X), np.array(y)
def get_files_path(path, ext = ".jpg", n = np.inf):
file_paths = []
for folder in tqdm(os.listdir(path)):
files = os.listdir(os.path.join(path,folder))
if len(files) <= n:
file_paths += [os.path.join(path,folder,file) for file in files if file.endswith(ext)]
else:
file_paths += [os.path.join(path,folder, file) for file in random.sample(files,n) if file.endswith(ext)]
return file_paths
train_files = get_files_path(TRAIN_DIR)
X, y = load_data(train_files)
print(X.shape)
print(y.shape)
X = np.reshape(X, newshape = (-1,32*32*3))
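# Each 32x32 RGB image is now a flat 3072-dimensional feature vector (32*32*3).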
print(X.shape)
ipca = IncrementalPCA()
ipca.fit(X)
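# Plot individual and cumulative explained variance ratios to judge how many principal
# components are needed to retain most of the variation in the images.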
plt.step(range(1,len(ipca.explained_variance_ratio_)+1), np.cumsum(ipca.explained_variance_ratio_), where='mid',
label='cumulative explained variance')
plt.bar(range(1,len(ipca.explained_variance_ratio_)+1), ipca.explained_variance_ratio_, alpha=0.9, align='center',
label='individual explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Images per class in the training data
Step3: Look at some of the images
Step4: Principal component analysis (PCA)
|
15,379 | <ASSISTANT_TASK:>
Python Code:
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from cycler import cycler
import seaborn as sns
%matplotlib inline
%config InlineBackend.figure_format='retina' # for hi-dpi displays
from mpl_toolkits.mplot3d import Axes3D
from numpy import pi, cos, sin
figure_size = (5, 4)
default_figure = lambda: plt.subplots(figsize=figure_size)
save_figures = True
def savefig(filename, **kwargs):
if not save_figures:
return
import os
dir_ = 'figures/'
kwargs_ = dict(dpi=300, bbox_inches='tight')
#frameon=True, facecolor='white', transparent=False)
kwargs_.update(kwargs)
plt.savefig(dir_ + filename, **kwargs_)
print('Saved: %s' % (dir_ + filename))
sns.set_style('whitegrid')
r = 1 # (nm) dsDNA radius
δ = 0.34 # (nm) dsDNA base-pair pitch
n = 10.5 # number of bases per turn
Δφ = 132.4 # (degree) minor-groove angle between the two strands' backbones
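# dye_position(i) returns the (x, y, z) coordinates (nm) of a dye attached at base-pair i:
# l is the dye linker length, λ interpolates between the two backbone positions, and ψ is
# the out-of-plane linker angle; the geometry uses the B-DNA parameters r, δ, n, Δφ above.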
def dye_position(i, l=1.6, λ=0.5, ψ=0):
# global structural params: r, δ, n, Δφ
Δφr = Δφ*pi/180
φ = 2*pi/n # (radiants) rotation for base-pair
Dx = r * cos(φ*i) + λ*( r*cos(φ*i + Δφr) - r*cos(φ*i) ) + l*cos(ψ)*cos(φ*i + 0.5*Δφr)
Dy = r * sin(φ*i) + λ*( r*sin(φ*i + Δφr) - r*sin(φ*i) ) + l*cos(ψ)*sin(φ*i + 0.5*Δφr)
Dz = i*δ + l*sin(ψ)
return np.array([Dx, Dy, Dz])
def plot_dye(P, axes=None, **kws):
kws_ = dict(marker='o', ls='-')
kws_.update(kws)
if axes is None:
fig = plt.figure(figsize=(9, 9))
ax_xy = plt.subplot2grid((2,2), (1,0))
ax_xz = plt.subplot2grid((2,2), (0,0))
ax_yz = plt.subplot2grid((2,2), (0,1))
ax_3d = fig.add_subplot(224, projection='3d')
else:
ax_xy, ax_xz, ax_yz, ax_3d = axes
ax_xy.plot(P[0], P[1], **kws_)
ax_xz.plot(P[0], P[2], **kws_)
ax_yz.plot(P[1], P[2], **kws_)
for ax in (ax_xy, ax_xz):
ax.set_xlabel('x (nm)')
ax_xy.set_ylabel('y (nm)')
ax_xz.set_xlabel('x (nm)')
ax_xz.set_ylabel('z (nm)')
ax_yz.set_xlabel('y (nm)')
ax_yz.set_ylabel('z (nm)')
lim = max(1.5, np.abs(P[0]).max(), np.abs(P[1]).max())*1.05
ax_xy.set_xlim(-lim, lim)
ax_xy.set_ylim(-lim, lim)
ax_xz.set_xlim(-lim, lim)
ax_yz.set_xlim(-lim, lim)
ax_3d.plot(P[0], P[1], P[2], **kws_)
return (ax_xy, ax_xz, ax_yz, ax_3d)
def fret(R, R0):
return 1 / (1 + (R/R0)**6)
λ = 0.5
ψ = 0
i = 7 # number of bases from reference "base 0"
l = 1.6 # (nm) distance between S and dye position D
dye_position(7)
D_params = dict(l=1, λ=1, ψ=0)
A_params = dict(l=1, λ=0, ψ=-pi/2)
bp = np.arange(0, 1)
PD = dye_position(bp, **D_params)
PA = dye_position(bp, **A_params)
bp1 = np.arange(0, 10.1, 0.02)
PD1 = dye_position(bp1, **D_params)
PA1 = dye_position(bp1, **A_params)
axes = plot_dye(PD, marker='s')
plot_dye(PA, axes, color='r', marker='s');
plot_dye(PA1, axes, marker='', ls='-', color='r');
plot_dye(PD1, axes, marker='', ls='-', color='b');
bp = np.arange(0, 40, 0.1)
PD = dye_position(bp, l=1.6, λ=0.2, ψ=0)
PA = dye_position(0, l=1.6, λ=0.8, ψ=-pi/2)
R = np.linalg.norm(PD.T - PA, axis=1)
#R
plt.plot(bp, R);
plt.xlabel('Base-pair')
plt.ylabel('Distance (nm)')
plt.ylim(0);
R0 = 6.7 # nm
plt.plot(bp, fret(R, R0));
E_mspot = pd.read_csv(
'results/Multi-spot - dsDNA - Corrected E - all_samples all_ch.csv', index_col=0)
E_mspot.columns.name = 'Channel'
E_mspot
data_file = 'results/usALEX-5samples-E-corrected-all-ph.csv'
data_alex = pd.read_csv(data_file).set_index('sample')
E_alex = data_alex.E_gauss_w
E_alex.index.name = 'Sample'
E_alex
%config InlineBackend.figure_format='retina' # for hi-dpi displays
fig, ax = plt.subplots()
E_alex.plot(ax=ax)
E_mspot.plot(marker='+', mew=1, ls='none', ax=ax)
E_alexi = E_alex.rename(lambda x: int(x[:-1])).to_frame()
E_alexi.columns = ['μs-ALEX']
E_alexi
E_mspoti = E_mspot.rename(lambda x: int(x[:-1]))
#E_mspoti
sns.set(style='ticks', font_scale=1.4)
R0 = 7.3 # nm Forster Radius
PD = dye_position(bp, l=2, λ=0.6, ψ=0)
PA = dye_position(0, l=1.6, λ=0.4, ψ=-pi/2)
R = np.linalg.norm(PD.T - PA, axis=1)
pitch = δ*n
min_groove_pitch = 1.2
min_groove_pitch/pitch * 360
bp = np.arange(0, 30, 0.2)
bpm = np.array([7, 12, 17, 22, 27])
D_params = dict(l=2.4, λ=0.5, ψ=pi)
A_params = dict(l=2, λ=0.5, ψ=-1.2*pi/2)
n = 10.5 # number of bases per turn
Δφ = 131 # (degree) minor-groove angle between the two strands' backbones
R0 = 7.5 # nm Forster Radius
D_params = dict(l=1.28, λ=0.61, ψ=0)
A_params = dict(l=1.28, λ=0.39, ψ=-pi/2)
n = 10.5 # number of bases per turn
Δφ = 132 # (degree) minor-groove angle between the two strands' backbones
R0 = 6.7 # nm Forster Radius
D_params = dict(l=1.246, λ=1-0.256, ψ=0)
A_params = dict(l=1.246, λ=0.256, ψ=-pi/2)
n = 10.5 # number of bases per turn
Δφ = 2.31 * (180/np.pi) # (degree) minor-groove angle between the two strands' backbones
R0 = 6.7 # nm Forster Radius
PD = dye_position(bp, **D_params)
PA = dye_position(0, **A_params)
R = np.linalg.norm(PD.T - PA, axis=1)
fig, ax = plt.subplots()
E_alexi.plot(ax=ax, marker='s', lw=0, label='usALEX')
#E_mspoti.plot(marker='+', mew=2, ms=10, ls='none', ax=ax)
ax.set_ylim(0)
ax.set_xlim(5, 30)
ax.set_xlabel('D-A Separation (base-pairs)')
ax.set_ylabel('FRET Efficiency')
plt.xticks(E_alexi.index)
sns.despine(trim=True, offset=10, ax=ax)
ax.plot(bp, fret(R, R0), color='gray', alpha=0.5);
#savefig('multi-spot E vs distance.png');
PD = dye_position(bp, **D_params)
PA = dye_position(0, **A_params)
R = np.linalg.norm(PD.T - PA, axis=1)
fig, ax = plt.subplots()
E_alexi.plot(ax=ax, label='usALEX')
E_mspoti.plot(marker='+', mew=2, ms=10, ls='none', ax=ax)
ax.set_ylim(0)
ax.set_xlim(5, 30)
ax.set_xlabel('D-A Separation (base-pairs)')
ax.set_ylabel('FRET Efficiency')
plt.xticks(E_alexi.index)
sns.despine(trim=True, offset=10, ax=ax)
ax.plot(bp, fret(R, R0), color='gray', alpha=0.5);
savefig('multi-spot E vs distance.png');
PD0 = dye_position(bp, l=0, λ=0, ψ=0)
PA0 = dye_position(bp, l=0, λ=1, ψ=0)
PDm = dye_position(bpm, **D_params)
axes = plot_dye(PDm, marker='s', ls='')
plot_dye(PDm[:, :1], axes, color='k', marker='s', ms=10);
plot_dye(PD[:, :1], axes, color='k', marker='o', ms=10);
plot_dye(PA[:, np.newaxis], axes, color='r', marker='s');
plot_dye(PD0, axes, color='g', marker='', ls='-');
plot_dye(PA0, axes, color='m', marker='', ls='-');
plot_dye(PD0[:, :1], axes, color='g', marker='o');
plot_dye(PA0[:, :1], axes, color='m', marker='o');
t = np.arange(361)
axes[0].plot(cos(t/180*pi), sin(t/180*pi), lw=1, color='gray');
plot_dye(PD, axes, marker='', ls='-', color='b');
# leg = ax[1].get_legend()
# h, l = ax[1].get_legend_handles_labels()
# ax[1].legend(h[1:] + h[:1], l[1:] + l[:1], title='Sample', loc='lower right')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: B-DNA Model
Step2: <img src="figures/DNA1.png" style="width
Step3: Function to plot the dye position
Step4: FRET as a function of $R$ and $R_0$
Step5: Dyes Parameters
Step6: Load data
Step7: FRET vs distance figure
|
15,380 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import zipfile
with zipfile.ZipFile('../datasets/glass.csv.zip', 'r') as z:
f = z.open('glass.csv')
glass = pd.read_csv(f, sep=',', index_col=0)
glass.head()
%matplotlib inline
import matplotlib.pyplot as plt
# scatter plot using Pandas
glass.plot(kind='scatter', x='al', y='ri')
# equivalent scatter plot using Matplotlib
plt.scatter(glass.al, glass.ri)
plt.xlabel('al')
plt.ylabel('ri')
# fit a linear regression model
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
feature_cols = ['al']
X = glass[feature_cols]
y = glass.ri
linreg.fit(X, y)
# make predictions for all values of X
glass['ri_pred'] = linreg.predict(X)
glass.head()
# put the plots together
plt.scatter(glass.al, glass.ri)
plt.plot(glass.al, glass.ri_pred, color='red')
plt.xlabel('al')
plt.ylabel('ri')
# compute prediction for al=2 using the equation
linreg.intercept_ + linreg.coef_ * 2
# compute prediction for al=2 using the predict method
linreg.predict(2)
# examine coefficient for al
print(feature_cols, linreg.coef_)
# increasing al by 1 (so that al=3) decreases ri by 0.0025
1.51699012 - 0.0024776063874696243
# compute prediction for al=3 using the predict method
linreg.predict(3)
# examine glass_type
glass.glass_type.value_counts().sort_index()
# types 1, 2, 3 are window glass
# types 5, 6, 7 are household glass
glass['household'] = glass.glass_type.map({1:0, 2:0, 3:0, 5:1, 6:1, 7:1})
glass.head()
plt.scatter(glass.al, glass.household)
plt.xlabel('al')
plt.ylabel('household')
# fit a linear regression model and store the predictions
feature_cols = ['al']
X = glass[feature_cols]
y = glass.household
linreg.fit(X, y)
glass['household_pred'] = linreg.predict(X)
# scatter plot that includes the regression line
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred, color='red')
plt.xlabel('al')
plt.ylabel('household')
# understanding np.where
import numpy as np
nums = np.array([5, 15, 8])
# np.where returns the first value if the condition is True, and the second value if the condition is False
np.where(nums > 10, 'big', 'small')
# transform household_pred to 1 or 0
glass['household_pred_class'] = np.where(glass.household_pred >= 0.5, 1, 0)
glass.head()
# plot the class predictions
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_class, color='red')
plt.xlabel('al')
plt.ylabel('household')
# fit a logistic regression model and store the class predictions
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(C=1e9)
feature_cols = ['al']
X = glass[feature_cols]
y = glass.household
logreg.fit(X, y)
glass['household_pred_class'] = logreg.predict(X)
# plot the class predictions
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_class, color='red')
plt.xlabel('al')
plt.ylabel('household')
# store the predicted probabilities of class 1
glass['household_pred_prob'] = logreg.predict_proba(X)[:, 1]
# plot the predicted probabilities
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_prob, color='red')
plt.xlabel('al')
plt.ylabel('household')
# examine some example predictions
print(logreg.predict_proba(1))
print(logreg.predict_proba(2))
print(logreg.predict_proba(3))
# create a table of probability versus odds
table = pd.DataFrame({'probability':[0.1, 0.2, 0.25, 0.5, 0.6, 0.8, 0.9]})
table['odds'] = table.probability/(1 - table.probability)
table
# exponential function: e^1
np.exp(1)
# time needed to grow 1 unit to 2.718 units
np.log(2.718)
np.log(np.exp(5))
# add log-odds to the table
table['logodds'] = np.log(table.odds)
table
# plot the predicted probabilities again
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_prob, color='red')
plt.xlabel('al')
plt.ylabel('household')
# compute predicted log-odds for al=2 using the equation
logodds = logreg.intercept_ + logreg.coef_[0] * 2
logodds
# convert log-odds to odds
odds = np.exp(logodds)
odds
# convert odds to probability
prob = odds/(1 + odds)
prob
# compute predicted probability for al=2 using the predict_proba method
logreg.predict_proba(2)[:, 1]
# examine the coefficient for al
feature_cols, logreg.coef_[0]
# increasing al by 1 (so that al=3) increases the log-odds by 4.18
logodds = 0.64722323 + 4.1804038614510901
odds = np.exp(logodds)
prob = odds/(1 + odds)
prob
# compute predicted probability for al=3 using the predict_proba method
logreg.predict_proba(3)[:, 1]
# examine the intercept
logreg.intercept_
# convert log-odds to probability
logodds = logreg.intercept_
odds = np.exp(logodds)
prob = odds/(1 + odds)
prob
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question
Step2: Refresher
Step3: Interpretation
Step4: Predicting a Categorical Response
Step5: Let's change our task, so that we're predicting household using al. Let's visualize the relationship to figure out how to do this
Step6: Let's draw a regression line, like we did before
Step7: If al=3, what class do we predict for household? 1
Step8: $h_\beta(x)$ can be lower than 0 or higher than 1, which is counterintuitive
Step9: What if we wanted the predicted probabilities instead of just the class predictions, to understand how confident we are in a given prediction?
Step10: The first column indicates the predicted probability of class 0, and the second column indicates the predicted probability of class 1.
Step11: What is e? It is the base rate of growth shared by all continually growing processes
Step12: What is a (natural) log? It gives you the time needed to reach a certain level of growth
Step13: It is also the inverse of the exponential function
Step14: What is Logistic Regression?
Step15: Interpretation
Step16: Bottom line
Step17: Interpretation
|
15,381 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q --use-deprecated=legacy-resolver tflite-model-maker
!pip install -q pycocotools
import numpy as np
import os
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.config import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import object_detector
import tensorflow as tf
assert tf.__version__.startswith('2')
tf.get_logger().setLevel('ERROR')
from absl import logging
logging.set_verbosity(logging.ERROR)
spec = model_spec.get('efficientdet_lite0')
train_data, validation_data, test_data = object_detector.DataLoader.from_csv('gs://cloud-ml-data/img/openimage/csv/salads_ml_use.csv')
model = object_detector.create(train_data, model_spec=spec, batch_size=8, train_whole_model=True, validation_data=validation_data)
model.evaluate(test_data)
model.export(export_dir='.')
model.evaluate_tflite('model.tflite', test_data)
#@title Load the trained TFLite model and define some visualization functions
import cv2
from PIL import Image
model_path = 'model.tflite'
# Load the labels into a list
classes = ['???'] * model.model_spec.config.num_classes
label_map = model.model_spec.config.label_map
for label_id, label_name in label_map.as_dict().items():
classes[label_id-1] = label_name
# Define a list of colors for visualization
COLORS = np.random.randint(0, 255, size=(len(classes), 3), dtype=np.uint8)
def preprocess_image(image_path, input_size):
  # Preprocess the input image to feed to the TFLite model
img = tf.io.read_file(image_path)
img = tf.io.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.uint8)
original_image = img
resized_img = tf.image.resize(img, input_size)
resized_img = resized_img[tf.newaxis, :]
resized_img = tf.cast(resized_img, dtype=tf.uint8)
return resized_img, original_image
def detect_objects(interpreter, image, threshold):
  # Returns a list of detection results, each a dictionary of object info.
signature_fn = interpreter.get_signature_runner()
# Feed the input image to the model
output = signature_fn(images=image)
# Get all outputs from the model
count = int(np.squeeze(output['output_0']))
scores = np.squeeze(output['output_1'])
classes = np.squeeze(output['output_2'])
boxes = np.squeeze(output['output_3'])
results = []
for i in range(count):
if scores[i] >= threshold:
result = {
'bounding_box': boxes[i],
'class_id': classes[i],
'score': scores[i]
}
results.append(result)
return results
def run_odt_and_draw_results(image_path, interpreter, threshold=0.5):
  # Run object detection on the input image and draw the detection results
# Load the input shape required by the model
_, input_height, input_width, _ = interpreter.get_input_details()[0]['shape']
# Load the input image and preprocess it
preprocessed_image, original_image = preprocess_image(
image_path,
(input_height, input_width)
)
# Run object detection on the input image
results = detect_objects(interpreter, preprocessed_image, threshold=threshold)
# Plot the detection results on the input image
original_image_np = original_image.numpy().astype(np.uint8)
for obj in results:
# Convert the object bounding box from relative coordinates to absolute
# coordinates based on the original image resolution
ymin, xmin, ymax, xmax = obj['bounding_box']
xmin = int(xmin * original_image_np.shape[1])
xmax = int(xmax * original_image_np.shape[1])
ymin = int(ymin * original_image_np.shape[0])
ymax = int(ymax * original_image_np.shape[0])
# Find the class index of the current object
class_id = int(obj['class_id'])
# Draw the bounding box and label on the image
color = [int(c) for c in COLORS[class_id]]
cv2.rectangle(original_image_np, (xmin, ymin), (xmax, ymax), color, 2)
# Make adjustments to make the label visible for all objects
y = ymin - 15 if ymin - 15 > 15 else ymin + 15
label = "{}: {:.0f}%".format(classes[class_id], obj['score'] * 100)
cv2.putText(original_image_np, label, (xmin, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
# Return the final image
original_uint8 = original_image_np.astype(np.uint8)
return original_uint8
#@title Run object detection and show the detection results
INPUT_IMAGE_URL = "https://storage.googleapis.com/cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg" #@param {type:"string"}
DETECTION_THRESHOLD = 0.3 #@param {type:"number"}
TEMP_FILE = '/tmp/image.png'
!wget -q -O $TEMP_FILE $INPUT_IMAGE_URL
im = Image.open(TEMP_FILE)
im.thumbnail((512, 512), Image.ANTIALIAS)
im.save(TEMP_FILE, 'PNG')
# Load the TFLite model
interpreter = tf.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()
# Run inference and draw detection result on the local copy of the original file
detection_result_image = run_odt_and_draw_results(
TEMP_FILE,
interpreter,
threshold=DETECTION_THRESHOLD
)
# Show the detection result
Image.fromarray(detection_result_image)
! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
! echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
! sudo apt-get update
! sudo apt-get install edgetpu-compiler
NUMBER_OF_TPUS = 1#@param {type:"number"}
!edgetpu_compiler model.tflite --num_segments=$NUMBER_OF_TPUS
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Object Detection with TensorFlow Lite Model Maker
Step2: Import the required packages.
Step3: Prepare the dataset
Step4: Step 2. Load the dataset.
Step5: Step 3. Train the TensorFlow model with the training data.
Step6: Step 4. Evaluate the model with the test data.
Step7: Step 5. Export as a TensorFlow Lite model.
Step8: Step 6. Evaluate the TensorFlow Lite model.
Step12: You can download the TensorFlow Lite model file using the left sidebar of Colab. Right-click on the model.tflite file and choose Download to download it to your local computer.
Step13: (Optional) Compile For the Edge TPU
Step 1. Install the EdgeTPU Compiler
Step14: Step 2. Select number of Edge TPUs, Compile
|
15,382 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import metapack as mp
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import geopandas as gpd
regions = mp.open_package('http://library.metatab.org/sandiego.gov-police_regions-1.csv')
regions
calls_p = mp.open_package('http://library.metatab.org/sandiego.gov-police_calls-2015e-1.csv')
calls_p
calls_r = calls_p.resource('pd_calls')
calls_r
call_type_r = calls_p.resource('call_type')
call_types = call_type_r.dataframe().rename(columns={'description':'call_type_desc'})
call_types.head()
regions_r = regions.resource('pd_beats')
regions_r
# The beats.cx[:-116.8,:] bit indexes the bounding box to exclude the empty portion of the
# county. San Diego owns the footprint of a dam in east county, which displays as a tiny
# dot in the middle of empty space.
# Note that this isn't actually defining the bounding box; it's cutting out far-east regions,
# and then GeoPandas creates the smaller bounding box that excludes them. So, the actual
# value in the cx indexer can vary a bit.
# Converting to float makes merging with the calls df easier, since the beat column
# in that df has nans.
beats = regions_r.dataframe().geo
beats['beat'] = beats.beat.astype(float)
beats = beats.set_index('beat').cx[:-116.55,:]
beats.plot();
pd_calls = calls_r.read_csv(low_memory=False)
def augment_time(df):
df['date_time'] = pd.to_datetime(df.date_time)
df['hour'] = df.date_time.dt.hour
df['month'] = df.date_time.dt.month
df['year'] = df.date_time.dt.year
df['dayofweek'] = df.date_time.dt.dayofweek
df['weekofyear'] = df.date_time.dt.weekofyear
df['weekofdata'] = (df.year-df.year.min())*52+df.date_time.dt.weekofyear
df['monthofdata'] = (df.year-df.year.min())*12+df.date_time.dt.month
return df
assert pd_calls.call_type.dtype == call_types.call_type.dtype
pd_calls = augment_time(pd_calls).merge(call_types, on='call_type')
pd_calls['beat'] = pd_calls.beat.astype(float)
pd_calls = pd_calls.merge(beats.reset_index()[['beat', 'name']], on='beat')\
.rename(columns={'name':'beat_name'})
def plot_geo(df, color_col, title=None):
# Need to get aspect right or it looks wacky
bb = beats.total_bounds
aspect = (bb[3] - bb[1])/ (bb[2]-bb[0])
x_dim = 8
fig = plt.figure(figsize = (x_dim,x_dim*aspect))
ax = fig.add_subplot(111)
df.plot(ax=ax,column=color_col, cmap='RdYlGn_r',
scheme='fisher_jenks', legend=True);
if title:
fig.suptitle(title, fontsize=18);
leg = ax.get_legend()
#leg.set_bbox_to_anchor((0., 1.02, 1., .102))
leg.set_bbox_to_anchor((1,.5))
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
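# Aggregate incident counts per beat and join them onto the beat polygons, then map them
# with plot_geo as a choropleth.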
_ = gpd.GeoDataFrame(pd_calls.groupby('beat').incident_num.count().to_frame()\
.join(beats))
plot_geo(_, 'incident_num', 'Incidents Per Beat, 2015 to Aug 2017')
pd_calls.call_type_desc.value_counts().iloc[10:30]
_ = gpd.GeoDataFrame(pd_calls[pd_calls.call_type_desc == 'LOUD PARTY']
.groupby('beat')
.incident_num.count().to_frame()\
.join(beats))
plot_geo(_, 'incident_num', "LOUD PARTY calls, 2015 to Aug 2017")
_ = gpd.GeoDataFrame(pd_calls[pd_calls.call_type_desc == 'ILLEGAL PARKING']
.groupby('beat')
.incident_num.count().to_frame()\
.join(beats))
plot_geo(_, 'incident_num', "ILLEGAL PARKING calls, 2015 to Aug 2017")
_ = pd_calls[pd_calls.call_type_desc == 'BATTERY']\
.groupby('beat')\
.incident_num.count().to_frame()\
.join(beats)
plot_geo(gpd.GeoDataFrame(_), 'incident_num', "BATTERY calls, 2015 to Aug 2017")
_.sort_values('incident_num', ascending=False).head(10)
# Could also get the beats by name.
pb_beat = beats[beats.name=='PACIFIC BEACH'].index.values[0]
gas_beat = beats[beats.name=='GASLAMP'].index.values[0]
low_crime = _.sort_values('incident_num', ascending=False).iloc[5:]
_lc = _.loc[list(low_crime.index.values)]
plot_geo(gpd.GeoDataFrame(_lc), 'incident_num',
"BATTERY calls, 2015 to Aug 2017, Lower Crime Areas")
_ = gpd.GeoDataFrame(pd_calls[pd_calls.call_type_desc == 'BATTERY']
.groupby('beat')
.incident_num.count().to_frame()\
.join(beats))
plot_geo(_, 'incident_num', "BATTERY calls, 2015 to Nov 2017")
_ = gpd.GeoDataFrame(pd_calls[pd_calls.call_type_desc == 'MENTAL CASE']
.groupby('beat')
.incident_num.count().to_frame()\
.join(beats))
plot_geo(_, 'incident_num', "MENTAL CASE calls, 2015 to Aug 2017")
pb_beat = beats[beats.name=='PACIFIC BEACH'].index.values[0]
_ = pd_calls[(pd_calls.call_type_desc=='LOUD PARTY') & (pd_calls.beat == pb_beat)]
ht = pd.pivot_table(data=_,
values='incident_num', index=['hour'],columns=['dayofweek'],
aggfunc='count')
fig, ax = plt.subplots(figsize=(6,6))
sns.heatmap(ht, ax=ax);
pb_beat = beats[beats.name=='PACIFIC BEACH'].index.values[0]
_ = pd_calls[(pd_calls.call_type_desc=='LOUD PARTY') & (pd_calls.beat == pb_beat)]
fig, ax = plt.subplots(figsize=(8,8))
fig.suptitle("LOUD PARTY Calls In Pacific Beach\n2015 to Aug 2017\nBy Hour and Month", fontsize=18);
sns.heatmap(ht, ax=ax);
hm_beats = pd_calls[['beat_name', 'hour','month']].copy()
hm_beats['count'] = 1
hm_beats = hm_beats.groupby(['beat_name', 'hour','month']).count().reset_index()
# Top 16 beats
top_beats= pd_calls.beat_name.value_counts().index.values[:16]
from IPython.display import display
# select only the rows for the top 16 beats
_ = hm_beats[hm_beats.beat_name.isin(top_beats)]
g = sns.FacetGrid(_, col="beat_name", col_wrap=4)
def facet_heatmap(data, color, **kwargs):
ht = data.pivot(index="hour", columns='month', values='count')
sns.heatmap(ht, cmap='Reds', **kwargs)
#cbar_ax = g.fig.add_axes([.92, .3, .02, .4]) # Create a colorbar axes
with sns.plotting_context(font_scale=3.5):
g = g.map_dataframe(facet_heatmap)
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
g.fig.suptitle("LOUD PARTY Calls By Month of Year, By Hour of Day, By Beat",
fontsize=18);
_ = pd_calls[pd_calls.beat_name.isin(['SAN YSIDRO'])]
ax = sns.kdeplot(_.month, _.hour, shade=True)
ax.invert_yaxis()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Metapack system allows for packaging data along with all of its metadata, and the open_package function can be used to load packages off the web. The URL below is to a CSV package, which just refers to CSV files on the web. You can get the link to the CSV package file from the resource for sandiego.gov-police_regions-1 in the SRDRL data library repository page for the package
Step2: After opening packages, we can ask the package for what resources it has, download those resources, and turn them into Pandas dataframes.
Step3: There are a lot of interesting patterns in crime data when you create heat maps of two time dimensions, a visualization called a "Rhythm Map". We'll add the time dimensions now for use later.
Step4: Incident Count Maps
Step5: Sometimes, very high density areas like PB and Downtown will obscure patterns in other areas. One of the ways to handle this is to just exclude those areas. First, let's locate the highest-crime areas.
Step6: Here is the map excluding the top 5 high crime areas. The excluded areas are omitted completely, shown in white.
Step7: Rhythm Maps
Step8: Looking at the hour of day versus month, there is a clear seasonal pattern, with fewer loud party calls during the winter.
Step9: Small Multiple Rhythm Maps
Step10: A KDE plot can show similar information to a heat map, but with a very different algorithm (see Kernel Density Estimators for more information). This view of the San Ysidro map shows the 3
|
15,383 | <ASSISTANT_TASK:>
Python Code:
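# Imports assumed by the code below; this excerpt starts mid-notebook, and the startups
# dataframe `df` is loaded in an earlier cell that is not shown here.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import xgboost as xgb
from math import sqrt
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LinearRegression, Lasso, LassoCV
from sklearn.metrics import mean_squared_error, r2_score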
df_null_idx = df[df.isnull().sum(axis = 1) > 0].index
df.iloc[df_null_idx]
median_values = df.groupby("State")[["R&D Spend", "Marketing Spend"]].median()
median_values
df["R&D Spend"] = df.apply(lambda row: median_values.loc[row["State"], "R&D Spend"] if np.isnan(row["R&D Spend"]) else row["R&D Spend"], axis = 1 )
df["Marketing Spend"] = df.apply(lambda row: median_values.loc[row["State"], "Marketing Spend"] if np.isnan(row["Marketing Spend"]) else row["Marketing Spend"], axis = 1 )
df.iloc[df_null_idx]
# Check if there are any more null values.
df.isnull().sum()
plt.figure(figsize = (8, 6))
plt.subplot(2, 1, 1)
df.Profit.plot.hist(bins = 10, normed = True)
df.Profit.plot.kde(title = "Histogram of Profit")
plt.subplot(2, 1, 2)
df.Profit.plot.box(vert = False, title = "Boxplot of Profit")
plt.tight_layout()
sns.pairplot(df)
df.groupby("State").Profit.mean().sort_values().plot.bar(title = "Avg Profit by State")
plt.xlabel("State")
plt.ylabel("Profit")
y = df.Profit.values
y
df_features = df.iloc[:, 0:4]
df_dummied = pd.get_dummies(df_features, columns=["State"], drop_first=True)
df_dummied.sample(10)
X = df_dummied.values
X[0, :]
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
pd.DataFrame(X_std).head()
X_train, X_test, y_train, y_test = train_test_split(X_std, y,
test_size = 0.3, random_state = 100)
print("Training set: ", X_train.shape, y_train.shape)
print("Test set: ", X_test.shape, y_test.shape)
X_train.shape[0] / df.shape[0]
lr = LinearRegression()
lr.fit(X_train, y_train)
lr.intercept_, lr.coef_
y_test_pred = lr.predict(X_test)
output = pd.DataFrame({"actual": y_test, "prediction": y_test_pred})
output["error"] = output.actual - output.prediction
output
X_test_inv = scaler.inverse_transform(X_test)
plt.scatter(X_test_inv[:, 0], y_test, alpha = 0.3, c = "blue", label = "Actual")
plt.scatter(X_test_inv[:, 0], y_test_pred, c = "red", label = "Predicted")
plt.xlabel("R&D Spend")
plt.ylabel("Profit")
plt.title("Profit Actual vs Estimate")
plt.legend()
np.mean((y_test_pred - y_test) ** 2)
y_train_pred = lr.predict(X_train)
print("Test rmse: ", sqrt(mean_squared_error(y_test, y_test_pred)),
"\nTraining rmse:", sqrt(mean_squared_error(y_train, y_train_pred)))
r2_score(y_test, y_test_pred), r2_score(y_train, y_train_pred)
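# Manual R^2 for comparison: R2 = 1 - SSR/SST, i.e. one minus the ratio of the residual
# sum of squares to the total sum of squares.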
SSR = np.sum((y_train - y_train_pred) ** 2) # Sum of squared residuals
SST = np.sum((y_train - np.mean(y_train_pred)) ** 2) # Sum of squared totals
R2 = 1 - SSR/SST
R2
from sklearn.feature_selection import f_regression
_, p_vals = f_regression(X_train, y_train)
p_vals
pd.DataFrame({"feature": df_dummied.columns, "p_value": p_vals})
df = pd.read_csv("/data/Combined_Cycle_Power_Plant.csv")
df.head()
X = df.iloc[:, 0:4].values
y = df.PE.values
sns.pairplot(df)
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_std, y, test_size = 0.3, random_state = 1)
def rmse(y_true, y_pred):
return sqrt(mean_squared_error(y_true, y_pred))
lr = LinearRegression(normalize=False)
lr.fit(X_train, y_train)
y_train_pred = lr.predict(X_train)
y_test_pred = lr.predict(X_test)
rmse(y_test, y_test_pred)
from scipy import stats
residuals = y_test - y_test_pred
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.scatter(y_test, residuals)
plt.xlabel("y_test")
plt.ylabel("Residuals")
plt.hlines([0], xmin = 420, xmax = 500, linestyles = "dashed")
plt.subplot(1, 2, 2)
stats.probplot(residuals, plot=plt)
poly = PolynomialFeatures(degree=2)
X = df.iloc[:, 0:4].values
X_poly = poly.fit_transform(X)
X_poly_train, X_poly_test, y_train, y_test = train_test_split(X_poly, y, test_size = 0.3, random_state = 100)
X_poly_train_std = scaler.fit_transform(X_poly_train)
X_poly_test_std = scaler.transform(X_poly_test)
pd.DataFrame(X_poly_train_std).head()
lr.fit(X_poly_train_std, y_train)
print("Train rmse: ", rmse(y_train, lr.predict(X_poly_train_std)))
print("Test rmse: ", rmse(y_test, lr.predict(X_poly_test_std)))
print(lr.intercept_, lr.coef_)
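# Lasso (L1) regularization shrinks the polynomial-feature coefficients and can drive some
# of them to exactly zero, which helps counteract the overfitting seen above.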
lasso = Lasso(alpha=0.03, max_iter=10000, normalize=False, random_state=100)
lasso.fit(X_poly_train_std, y_train)
print("Train rmse: ", rmse(y_train, lasso.predict(X_poly_train_std)))
print("Test rmse: ", rmse(y_test, lasso.predict(X_poly_test_std)))
print(lasso.intercept_, lasso.coef_)
X_poly_std = scaler.fit_transform(X_poly)
lasso = Lasso(alpha=0.03, max_iter=10000, random_state=100)
scores = cross_val_score(lasso, X_poly_std, y, cv = 10, scoring="neg_mean_squared_error")
scores = np.sqrt(-scores)
print("RMSE scores", scores)
print("Mean rmse: ", np.mean(scores))
from sklearn.pipeline import Pipeline
pipeline = Pipeline(steps = [
("poly", PolynomialFeatures(degree=2, include_bias=False)),
("scaler", StandardScaler()),
("lasso", Lasso(alpha=0.03, max_iter=10000, normalize=False, random_state=1))
])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 1)
pipeline.fit(X_train, y_train)
rmse(y_test, pipeline.predict(X_test))
# Find best alpha
lassocv = LassoCV(cv = 10, max_iter=10000, tol=1e-5)
lassocv.fit(X_poly_std, y)
print("Lassocv alpha: ", lassocv.alpha_)
# Apply the best alpha to find cross validation score
lasso = Lasso(alpha = lassocv.alpha_, max_iter=10000, random_state=100)
scores = cross_val_score(lasso, X_poly_std, y, cv = 10, scoring="neg_mean_squared_error")
print("Mean rmse: ", np.mean(np.sqrt(-scores)))
coefs = []
alphas = 10 ** np.linspace(-5, 5, 20)
for alpha in alphas:
lasso = Lasso(alpha=alpha, max_iter=10000, tol=1e-5,random_state=100)
lasso.fit(X_poly_std, y)
coefs.append(lasso.coef_)
plt.plot(alphas, coefs)
plt.xscale("log")
plt.xlabel("Alpha (penalty term on the coefficients)")
plt.ylabel("Coefficients of the features")
poly = PolynomialFeatures(degree=2)
X = df.iloc[:, 0:4].values
X_poly = poly.fit_transform(X)
X_poly_train, X_poly_test, y_train, y_test = train_test_split(X_poly, y, test_size = 0.3, random_state = 100)
X_poly_train_std = scaler.fit_transform(X_poly_train)
X_poly_test_std = scaler.transform(X_poly_test)
gbm = xgb.XGBRegressor(max_depth=10, learning_rate=0.1, n_estimators=100,
objective='reg:linear', booster='gbtree',
reg_alpha=0.01, reg_lambda=1, random_state=0)
gbm.fit(X_poly_train_std, y_train)
print("rmse:", rmse(y_test, gbm.predict(X_poly_test_std)))
param = {'silent':1,
'objective':'reg:linear',
'booster':'gbtree',
'alpha': 0.01,
'lambda': 1
}
dtrain = xgb.DMatrix(X_poly_train_std, label=y_train)
dtest = xgb.DMatrix(X_poly_test_std, label=y_test)
watchlist = [(dtrain,'eval'), (dtest, 'train')]
num_round = 100
bst = xgb.train(param, dtrain, num_round, watchlist, verbose_eval=False)
print("rmse:", rmse(y_test, bst.predict(dtest)))
plt.figure(figsize=(8, 10))
xgb.plot_importance(bst)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's see the distribution of Profit using a histogram plot and check whether there are any outliers in the data using a boxplot.
Step2: Profit has one outlier. We could take a log scale to remove the outlier value before doing any prediction, but for now, let's ignore the outlier.
Step3: The pair plot displays only the numeric columns. Let's see how the average Profit varies for each State.
Step4: Average Profit is highest in the state of Florida and lowest in California.
Step5: Create dummy variables for categorical feature.
Step6: The State column has been replaced by two additional columns - one for Florida and one for NY. The first categorical value (CA) has been dropped to avoid the collinearity issue.
Step7: Let's normalize the feature values to bring them to a similar scale.
Step8: Split the X and y into training and test sets.
Step9: Ratio of the size of the training data
Step10: Fit linear regression model
Step11: By looking at the coefficients, we can conclude that R&D Spend has the highest influence on the outcome variable.
Step12: The simplest prediction model could have been the average. Let's see how the model did overall against one feature.
Step13: Compare the root mean squared error (RMSE) of test dataset against the training.
Step14: The r2 score can have a max value of 1; negative values of R2 mean a suboptimal model
Step15: On the training set, both the RMSE and R2 scores are naturally better than those on the test dataset.
Step16: R2 can be viewed as (1 - mse/variance(y))
Step17: The p-values indicate the significance of each feature. A p-value < 0.05 indicates that the corresponding feature is statistically significant. We can rebuild the model excluding the non-significant features one by one until all remaining features are significant.
Step18: The residual plot shows there are outliers in the lower end of the y_test values. The Q-Q plot shows that the residuals do not exhibit normality, indicating non-linearity in the model.
Step19: Polynomial regression generally suffers from overfitting. Let's regularize the model using Lasso.
Step20: Let's compute the cross-validation score - it is a more reliable estimate because every piece of data is used for both training and testing.
Step21: Encapsulate the steps in a pipeline
Step22: LassoCV helps find the best alpha. We could also use model tuning techniques to find the best alpha.
Step23: Look at the coefficient values. Many of the coefficients are now zero, making the model parsimonious and hence more robust - that is, less prone to overfitting.
Step24: From this graph, which alpha value should we select? That question can be answered by looking at which alpha value gives the best performance (RMSE, for example). The LassoCV function does that for us, or we can use model tuning techniques such as grid search - that will be explained later.
|
15,384 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
from pprint import pprint
import shutil
import subprocess
import urllib.request
import h5py
import numpy as np
import matplotlib.pyplot as plt
import openmc.data
# Download ENDF file
url = 'https://t2.lanl.gov/nis/data/data/ENDFB-VII.1-neutron/Gd/157'
filename, headers = urllib.request.urlretrieve(url, 'gd157.endf')
# Load into memory
gd157_endf = openmc.data.IncidentNeutron.from_endf(filename, covariance=True)
gd157_endf
gd157_endf.resonance_covariance.ranges[0].parameters[:5]
covariance = gd157_endf.resonance_covariance.ranges[0].covariance
plt.imshow(covariance, cmap='seismic',vmin=-0.008, vmax=0.008)
plt.colorbar()
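# Convert the covariance matrix to a correlation matrix: corr[i, j] = cov[i, j] / sqrt(cov[i, i] * cov[j, j]).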
corr = np.zeros([len(covariance),len(covariance)])
for i in range(len(covariance)):
for j in range(len(covariance)):
corr[i, j]=covariance[i, j]/covariance[i, i]**(0.5)/covariance[j, j]**(0.5)
plt.imshow(corr, cmap='seismic',vmin=-1.0, vmax=1.0)
plt.colorbar()
rm_resonance = gd157_endf.resonances.ranges[0]
n_samples = 5
samples = gd157_endf.resonance_covariance.ranges[0].sample(n_samples)
type(samples[0])
print('Sample 1')
samples[0].parameters[:5]
print('Sample 2')
samples[1].parameters[:5]
gd157_endf.resonances.ranges
energy_range = [rm_resonance.energy_min, rm_resonance.energy_max]
energies = np.logspace(np.log10(energy_range[0]),
np.log10(energy_range[1]), 10000)
for sample in samples:
xs = sample.reconstruct(energies)
elastic_xs = xs[2]
plt.loglog(energies, elastic_xs)
plt.xlabel('Energy (eV)')
plt.ylabel('Cross section (b)')
lower_bound = 2; # inclusive
upper_bound = 2; # inclusive
rm_res_cov_sub = gd157_endf.resonance_covariance.ranges[0].subset('J',[lower_bound,upper_bound])
rm_res_cov_sub.file2res.parameters[:5]
rm_res_cov_sub.covariance
gd157_endf.resonance_covariance.ranges[0].covariance.shape
old_n_parameters = gd157_endf.resonance_covariance.ranges[0].parameters.shape[0]
old_shape = gd157_endf.resonance_covariance.ranges[0].covariance.shape
new_n_parameters = rm_res_cov_sub.file2res.parameters.shape[0]
new_shape = rm_res_cov_sub.covariance.shape
print('Number of parameters\nOriginal: '+str(old_n_parameters)+'\nSubet: '+str(new_n_parameters)+'\nCovariance Size\nOriginal: '+str(old_shape)+'\nSubset: '+str(new_shape))
samples_sub = rm_res_cov_sub.sample(n_samples)
samples_sub[0].parameters[:5]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ENDF
Step2: We can access the parameters contained within File 32 in a similar manner to the File 2 parameters from before.
Step3: The newly created object will contain multiple resonance regions within gd157_endf.resonance_covariance.ranges. We can access the full covariance matrix from File 32 for a given range by
Step4: This covariance matrix currently only stores the upper triangular portion as covariance matrices are symmetric. Plotting the covariance matrix
Step5: The correlation matrix can be constructed using the covariance matrix and also gives some insight into the relations among the parameters.
Step6: Sampling and Reconstruction
Step7: The sampling routine requires the incorporation of the openmc.data.ResonanceRange for the same resonance range object. This allows each sample itself to be its own openmc.data.ResonanceRange with a new set of parameters. Looking at some of the sampled parameters below
Step8: We can reconstruct the cross section from the sampled parameters using the reconstruct method of openmc.data.ResonanceRange. For more on reconstruction see the Nuclear Data example notebook.
Step9: Subset Selection
Step10: The subset method will also store the corresponding subset of the covariance matrix
Step11: Checking the size of the new covariance matrix to be sure it was sampled properly
Step12: And finally, we can sample from the subset as well
|
15,385 | <ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import re
import sys
import numpy as np
# Connect to the database backend and initialize a Snorkel session
from lib.init import *
from lib.scoring import *
from lib.lf_factories import *
from snorkel.lf_helpers import test_LF
from snorkel.annotations import load_gold_labels
from snorkel.lf_helpers import (
get_left_tokens, get_right_tokens, get_between_tokens,
get_text_between, get_tagged_text,
)
# initialize our candidate type definition
Spouse = candidate_subclass('Spouse', ['person1', 'person2'])
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
from snorkel.viewer import SentenceNgramViewer
# load our list of training & development candidates
train_cands = session.query(Candidate).filter(Candidate.split == 0).all()
dev_cands = session.query(Candidate).filter(Candidate.split == 1).all()
SentenceNgramViewer(train_cands[0:500], session, n_per_page=1)
marriage = {'husband', 'wife'}
# we'll initialize our LFG and test its coverage on training candidates
LF_marriage = MatchTerms(name='marriage', terms=marriage, label=1, search='between').lf()
# what candidates are covered by this LF?
labeled = coverage(session, LF_marriage, split=0)
# now let's view what this LF labeled
SentenceNgramViewer(labeled, session, n_per_page=1)
tp, fp, tn, fn = error_analysis(session, LF_marriage, split=1, gold=L_gold_dev)
# now let's view what this LF labeled
SentenceNgramViewer(fp, session, n_per_page=1)
other_relationship = {'boyfriend', 'girlfriend'}
LF_other_relationship = MatchTerms(name='other_relationship', terms=other_relationship,
label=-1, search='left', window=1).lf()
labeled = coverage(session, LF_other_relationship, split=1)
# now let's view what this LF labeled
SentenceNgramViewer(labeled, session, n_per_page=1)
exes_rgxs = {' ex[- ](husband|wife)'}
LF_exes = MatchRegex(name='exes', rgxs=exes_rgxs, label=-1, search='between').lf()
labeled = coverage(session, LF_exes, split=1)
# now let's view what this LF labeled
SentenceNgramViewer(labeled, session, n_per_page=1)
from lib.dbpedia import known_spouses
list(known_spouses)[0:5]
LF_distant_supervision = DistantSupervision("dbpedia", kb=known_spouses).lf()
labeled = coverage(session, LF_distant_supervision, split=1)
# score our LF against dev set labels
score(session, LF_distant_supervision, split=1, gold=L_gold_dev)
SentenceNgramViewer(labeled, session, n_per_page=1)
def LF_too_far_apart(c):
    # Person mentions occur at a distance > 50 words
return -1 if len(list(get_between_tokens(c))) > 50 else 0
def LF_marriage_and_too_far_apart(c):
return 1 if LF_too_far_apart(c) != -1 and LF_marriage(c) == 1 else 0
LF_marriage_and_not_same_person = lambda c: LF_too_far_apart(c) != -1 and LF_marriage(c)
score(session, LF_marriage_and_too_far_apart, split=1, gold=L_gold_dev)
#
# PLACE YOUR LFs HERE
#
LFs = [
# place your lf function variable names here
]
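# For example, one possible (hypothetical) choice using the labeling functions defined above:
# LFs = [LF_marriage, LF_other_relationship, LF_exes, LF_distant_supervision, LF_too_far_apart]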
from snorkel.annotations import LabelAnnotator
labeler = LabelAnnotator()
np.random.seed(1701)
%time L_train = labeler.apply(split=0, lfs=LFs, parallelism=1)
print(L_train.shape)
%time L_dev = labeler.apply_existing(split=1, lfs=LFs, parallelism=1)
print(L_dev.shape)
L_train.lf_stats(session)
L_dev.lf_stats(session, labels=L_gold_dev.toarray().ravel())
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I. Background
Step2: C. Data Exploration
Step3: D. Labeling Function Metrics
Step4: Viewing Error Buckets
Step5: Other Search Contexts
Step6: 4. Regular Expression Factory
Step7: B. Distant Supervision Labeling Functions
Step9: C. Writing Custom Labeling Functions
Step10: labeled = coverage(session, LF_too_far_apart, split=1)
Step11: VI. Development Sandbox
Step12: B. Applying Labeling Functions
Step13: Then we setup the label annotator class
Step14: 2. Generating the Label Matrix
Step15: 3. Label Matrix Empirical Accuracies
|
15,386 | <ASSISTANT_TASK:>
Python Code:
from cellpy.utils import easyplot
files = [
# "./data/raw/20160805_test001_45_cc_01.res",
# "./data/raw/20160805_test001_45_cc_01_copy.res"# ,
# "./data/20210430_seam10_01_01_cc_01_Channel_48_Wb_1.xlsx.csv# ",
# "./data/20210430_seam10_01_02_cc_01_Channel_49_Wb_1.xlsx.cs# v",
# "20210630_seam13_03_02_cc_# 01",
# "20210630_seam13_03_03_cc# _01", # "20210630_seam13_04_01_c# c_01
# "20210630_seam13_04_02_# cc_01",
# "20210630_seam13_04_03_cc_01",
]
easyplot.help()
ezplt = easyplot.EasyPlot(
files,
None,
cyclelife_plot=True,
cyclelife_percentage=False,
cyclelife_coulombic_efficiency=True,
cyclelife_coulombic_efficiency_ylabel="Coulombic efficiency [%]",
cyclelife_xlabel="Cycles",
cyclelife_ylabel=r"Capacity $\left[\frac{mAh}{g}\right]$",
cyclelife_ylabel_percent="Capacity retention [%]",
cyclelife_legend_outside=True, # if True, the legend is placed outside the plot
galvanostatic_plot=True,
galvanostatic_potlim=(0, 1), # min and max limit on potential-axis
galvanostatic_caplim=None,
galvanostatic_xlabel=r"Capacity $\left[\frac{mAh}{g}\right]$",
galvanostatic_ylabel="Cell potential [V]",
dqdv_plot=True,
dqdv_potlim=None, # min and max limit on potential-axis
dqdv_dqlim=None,
dqdv_xlabel="Cell potential [V]",
dqdv_ylabel=r"dQ/dV $\left[\frac{mAh}{gV}\right]$",
specific_cycles=None, # []
exclude_cycles=[1, 2],
all_in_one=False,
# only_dischg = True,
only_chg=False,
outpath="./ezplots/deleteme/",
figsize=(6, 4), # 6 inches wide, 4 inches tall
figres=100, # Dots per inch
figtitle=None, # None = original filepath
)
ezplt.set_arbin_sql_credentials("localhost", "sa", "Amund1234", "SQL Server")
ezplt.plot()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2
Step2: 3
Step3: 3a
Step4: 4
|
15,387 | <ASSISTANT_TASK:>
Python Code:
from IPython.core.display import HTML
import os
def css_styling():
    # Load default custom.css file from ipython profile
base = os.getcwd()
styles = "<style>\n%s\n</style>" % (open(os.path.join(base,'files/custom.css'),'r').read())
return HTML(styles)
css_styling()
import numpy as np
import sklearn as sk
import matplotlib.pyplot as plt
import sklearn.datasets as datasets
%matplotlib inline
X, Y = datasets.make_blobs(centers=2)
print("Informacion sobre X:")
print(X.shape)
print(X)
print("Informacion sobre Y:")
print(Y.shape)
print(Y)
plt.scatter(X[:,0], X[:,1], c=Y);
from sklearn.cluster import KMeans
kmeans = KMeans(4)
Y_pred = kmeans.fit(X).labels_
print(Y_pred)
plt.scatter(X[:,0], X[:,1], c=Y_pred);
error=kmeans.score(X,Y)
print("El error es : %f "%error)
plt.scatter(X[:,0], X[:,1], c=Y_pred, alpha=0.4)
mu = kmeans.cluster_centers_
plt.scatter(mu[:,0], mu[:,1], s=100, c=np.unique(Y_pred))
print(mu)
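# Elbow-style check: fit k-means for several values of k and plot the score for each.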
ks =[ 2,5,8,10,20,40,60,80,100]
error=[]
for k in ks:
kmeans = KMeans(k)
kmeans.fit(X)
error.append(kmeans.score(X,Y))
plt.plot(ks,error,'-o')
plt.show()
from sklearn.cluster import KMeans
from sklearn.utils import shuffle
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
Y = digits.target
#X_digits, _,_, Y_digits = load_digits() # MNIST data
#X_digits, Y_digits = shuffle(X_digits,Y_digits) # shuffle the data randomly
#X_digits = X_digits[-5000:] # take only the last instances, to shorten runtime of KMeans
plt.rc("image", cmap="binary") # use black/white palette for plotting
for i in xrange(10):
plt.subplot(2,5,i+1)
    plt.imshow(X[i].reshape(8, 8))  # load_digits images are 8x8, not 28x28
plt.xticks(())
plt.yticks(())
plt.tight_layout()
kmeans = KMeans(20)
mu_digits = kmeans.fit(X).cluster_centers_
plt.figure(figsize=(16,6))
for i in xrange(2*(mu_digits.shape[0]/2)): # loop over all means
plt.subplot(2,mu_digits.shape[0]/2,i+1)
    plt.imshow(mu_digits[i].reshape(8, 8))  # cluster centers are 8x8 like the input digits
plt.xticks(())
plt.yticks(())
plt.tight_layout()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <i class="fa fa-diamond"></i> Primero pimpea tu libreta!
Step2: Introduccion a Machine Learning
Step3: El panorama!
Step4: A visualizar!
Step5: k-means al rescate!
Step6: mas bonito
Step7: podemos cuantificar el error para ver que tal
Step8: y visualizar los centros
Step9: Finalemente vamos a visualizar el error conforme el numero de K's
Step10: Le atinamos?
Step11: vamos a ver que acabamos de lodear
Step12: corremos k-means
Step13: y visualizamos
|
15,388 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
arr = (np.random.rand(100, 50)-0.5) * 50
result = arr.copy()
arr[np.where(result < -10)] = 0
arr[np.where(result >= 15)] = 30
arr[np.logical_and(result >= -10, result < 15)] += 5
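# (Added sketch, not in the original) The same three-way update can be written in one
# pass with np.select; `result` is the untouched copy defined above, so `arr_alt`
# should match `arr` exactly.
conditions = [result < -10, result >= 15, (result >= -10) & (result < 15)]
choices = [0, 30, result + 5]
arr_alt = np.select(conditions, choices)
assert np.allclose(arr_alt, arr)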
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
15,389 | <ASSISTANT_TASK:>
Python Code:
def countDiv(arr):
    rem0 = 0
    rem1 = 0
    rem2 = 0
    for i in arr:
        digitSum = 0
        for digit in str(i):
            digitSum += int(digit)
        if digitSum % 3 == 0:
            rem0 += 1
        elif digitSum % 3 == 1:
            rem1 += 1
        else:
            rem2 += 1
    return rem0 // 2 + min(rem1, rem2)
arr = [5, 3, 2, 8, 7]
print(countDiv(arr))
<END_TASK>
| <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
15,390 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.datasets import load_breast_cancer
import numpy as np
from functools import reduce
# Import our custom utilities
from imp import reload
from utils import irf_jupyter_utils
from utils import irf_utils
reload(irf_jupyter_utils)
reload(irf_utils)
%timeit
X_train, X_test, y_train, y_test, rf = irf_jupyter_utils.generate_rf_example(sklearn_ds = load_breast_cancer())
print("Training feature dimensions", X_train.shape, sep = ":\n")
print("\n")
print("Training outcome dimensions", y_train.shape, sep = ":\n")
print("\n")
print("Test feature dimensions", X_test.shape, sep = ":\n")
print("\n")
print("Test outcome dimensions", y_test.shape, sep = ":\n")
print("\n")
print("first 5 rows of the training set features", X_train[:5], sep = ":\n")
print("\n")
print("first 5 rows of the training set outcomes", y_train[:5], sep = ":\n")
X_train.shape[0]
breast_cancer = load_breast_cancer()
breast_cancer.data.shape[0]
# Import our custom utilities
rf.n_estimators
estimator0 = rf.estimators_[0] # First tree
estimator1 = rf.estimators_[1] # Second tree
estimator2 = rf.estimators_[2] # Third tree
tree_dat0 = irf_utils.get_tree_data(X_train = X_train, dtree = estimator0, root_node_id = 0)
tree_dat1 = irf_utils.get_tree_data(X_train = X_train, dtree = estimator1, root_node_id = 0)
tree_dat2 = irf_utils.get_tree_data(X_train = X_train, dtree = estimator2, root_node_id = 0)
# Now plot the trees individually
irf_jupyter_utils.draw_tree(decision_tree = estimator0)
irf_jupyter_utils.pretty_print_dict(inp_dict = tree_dat0)
# Count the number of samples passing through the leaf nodes
sum(tree_dat0['tot_leaf_node_values'])
feature_importances = rf.feature_importances_
std = np.std([dtree.feature_importances_ for dtree in rf.estimators_]
, axis=0)
feature_importances_rank_idx = np.argsort(feature_importances)[::-1]
# Check that the feature importances are standardized to 1
print(sum(feature_importances))
# Print the feature ranking
print("Feature ranking:")
for f in range(X_train.shape[1]):
print("%d. feature %d (%f)" % (f + 1
, feature_importances_rank_idx[f]
, feature_importances[feature_importances_rank_idx[f]]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(X_train.shape[1])
, feature_importances[feature_importances_rank_idx]
, color="r"
, yerr = std[feature_importances_rank_idx], align="center")
plt.xticks(range(X_train.shape[1]), feature_importances_rank_idx)
plt.xlim([-1, X_train.shape[1]])
plt.show()
# Import our custom utilities
from imp import reload
from utils import irf_jupyter_utils
from utils import irf_utils
reload(irf_jupyter_utils)
reload(irf_utils)
rf.n_classes_
estimator0.n_classes_
type(rf).__name__
rf_metrics = irf_utils.get_validation_metrics(inp_class_reg_obj = rf, y_true = y_test, X_test = X_test)
rf_metrics['confusion_matrix']
# CHECK: If the random forest objects are going to be really large in size
# we could just omit them and only return our custom summary outputs
rf_metrics = irf_utils.get_validation_metrics(inp_class_reg_obj = rf, y_true = y_test, X_test = X_test)
all_rf_outputs = {"rf_obj" : rf,
"feature_importances" : feature_importances,
"feature_importances_rank_idx" : feature_importances_rank_idx,
"rf_metrics" : rf_metrics}
# CHECK: The following should be paralellized!
# CHECK: Whether we can maintain X_train correctly as required
for idx, dtree in enumerate(rf.estimators_):
dtree_out = irf_utils.get_tree_data(X_train = X_train, dtree = dtree, root_node_id = 0)
# Append output to dictionary
all_rf_outputs["dtree" + str(idx)] = dtree_out
estimator0_out = irf_utils.get_tree_data(X_train=X_train,
dtree=estimator0,
root_node_id=0)
print(estimator0_out['all_leaf_nodes'])
print(estimator0_out['all_leaf_nodes'])
print(sum(estimator0_out['tot_leaf_node_values']))
print(estimator0_out['tot_leaf_node_values'])
print(estimator0_out['all_leaf_node_samples'])
print(estimator0.tree_.n_node_samples[0])
print([round(i, 1) for i in estimator0_out['all_leaf_node_samples_percent']])
print(sum(estimator0_out['all_leaf_node_samples_percent']))
irf_jupyter_utils.pretty_print_dict(inp_dict = all_rf_outputs)
irf_jupyter_utils.pretty_print_dict(inp_dict = all_rf_outputs['rf_metrics'])
all_rf_outputs['dtree0']
uniq_feature_paths = all_rf_outputs['dtree0']['all_uniq_leaf_paths_features']
leaf_node_classes = all_rf_outputs['dtree0']['all_leaf_node_classes']
ones_only = [i for i, j in zip(uniq_feature_paths, leaf_node_classes)
if j == 1]
ones_only
print("Number of leaf nodes", len(all_rf_outputs['dtree0']['all_uniq_leaf_paths_features']), sep = ":\n")
print("Number of leaf nodes with 1 class", len(ones_only), sep = ":\n")
# Just pick the last seven cases, we are going to manually construct
# binary RIT of depth 3 i.e. max 2**3 -1 = 7 intersecting nodes
ones_only_seven = ones_only[-7:]
ones_only_seven
# Construct a binary version of the RIT manually!
# This should come in useful for unit tests!
node0 = ones_only_seven[-1]
node1 = np.intersect1d(node0, ones_only_seven[-2])
node2 = np.intersect1d(node1, ones_only_seven[-3])
node3 = np.intersect1d(node1, ones_only_seven[-4])
node4 = np.intersect1d(node0, ones_only_seven[-5])
node5 = np.intersect1d(node4, ones_only_seven[-6])
node6 = np.intersect1d(node4, ones_only_seven[-7])
intersected_nodes_seven = [node0, node1, node2, node3, node4, node5, node6]
for idx, node in enumerate(intersected_nodes_seven):
print("node" + str(idx), node)
rit_output = reduce(np.union1d, (node2, node3, node5, node6))
rit_output
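# (Added sketch, not part of irf_utils) A hypothetical helper that generalizes the
# manual depth-3 binary intersection tree above: each node is intersected with a
# randomly drawn class-1 feature path, and the leaves are unioned at the end.
def intersect_paths(paths, depth=3, seed=0):
    rng = np.random.RandomState(seed)
    nodes = [paths[rng.randint(len(paths))]]          # root: one random path
    for _ in range(depth - 1):
        nodes = [np.intersect1d(node, paths[rng.randint(len(paths))])
                 for node in nodes
                 for _child in (0, 1)]                # two children per node
    return reduce(np.union1d, nodes)

print(intersect_paths(ones_only_seven))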
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
raw_data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
raw_data.data, raw_data.target, train_size=0.9,
random_state=2017)
rf = RandomForestClassifier(
n_estimators=3, random_state=2018)
rf.fit(X=X_train, y=y_train)
estimator0 = rf.estimators_[0]
estimator0_out = irf_utils.get_tree_data(X_train=X_train,
dtree=estimator0,
root_node_id=0)
print(estimator0_out['all_leaf_nodes'])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Check out the data
Step3: Step 2
Step4: Design the single function to get the key tree information
Step5: Decision Tree 0 (First) - Get output
Step6: Step 3
Step7: Step 3.2 Display Feature Importances Graphically (just for interest)
Step8: Putting it all together
Step9: Examine Individual Decision Tree Output
Step10: Check the final dictionary of outputs
Step11: Now we can start setting up the RIT class
Step12: Get the leaf node 1's paths
|
15,391 | <ASSISTANT_TASK:>
Python Code:
# Import some libraries
import numpy as np
import math
from test_helper import Test
from scipy import sparse
# Define data file
ratingsFilename = 'u.data'
f = open(ratingsFilename)
rawRatings = f.readlines()
# Check file format
print rawRatings[:10]
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
formatRatings = # FILL IN
print formatRatings[:10]
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(formatRatings[0], ['196', '242', '3'], 'incorrect result: data are incorrectly formatted')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# 1. Extract from the formatted data the list of users ids, item ids and the ratings.
# Check zip(*) function to unzip the formatRatings variable
[user, item, rating] = # <FILL IN>
# 2. Convert the elements of each list to integer values and build an numpy array with the resulting list
# Due to user, movie and ratign are python lists, you will have to apply the int() operator element by element,
# check the list comprehension syntaxis to speed up this
user = # <FILL IN>
item = # <FILL IN>
rating = # <FILL IN>
# 3. Calculate the number of users, item and ratings
number_users = # <FILL IN>
number_items = # <FILL IN>
number_ratings = # <FILL IN>
print number_users
print number_items
print number_ratings
# 4. Build the coo matrix (take into account that user and item ids start from 1 and python indexing starts in zero)
rating_matrix = sparse.coo_matrix((rating, (user-1, item-1)), shape=(number_users, number_items)).tocsr()
###########################################################
# TEST CELL
###########################################################
Test.assertEquals((np.sum(user), np.sum(item) , np.sum(rating)), (46248475, 42553013, 352986), 'incorrect result: user, item or rating is incorrect')
Test.assertEquals(np.round(rating_matrix.mean(),2), 0.22, 'incorrect result: user, item or rating is incorrect')
rating_matrix = rating_matrix.tocsr()
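# (Added sketch, not part of the lab) Tiny illustration of why COO is built first and
# then converted: CSR supports fast row slicing (per-user access), while COO does not.
toy = sparse.coo_matrix(([5, 3, 1], ([0, 0, 2], [1, 3, 0])), shape=(3, 4))
toy_csr = toy.tocsr()
print(toy_csr[0].toarray())  # ratings of "user" 0 -> [[0 5 0 3]]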
%matplotlib inline
from matplotlib import pyplot as plt
imagedata = rating_matrix[:200, :200].toarray()
# Review the method .toarray(), we will use it later
plt.imshow(imagedata, interpolation='nearest')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# 1. Compute the number items of rated by each user
n_rat_user = # <FILL IN>
plt.figure(1, figsize=(4, 4))
hist_user = plt.hist(n_rat_user, histtype='bar', rwidth=0.8)
# 2. Compute the number ratings for each item
n_rat_item = # <FILL IN>
plt.figure(2, figsize=(4, 4))
hist_item = plt.hist(n_rat_item, histtype='bar', rwidth=0.8)
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(hist_user[0][0], 560, 'incorrect result: histogram over the number of user ratings is incorrect')
Test.assertEquals(hist_item[0][0], 1146, 'incorrect result: histogram over the number of ratings by item is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# From the reduced rating matrix extract the users, items and ratings. Use the sparse.find() method
[users, items, ratings] = #<FILL IN>
# Get the number of ratings
number_ratings = #<FILL IN>
# Compute the number of training ratings as the 75% of the total number of ratings
np.random.seed(0)
n_tr = #<FILL IN>
# Create a permuted range from 0 to the number of ratings
random_pos = #<FILL IN>
# Select the first n_tr positions of random_pos as the training positions,
# and the remaining random_pos indexes and the positions of the testing ratings
pos_tr = #<FILL IN>
pos_test = #<FILL IN>
# Build the training and testing rating matrices
# Create a coo_matrix and, then, convert it to csr format
# Note that the number of users and items has not changed
rating_train = #<FILL IN>
rating_test = #<FILL IN>
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(n_tr, 75000, 'incorrect result: n_tr value is incorrect')
Test.assertEquals(np.sum(random_pos[:5]), 142525, 'incorrect result: random_pos values are incorrect')
Test.assertEquals(np.round(rating_train.mean(),2), 0.17, 'incorrect result: the values of the training rating matrix are incorrect')
Test.assertEquals(np.round(rating_test.mean(),2), 0.06, 'incorrect result: the values of the testing rating matrix are incorrect')
np.random.seed(0)
all_users = np.random.permutation(number_users)
test_users = all_users[:50]
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
def mean_recomender(user_id, item_ids, ratings):
    """Calculate the output of a mean based recommender
    Args:
        user_id: id of the user to compute its output
        item_ids: numpy array with the ids of the items whose rating has to be estimated
        ratings: csr matrix with the ratings of all the users to the rated items
    Returns:
        pred_items: numpy array of dimensions 1 x number of items with the computed predictions for each item.
            This prediction is computed as the mean value of the items which the user_id has already rated.
    """
# Compute the number of items in the rating matrix
n_items = #<FILL IN>
# Select the values of the items rated by the user_id
ratings_u = #<FILL IN>
# Compute the mean value of the ratings_u
if ratings_u.shape[0]>0:
mean_rating = #<FILL IN>
else: # Cold start problem (average rating)
mean_rating = 3
# Generate a numpy vector of dimensions 1 x n_items with all their values equal to mean_rating
pred_items = #<FILL IN>
return pred_items
# Testing function mean_recomender()
user_id = 20
item_ids = np.array([8, 0, 100])
pred_mean = mean_recomender(user_id, item_ids, rating_train)
print pred_mean
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(pred_mean.shape, (1,3), 'incorrect result: pred_mean shape is incorrect')
Test.assertEquals(np.round(np.sum(pred_mean),2), 8.3, 'incorrect result: pred_mean values are incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
def mode_recomender(user_id, item_ids, ratings):
    """Calculate the output of a mode based recommender
    Args:
        user_id: id of the user to compute its output
        item_ids: numpy array with the ids of the items whose rating has to be estimated
        ratings: csr matrix with the ratings of all the users to the rated items
    Returns:
        pred_items: numpy array of dimensions 1 x number of items with the computed predictions for each item.
            This prediction is computed as the mode value of the items which the user_id has already rated.
    """
# Compute the number of items in the rating matrix
n_items = #<FILL IN>
# Select the values of the items rated by the user_id
ratings_u = #<FILL IN>
# Compute the mean value of the ratings_u
if ratings_u.shape[0]>0:
mode_rating = #<FILL IN>
else: # Cold start problem (average rating)
mode_rating = 3
# Generate a numpy vector of dimensions 1 x n_items with all their values equal to mean_rating
pred_items = #<FILL IN>
return pred_items
# Testing function mode_recomender()
user_id = 20
item_ids = np.array([8, 0, 100])
pred_mode = mode_recomender(user_id, item_ids, rating_train)
print pred_mode
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(pred_mode.shape, (1,3), 'incorrect result: pred_mean shape is incorrect')
Test.assertEquals(np.round(np.sum(pred_mode),2), 9, 'incorrect result: pred_mean values are incorrect')
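# (Added hint, not part of the original lab) scipy.stats.mode is one way to obtain the
# most frequent value of a 1-D array, which is what mode_recomender() needs; adapt as
# required inside the function above.
from scipy import stats
example_ratings = np.array([3, 4, 4, 5, 4])
print(stats.mode(example_ratings)[0])  # -> [4]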
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
def get_MAE(pred_rating, real_rating):
    """Calculate the MAE
    Args:
        pred_rating: csr matrix, with dimensions n_users x n_items, with the predicted ratings.
        real_rating: csr matrix, with dimensions n_users x n_items, with the real ratings.
    Returns:
        MAE: Mean Absolute Error computed over the non-zero entries of real_rating.
    """
# Extract the non-zero positions of real_rating and their values (use sparse.find() method)
[pos_users, pos_items, real_values] = # <FILL IN>
# Extract the predicted values of the non-zero positions
pred_values = # <FILL IN>
# Compute the MAE (check np.absolute method)
MAE = # <FILL IN>
return MAE
###########################################################
# TEST CELL
###########################################################
matrix_1 = sparse.eye(10).tocsr()
matrix_2 = (1.2*sparse.eye(10)).tocsr()
matrix_2[0,0]= 0.4
Test.assertEquals(np.round(get_MAE(matrix_1, matrix_2),2), 0.24, 'incorrect result: MAE value is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
def get_RMSE(pred_rating, real_rating):
    """Calculate the RMSE
    Args:
        pred_rating: csr matrix, with dimensions n_users x n_items, with the predicted ratings.
        real_rating: csr matrix, with dimensions n_users x n_items, with the real ratings.
    Returns:
        RMSE: Root Mean Square Error computed over the non-zero entries of real_rating.
    """
# Extract the non-zero positions of real_rating and their values (use sparse.find() method)
[pos_users, pos_items, real_values] = # <FILL IN>
# Extract the predicted values of the non-zero positions
pred_values = # <FILL IN>
# Compute the RMSE (check np.sqrt and np.square methods)
RMSE = # <FILL IN>
return RMSE
###########################################################
# TEST CELL
###########################################################
matrix_1 = sparse.eye(10).tocsr()
matrix_2 = (1.2*sparse.eye(10)).tocsr()
matrix_2[0,0]= 0.4
Test.assertEquals(np.round(get_RMSE(matrix_1, matrix_2),2), 0.27, 'incorrect result: RMSE value is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Compute the number of users and items
n_users, n_items = # <FILL IN>
# Create two empty prediction matrix in crs format
pred_mean_ratings = sparse.lil_matrix((n_users, n_items))
pred_mode_ratings = sparse.lil_matrix((n_users, n_items))
# Work user by user
for u in range(n_users):
# Get, form the test matrix, the item id to be predicted for this user (check .indices attribute of crs matrix)
item_ids = # <FILL IN>
# Get predictions with the mean based baseline for user u
pred_mean_u = # <FILL IN>
# Get predictions with the mode based baseline for user u
pred_mode_u = # <FILL IN>
# Build the prediction matrices
pred_mean_ratings[u,item_ids] = # <FILL IN>
pred_mode_ratings[u,item_ids] = # <FILL IN>
#Compute the error (MAE and RMSE) for each baseline method over the test_users
MAE_mean = # <FILL IN>
RMSE_mean = # <FILL IN>
MAE_mode = # <FILL IN>
RMSE_mode = # <FILL IN>
print 'Mean model ... MAE: %2.2f , RMSE: %2.2f ' % (MAE_mean, RMSE_mean)
print 'Mode model ... MAE: %2.2f , RMSE: %2.2f ' % (MAE_mode, RMSE_mode)
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(np.round(MAE_mean,2), 0.84, 'incorrect result: MAE value of mean recommeder is incorrect')
Test.assertEquals(np.round(RMSE_mean,2), 1.04, 'incorrect result: RMSE value of mean recommeder is incorrect')
Test.assertEquals(np.round(MAE_mode,2), 0.86, 'incorrect result: MAE value of mode recommeder is incorrect')
Test.assertEquals(np.round(RMSE_mode,2), 1.19, 'incorrect result: RMSE value of mode recommeder is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
def compute_Pearson_correlation(ratings, id_u1, id_u2):
    """Calculate the correlation coefficient
    Args:
        ratings: csr matrix, with dimensions n_users x n_items, with the ratings used to measure similarities.
        id_u1: id of user 1
        id_u2: id of user 2
    Returns:
        corr_value: correlation coefficient
    """
# Get the indexes and values of the items rated by user 1 (use sparse.find() function)
[pos_u1, items_u1, values_u1] = # <FILL IN>
# Get the indexes and values of the items rated by user 2 (use sparse.find() function)
[pos_u2, items_u2, values_u2] = # <FILL IN>
# Get the set of items rated by both users (you can use np.intersect1d() method)
items_intersect = # <FILL IN>
if items_intersect is not None: # If the are common rated items...
# Compute the mean values of all the items rated by user 1 and user 2
m_1 = # <FILL IN>
m_2 = # <FILL IN>
# Get the ratings of users 1 and 2 in items_intersect (you can use .toarray() method)
r_u1 = # <FILL IN>
r_u2 = # <FILL IN>
# Remove their means
r_u1 = # <FILL IN>
r_u2 = # <FILL IN>
# Compute the correlation coefficient
corr_value = # <FILL IN>
# Remove useless dimensions
corr_value =np.squeeze(corr_value)
else: # Else correlation is 0
corr_value = 0
# Checking that the correlation is not NaN (this would happen if the denominatior is 0),
# in this case, set the corrlation coefficient to 0
if math.isnan(corr_value):
corr_value = 0
return corr_value
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(np.round(compute_Pearson_correlation(rating_train, 5, 12),2), 0.36, 'incorrect result: correlation value is incorrect')
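# (Added sketch, not part of the lab) Quick sanity check of the Pearson-correlation idea
# on two small dense rating vectors using numpy's built-in np.corrcoef.
a = np.array([4.0, 2.0, 5.0, 1.0])
b = np.array([5.0, 1.0, 4.0, 2.0])
print(np.corrcoef(a, b)[0, 1])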
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
def user_sim_recommender(user_id, item_ids, ratings):
    """Compute the recommendations for user_id over the item_ids with a user based collaborative filtering approach
    Args:
        user_id: id of the user to compute its output
        item_ids: numpy array with the ids of the items whose rating has to be estimated
        ratings: csr matrix with the ratings of all the users to the rated items
    Returns:
        pred_items: numpy array of dimensions 1 x number of items with the computed predictions for each item.
    """
# Get number of users
n_users = #<FILL IN>
# Get the number of items in items_id
n_items = #<FILL IN>
# Create variables to save incremental versions of numerator and denominator
rating_w_acc = np.zeros((1,n_items)) # Numerator (for each item there is a value)
sim_acc = 0 # Denominator
# Build a reduced matrix of ratings with only the columns corresponding to item_ids
ratings_items = #<FILL IN>
# Now we move user by user and compute the corresponding term of the numerator and denominator
for id_u in range(n_users):
# Compute the similarity of user_id with id_u
sim = #<FILL IN>
# If there is similarity ...
if sim>0:
# Get items rated by id_u, among item_ids, and their values
# (use sparse.find() over the row id_u of ratings_items )
[idx_users, pos_ratings_u, ratings_u] = #<FILL IN>
# If id_u has rated items among items_id ...
if pos_ratings_u.shape[0]>0:
# Get the mean value of all the items rated by id_u
mean_id_u = #<FILL IN>
# Update numerator (add term sim*(ratings_u-mean_id_u))
rating_w_acc[:,pos_ratings_u] = rating_w_acc[:,pos_ratings_u] + #<FILL IN>
# Update denominator (add sim)
sim_acc = sim_acc + #<FILL IN>
# Now, that all the terms of numerator and denominator are computed, calculate the predicted values
# 1. Get the mean value of all the items rated by user_id
mean_id_user = #<FILL IN>
# 2. Get predictions
# If this user has similar users (sim_acc>0)...
if sim_acc >0:
# Get predictions with general expresion
pred_items = #<FILL IN>
else: # else (cold start problem)...
# Give predictions as mean value (mean_id_user)
pred_items = #<FILL IN>
return pred_items
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(np.round(np.sum(user_sim_recommender(20, np.array([2, 5, 8]), rating_train))), 9, 'incorrect result: correlation value is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
print 'Please, be patient, this computation takes a while... '
# Compute the number of users and items
n_users, n_items = # <FILL IN>
# Create an empty prediction matrix in crs format
pred_ratings = sparse.lil_matrix((n_users, n_items))
# Work user by user
for u in test_users:
# Get, form the test matrix, the item id to be predicted for this user (check .indices attribute of crs matrix)
item_ids = # <FILL IN>
# Get predictions with the used based CF method for user u
pred_u = # <FILL IN>
# Build the prediction matrix
pred_ratings[u,item_ids] = # <FILL IN>
# Compute the error (MAE and RMSE) over test_users
MAE = # <FILL IN>
RMSE = # <FILL IN>
print 'MAE: %2.2f , RMSE: %2.2f ' %(MAE, RMSE)
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(np.round(MAE,2), 0.82, 'incorrect result: MAE value is incorrect')
Test.assertEquals(np.round(RMSE,2), 1.02, 'incorrect result: RMSE value is incorrect')
moviesFilename = 'u.item'
f = open(moviesFilename)
rawMovies = f.readlines()
print rawMovies[:5]
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
formatMovies = #<FILL IN>
print formatMovies[:5]
###########################################################
# TEST CELL
###########################################################
Test.assertEquals((len(formatMovies), len(formatMovies[0])), (1682,2), 'incorrect result: formatMovies dimensions are incorrect')
Test.assertEquals(formatMovies[10], ['11', 'Seven (Se7en) (1995)'], 'incorrect result: formatMovies content is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# 1. Compute the predictions, over all the items, that the user based system would provide for the user with id 20
user_id = 20
# Define the list of items as a list with all the item ids
item_ids = #<FILL IN>
# Get the predicitions (use user_sim_recommender() function)
list_pred = #<FILL IN>
# Remove useless dimensions of list_pred
list_pred = np.squeeze(list_pred)
# 2. Sort the list of predicted ratings, placing the highest ratings at the first
pos_ord = #<FILL IN>
# 3. Print the film titles with the ten highest ratings
for i in range(10):
# Get the id of the movie sorted at position i
id_movie = #<FILL IN>
print '%d: %s with rating %2.2f' %(i+1, formatMovies[id_movie][1], list_pred[id_movie])
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(np.round(sum(list_pred[:10]),2), 37.63, 'incorrect result: list_pred is incorrect')
Test.assertEquals(sum(pos_ord[:10]), 1579, 'incorrect result: pos_ord is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
from sklearn.linear_model import Ridge
def train_ALS(ratings, lambda_, n_factors):
    """Compute the latent factors of the ALS algorithm
    Args:
        ratings: csr matrix with the ratings of all the users to the rated items
        lambda_: regularization parameter
        n_factors: number of latent factors
    Returns:
        X, Y: latent factor matrices of users and items
    """
# Parameters
n_iterations = 20
# Get the number of users and items
n_users, n_items = # <FILL IN>
# Random initialization of latent factors
np.random.seed(0)
X = 5 * np.random.rand(n_users, n_factors)
Y = 5 * np.random.rand(n_factors, n_items)
# Define the classifier
clf = Ridge(alpha=lambda_, fit_intercept=False, max_iter=100,tol=0.01)
for ii in range(n_iterations):
for u in range(n_users):
# From ratings matrix get the rated items by user u (use toarray() method)
# Use np.squeeze to remove useless dimensions of r_u
r_u = # <FILL IN>
# Let's create an index matrix indicating the positions where there is or there isn't a rating
w_u = # <FILL IN>
# Solve the optimization problem
# Find X_u to minimize (w_u*(r_u-X[u,:]*Y)^2)
clf.fit(Y.T, r_u.T,w_u.T)
# Get the coefficients computed by the model and add it to the latent factor matrix
X[u,:] = # <FILL IN>
for i in range(n_items):
# From ratings matrix get the rating corresponding to item i (use toarray() method)
# Use np.squeeze to remove useless dimensions of r_i
r_i = # <FILL IN>
# Let's create an index matrix indicating the positions where there is or there isn't a rating
w_i = # <FILL IN>
# Solve the optimization problem
# Find Y_i to minimize (w_i*(r_i-X*Y[i,:])^2)
clf.fit(X, r_i,w_i)
# Get the coefficients computed by the model and add it to the latent factor matrix
Y[:,i] = # <FILL IN>
# To analyze error evolution
# Get predictions (use np.dot to multiply latent factor matrices)
pred_ratings = # <FILL IN>
# Compute the error (MAE and RMSE)
MAE = # <FILL IN>
RMSE = # <FILL IN>
print 'Iteration: %d, MAE: %2.2f , RMSE: %2.2f ' % (ii, MAE, RMSE)
return X, Y
# Test the ALS funtion
# parameters
lambda_ = 10
n_factors = 10
# Train the ALS model
X, Y = train_ALS(rating_train, lambda_, n_factors)
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(np.round(np.mean(X),2), 0.13, 'incorrect result: X values are incorrect')
Test.assertEquals(np.round(np.mean(Y),2), 0.08, 'incorrect result: Y values are incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Get predictions (use np.dot to multiply latent factor matrices)
pred_ratings = # <FILL IN>
# Compute the error (MAE and RMSE) over test_users
MAE = # <FILL IN>
RMSE = # <FILL IN>
print 'MAE: %2.2f , RMSE: %2.2f ' % (MAE, RMSE)
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(np.round(MAE,2), 0.78, 'incorrect result: MAE value is incorrect')
Test.assertEquals(np.round(RMSE,2), 1, 'incorrect result: RMSE value is incorrect')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Formatting the data
Step2: Create rating matrix
Step3: Finally, let's transform the rating matrix into a Compressed Sparse Row (CSR) format, since we will have to acces to the users values (read rows), so this format facilitates this type of access.
Step4: Data visualization
Step5: The above image also shows that
Step6: Creating training and test rating matrices
Step7: To speed up the evaluations of the recommender systems, let's create a reduced set with 50 testing users. Of course, you can later extend the method evaluation to all the users, although the evaluation of some methods can take several minutes.
Step9: Baseline recommenders
Step11: 2. Mode based baseline
Step14: Performance evaluation
Step15: Now, let's evaluate the performance of the mean and mode based baselines.
Step17: User based recommendations
Step19: Step 2
Step20: Now, let's evaluate the performance of this recommender over all the users
Step21: Advance work
Step22: Each line in the dataset is formatted as
Step23: Once we have the movie information, let's analyze the set of ten movies that we would recommend to the user with 20. Complete the following cell, following the instructions...
Step25: Alternative Least Squares algorithm (optional exercise)
Step26: Now, let's compute the error over the test data
|
15,392 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.neighbors.kde import KernelDensity
%matplotlib inline
data = pd.read_csv("../data/glass.csv", index_col=False,names=["class"] + list(range(8)))
data_features = [x for x in range(8)]
classes = np.unique(data["class"])
data.head()
# data = pd.read_hdf('../data/sdss.h5', 'sdss')
# data.head()
# target_col = 'class'
# data_features = ['psfMag_r_w14', 'psf_u_g_w14', 'psf_g_r_w14', 'psf_r_i_w14',
# 'psf_i_z_w14', 'petroMag_r_w14', 'petro_u_g_w14', 'petro_g_r_w14',
# 'petro_r_i_w14', 'petro_i_z_w14', 'petroRad_r']
#h = 1/np.sqrt(0.02) # Bandwidth coming from Alasdair's SVM experiments
def percentile_pairwise_distance(X, Y=None):
if Y is None: Y = X
distances = metrics.pairwise_distances(X, Y)
return np.percentile(distances, 20)
h = percentile_pairwise_distance(data[data_features].values)
print("Bandwidth:", h)
num_data = len(data)
idx_all = np.random.permutation(num_data)
num_train = int(np.floor(0.7*num_data))
idx_train = idx_all[:num_train]
idx_test = idx_all[num_train:]
kde = KernelDensity(kernel='gaussian', bandwidth=h, rtol=1e-5)
Xtrain = data[data_features].ix[idx_train]
kde.fit(Xtrain)
Xtest = data[data_features].ix[idx_test]
pred = kde.score_samples(Xtest)
_ = plt.hist(pred, bins=50)
idx_sort = np.argsort(pred)
pred[idx_sort[:10]]
idx_outlier = idx_test[np.where(pred < -7)]
data.ix[idx_outlier]
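# (Added sketch) Instead of the hand-picked -7 cutoff, a percentile-based threshold can
# be used so that a fixed fraction of the test set is flagged; the 2% level is an
# arbitrary assumption.
threshold = np.percentile(pred, 2)
idx_outlier_pct = idx_test[np.where(pred < threshold)]
print("threshold:", threshold, "n_outliers:", len(idx_outlier_pct))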
densities = {}
for cl in classes:
Xtrain_cl = Xtrain[data["class"]==cl]
densities[cl] = KernelDensity(kernel='gaussian', bandwidth=h, rtol=1e-5)
densities[cl].fit(Xtrain_cl)
class_pred = {}
for cl in classes:
class_pred[cl] = densities[cl].score_samples(Xtest)
class_pred[cl] -= pred
fig = plt.figure(figsize=(16,10))
ax = fig.add_subplot(231)
_ = ax.hist(class_pred[1], 30)
ax = fig.add_subplot(232)
_ = ax.hist(class_pred[2], 30)
ax = fig.add_subplot(233)
_ = ax.hist(class_pred[3], 30)
ax = fig.add_subplot(234)
_ = ax.hist(class_pred[5], 30)
ax = fig.add_subplot(235)
_ = ax.hist(class_pred[6], 30)
ax = fig.add_subplot(236)
_ = ax.hist(class_pred[7], 30)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load sample glass data.
Step2: Read SDSS data, preprocessed by colour indices and reddening correction
Step3: Use the same features as reported in Alasdair Tran's Honours thesis 2015.
Step4: Bandwidth Selection
Step5: (TODO) Define the training, validation, and test sets, and select appropriate Gaussian kernel bandwidth. Use sklearn's grid search to find a good bandwidth.
Step6: Estimate a kernel density estimator on the training set
Step7: Use the fitted density to estimate the log density for all items in the test set
Step8: Choose an appropriate threshold for identifying outliers
Step9: Identify the outliers in the dataset. (TODO) Export or visualise appropriately for getting feedback from the astronomers.
Step10: Calculate class-specific densities
Step11: Discussion
|
15,393 | <ASSISTANT_TASK:>
Python Code:
from IPython.display import HTML
url = (
"https://cordc.ucsd.edu/projects/mapping/maps/fullpage.php?"
"ll=29.061888,-87.373643&"
"zm=7&"
"mt=&"
"rng=0.00,50.00&"
"us=1&"
"cs=4&"
"res=6km_h&"
"ol=3&"
"cp=1"
)
iframe = (
'<iframe src="{src}" width="750" height="450" style="border:none;"></iframe>'.format
)
HTML(iframe(src=url))
import xarray as xr
url = (
"http://hfrnet-tds.ucsd.edu/thredds/dodsC/HFR/USEGC/6km/hourly/RTV/"
"HFRADAR_US_East_and_Gulf_Coast_6km_Resolution_Hourly_RTV_best.ncd"
)
ds = xr.open_dataset(url)
ds
dx = dy = 2.25 # Area around the point of interest.
center = -87.373643, 29.061888 # Point of interest.
dsw = ds.sel(time=slice("2017-07-20", "2017-07-27"))
dsw = dsw.sel(
lon=(dsw.lon < center[0] + dx) & (dsw.lon > center[0] - dx),
lat=(dsw.lat < center[1] + dy) & (dsw.lat > center[1] - dy),
)
resampled = dsw.resample(indexer={"time": "1H"})
avg = resampled.mean(dim="time")
import numpy.ma as ma
v = avg["v"].data
u = avg["u"].data
time = avg["time"].to_index().to_pydatetime()
u = ma.masked_invalid(u)
v = ma.masked_invalid(v)
i, j, k = u.shape
u = u.reshape(i, j * k).mean(axis=1)
v = v.reshape(i, j * k).mean(axis=1)
%matplotlib inline
import matplotlib.pyplot as plt
from oceans.plotting import stick_plot
fig, ax = plt.subplots(figsize=(11, 2.75))
q = stick_plot(time, u, v, ax=ax)
ref = 0.5
qk = plt.quiverkey(
q,
0.1,
0.85,
ref,
"{} {}".format(ref, ds["u"].units),
labelpos="N",
coordinates="axes",
)
_ = plt.xticks(rotation=70)
from datetime import date, timedelta
yesterday = date.today() - timedelta(days=1)
dsy = ds.sel(time=yesterday)
u = dsy["u"].data
v = dsy["v"].data
lon = dsy.coords["lon"].data
lat = dsy.coords["lat"].data
time = dsy.coords["time"].data
import numpy as np
from oceans.ocfis import spdir2uv, uv2spdir
angle, speed = uv2spdir(u, v)
us, vs = spdir2uv(np.ones_like(speed), angle, deg=True)
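# (Added sketch) For clarity: the two `oceans` helpers above boil down to normalizing
# the velocity components so direction is preserved and magnitude becomes 1. A
# plain-numpy equivalent (zero-speed cells become NaN/inf and are simply not drawn):
speed_np = np.hypot(u, v)
us_np = u / speed_np
vs_np = v / speed_np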
import cartopy.crs as ccrs
from cartopy import feature
from cartopy.mpl.gridliner import LATITUDE_FORMATTER, LONGITUDE_FORMATTER
LAND = feature.NaturalEarthFeature(
"physical", "land", "10m", edgecolor="face", facecolor="lightgray"
)
sub = 2
bbox = lon.min(), lon.max(), lat.min(), lat.max()
fig, ax = plt.subplots(figsize=(9, 9), subplot_kw=dict(projection=ccrs.PlateCarree()))
ax.set_extent([center[0] - dx - dx, center[0] + dx, center[1] - dy, center[1] + dy])
vmin, vmax = np.nanmin(speed[::sub, ::sub]), np.nanmax(speed[::sub, ::sub])
speed_clipped = np.clip(speed[::sub, ::sub], 0, 0.65)
ax.quiver(
lon[::sub], lat[::sub], us[::sub, ::sub], vs[::sub, ::sub], speed_clipped, scale=30,
)
# Deepwater Horizon site.
ax.plot(-88.365997, 28.736628, marker="o", color="crimson")
gl = ax.gridlines(draw_labels=True)
gl.xlabels_top = gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
feature = ax.add_feature(LAND, zorder=0, edgecolor="black")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The interactive interface is handy for exploration, but we usually need to download the data "mechanically" (programmatically) in order to use it in our analyses and plots, or to extract time-series.
Step2: How about extracting a week time-series from the dataset averaged around the area of interest?
Step3: With xarray we can average hourly (resample) the whole dataset with one method call.
Step4: Now all we have to do is mask the missing data with NaNs and average over the area.
Step5: To close this post, let us reproduce the HF radar DAC image from above, but using yesterday's data.
Step6: Now that we have singled out the date and time we want, we trigger the download by accessing the data with xarray's .data property.
Step7: The cell below computes the speed from the velocity. We can use the speed computation to color code the vectors. Note that we re-create the vector velocity preserving the direction but using intensity of 1. (The same visualization technique used in the HF radar DAC.)
Step8: Now we can create a matplotlib figure displaying the data.
|
15,394 | <ASSISTANT_TASK:>
Python Code:
import datetime as dt
import gurobipy as gbp
import IPython.display as IPd
import os
import platform
import pysal as ps
import sys
import time
# Local path on user's machine
path = '/Users/jgaboardi/SHRUG__15/'
print dir(ps)
print dir(ps.Network)
print dir(gbp)
# Manual Conceptual Model
My_Manual = IPd.Image(path+'/Manual_Diagram.png')
My_Manual
# Automated Conceptual Model
My_Auto = IPd.Image(path+'/Auto_Diagram.png')
My_Auto
from collections import OrderedDict
import networkx as nx
import numpy as np
import shapefile as shp
%pylab inline
print '********************************************************'
print ' | Platform Specs: |'
print ' | | OS X v', platform.mac_ver()[0],' |'
print ' | | Processor: ', platform.processor(), ' |'
print ' | | Machine: ', platform.machine(), ' |'
print ' | | Python: ', platform.python_version(), ' |'
print ' | | PySAL: ', ps.version, ' |'
print ' | | Gurobi: ', gbp.gurobi.version(),' |'
print '********************************************************'
print ' | | Date/Time --------- ', dt.datetime.now(), '|'
print '********************************************************'
# Instanitate network of Waverly Hills
ntw = ps.Network(path+'Waverly/Waverly.shp')
# Avon Circle
Avon_Cir = IPd.Image(path+'/Avon.jpg')
Avon_Cir
# Millstream Road
Millstream_Rd = IPd.Image(path+'/Millstream.jpg')
Millstream_Rd
# Roads and Nodes
g = nx.Graph()
# Graph of Roads and Nodes
g1 = nx.MultiGraph()
# Clients
GRAPH_client = nx.Graph()
# Snapped Clients
g_client = nx.Graph()
# Service
GRAPH_service = nx.Graph()
# Snapped Service
g_service = nx.Graph()
## Optimized Locations
# Set Cover
setcover_g = nx.Graph()
# p-Median
median_g = nx.Graph()
# p-Center
center_g = nx.Graph()
# p-Center Manual
center_g_man = nx.Graph()
# Instantiate the Waverly Hills neighborhood shapefile
shp_W = ps.open(path+'Waverly/Waverly.shp')
# Create a bounding box of the shapefile
shp_W.bbox
# Client latitude
np.random.seed(850)
lat_client = np.random.uniform(shp_W.bbox[0], shp_W.bbox[2], 100)
np.random.seed(352)
# Client longitude
lon_client = np.random.uniform(shp_W.bbox[1], shp_W.bbox[3], 100)
np.random.seed(904)
# Service latitude
lat_service = np.random.uniform(shp_W.bbox[0], shp_W.bbox[2], 15)
np.random.seed(407)
# Service longitude
lon_service = np.random.uniform(shp_W.bbox[1], shp_W.bbox[3], 15)
# Client lat/lon coordinates
rand_coords_client = map(list, zip(lat_client, lon_client))
# Service lat/lon coordinates
rand_coords_service = map(list, zip(lat_service, lon_service))
# Empty Clients dictionary
points_client = {}
# Empty Service dictionary
points_service = {}
# CLIENT with {id: [lon, lat], } dictionary format
for idx, coords in enumerate(rand_coords_client):
GRAPH_client.add_node(idx)
points_client[idx] = coords
GRAPH_client.node[idx] = coords
# SERVICE with {id: [lon, lat], } dictionary format
for idx, coords in enumerate(rand_coords_service):
GRAPH_service.add_node(idx)
points_service[idx] = coords
GRAPH_service.node[idx] = coords
print dt.datetime.now()
#Instantiate Figure
figsize(10,11)
#Draw Graph of Actual Nodes and Roads
for e in ntw.edges:
g.add_edge(*e)
nx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)
#Draw only unique edges in graph
for e in ntw.graphedges:
g1.add_edge(*e)
# highlights cases where start and end node are the same
if e[0]==e[1]:
g1.add_node(e[0])
for node_id in g1.node:
g1.node[node_id] = ntw.node_coords[node_id]
nx.draw(g1, ntw.node_coords, node_size=20, alpha=0.5)
# Draw Graph of Random Client Points
nx.draw(GRAPH_client, points_client,
node_size=75, alpha=1, node_color='b')
# Draw Graph of Random Service Points
nx.draw(GRAPH_service, points_service,
node_size=75, alpha=1, node_color='c')
# Legend (Ordered Dictionary) from collections
LEGEND = OrderedDict()
LEGEND['Network Nodes']=g
LEGEND['Roads']=g
LEGEND['Graph Vertices']=g1
LEGEND['Graph Edges']=g1
LEGEND['Client Nodes']=GRAPH_client
LEGEND['Service Nodes']=GRAPH_service
legend(LEGEND,
loc='lower right',
fancybox=True,
framealpha=0.5,
scatterpoints=1)
# Title
title('Waverly Hills\nTallahassee, Florida', family='Times New Roman',
size=40, color='k', backgroundcolor='w', weight='bold')
# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.
arrow(-84.281, 30.49, 0.0, 0.005, width=.0003, head_width=0.001,
head_length=0.0015, fc='k', ec='k',alpha=0.75,)
annotate('N', xy=(-84.2815, 30.498), fontstyle='italic', fontsize='xx-large',
fontweight='heavy', alpha=0.75)
print '\nNodes in original network: ', len(ntw.nodes)
print 'Edges in original network: ', len(ntw.edges)
print '\nNodes in simplified network: ', len(g1.node)
print 'Edges in simplified network: ', len(ntw.graphedges)
# List of coords by key
LC = []
for i,j in g1.edges():
if i in g1.node and j in g1.node:
x=[list(g1.node[i]), list(g1.node[j])]
x = list(x)
LC.append(x)
lc = [LC]
SimpGraph = shp.Writer(shp.POLYLINE)
# Add Edges [k] in list of edges by end node coord
for k in lc:
SimpGraph.poly(shapeType=shp.POLYLINE, parts=k)
# Add Fields
SimpGraph.field('Graph_ID')
counter = 0
for i in range(len(g1.node)):
counter = counter + 1
SimpGraph.record(counter)
# Save Shapefile
SimpGraph.save(path+'Waverly/Simplified_Waverly.shp')
# Client Weights for demand
np.random.seed(850)
Ai = np.random.randint(1, 5, len(rand_coords_client))
Ai = Ai.reshape(len(Ai),1)
# Sum of Weights (Total Demand)
AiSum = np.sum(Ai)
# Client Shapefile
client = shp.Writer(shp.POINT)
# Add Random Points
for i,j in rand_coords_client:
client.point(i,j)
# Add Fields
client.field('client_ID')
client.field('Weight')
client.field('LAT')
client.field('LON')
counter = 0
for i in range(len(rand_coords_client)):
counter = counter + 1
client.record('client_' + str(counter), Ai[i], lat_client[i], lon_client[i])
# Save Shapefile
client.save(path+'Simulated/RandomPoints_CLIENT')
#Service Shapefile
service = shp.Writer(shp.POINT)
# Add Random Points
for i,j in rand_coords_service:
service.point(i,j)
# Add Fields
service.field('y_ID')
service.field('x_ID')
service.field('LAT')
service.field('LON')
counter = 0
for i in range(len(rand_coords_service)):
counter = counter + 1
service.record('y' + str(counter), 'x' + str(counter), lat_service[i], lon_service[i])
# Save Shapefile
service.save(path+'Simulated/RandomPoints_SERVICE')
SimNet = ps.Network(path+'Waverly/Simplified_Waverly.shp')
t1 = time.time()
Snap_C = ntw.snapobservations(path+'Simulated/RandomPoints_CLIENT.shp',
'Rand_Points_CLIENT', attribute=True)
Snap_S = ntw.snapobservations(path+'Simulated/RandomPoints_SERVICE.shp',
'Rand_Points_SERVICE', attribute=True)
print round(time.time()-t1, 4), 'seconds'
t1 = time.time()
Snap_C = SimNet.snapobservations(path+'Simulated/RandomPoints_CLIENT.shp',
'Rand_Points_CLIENT', attribute=True)
Snap_S = SimNet.snapobservations(path+'Simulated/RandomPoints_SERVICE.shp',
'Rand_Points_SERVICE', attribute=True)
print round(time.time()-t1, 4), 'seconds'
# Instantiate Figure
figsize(10,11)
# Draw Graph of Roads
for e in ntw.edges:
g.add_edge(*e)
nx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)
# Draw Graph of Snapped Client Nodes
g_client = nx.Graph()
for p,coords in ntw.pointpatterns['Rand_Points_CLIENT'].snapped_coordinates.iteritems():
g_client.add_node(p)
g_client.node[p] = coords
nx.draw(g_client, ntw.pointpatterns['Rand_Points_CLIENT'].snapped_coordinates,
node_size=75, alpha=1, node_color='b')
# Draw Graph of Snapped Service Nodes
g_service = nx.Graph()
for p,coords in ntw.pointpatterns['Rand_Points_SERVICE'].snapped_coordinates.iteritems():
g_service.add_node(p)
g_service.node[p] = coords
nx.draw(g_service, ntw.pointpatterns['Rand_Points_SERVICE'].snapped_coordinates,
node_size=75, alpha=1, node_color='c')
# Draw Graph of Random Client Points
nx.draw(GRAPH_client, points_client,
node_size=20, alpha=1, node_color='y')
# Draw Graph of Random Service Points
nx.draw(GRAPH_service, points_service,
node_size=20, alpha=1, node_color='w')
# Legend (Ordered Dictionary)
LEGEND = OrderedDict()
LEGEND['Network Nodes']=g
LEGEND['Roads']=g
LEGEND['Snapped Client']=g_client
LEGEND['Snapped Service']=g_service
LEGEND['Client Nodes']=GRAPH_client
LEGEND['Service Nodes']=GRAPH_service
legend(LEGEND, loc='lower right', fancybox=True, framealpha=0.5, scatterpoints=1)
# Title
title('Waverly Hills\n Tallahassee, Florida', family='Times New Roman',
size=40, color='k', backgroundcolor='w', weight='bold')
# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.
arrow(-84.281, 30.49, 0.0, 0.005, width=.0003, head_width=0.001,
head_length=0.0015, fc='k', ec='k',alpha=0.75,)
annotate('N', xy=(-84.2815, 30.498), fontstyle='italic', fontsize='xx-large',
fontweight='heavy', alpha=0.75)
# Create Lat & Lon dictionaries of the snapped service locations
lat_snapped = []
lon_snapped = []
for i,j in ntw.pointpatterns['Rand_Points_SERVICE'].snapped_coordinates.iteritems():
lat_snapped.append(j[0])
lon_snapped.append(j[1])
# Snapped Service Shapefile
service_SNAP = shp.Writer(shp.POINT)
# Add Points
for i,j in ntw.pointpatterns['Rand_Points_SERVICE'].snapped_coordinates.iteritems():
service_SNAP.point(j[0],j[1])
# Add Fields
service_SNAP.field('y_ID')
service_SNAP.field('x_ID')
service_SNAP.field('LAT')
service_SNAP.field('LON')
counter = 0
for i in range(len(ntw.pointpatterns['Rand_Points_SERVICE'].snapped_coordinates)):
counter = counter + 1
service_SNAP.record('y' + str(counter), 'x' + str(counter), lat_snapped[i], lon_snapped[i])
# Save Shapefile
service_SNAP.save(path+'Snapped/SERVICE_Snapped')
t1 = time.time()
# Define Client to Service Matrix Function
def c_s_matrix():
global All_Dist_MILES
All_Neigh_Dist = SimNet.allneighbordistances(sourcepattern=SimNet.pointpatterns['Rand_Points_CLIENT'],
destpattern=SimNet.pointpatterns['Rand_Points_SERVICE'])
All_Dist_MILES = All_Neigh_Dist * float(10000/90) * 0.6214
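    # Note (added comment): the factor above appears to convert decimal degrees to
    # kilometres (~111 km per degree, written here as 10000/90) and then to miles
    # (* 0.6214); using a projected coordinate system would avoid this approximation.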
# Call Client to Service Matrix Function
c_s_matrix()
seconds = round(time.time()-t1, 4)
print seconds, 'seconds'
print 'Client to Service Matrix Shape --> ', All_Dist_MILES.shape
# Set Parameters
gbp.setParam('MIPFocus', 2) # Set MIP focus to 'Optimal' --> 2
gbp.setParam('MIPGapAbs', 0) # Set Absolute MIP Gap --> 0
gbp.setParam('GomoryPasses', 0) # Set Number of Gomory Cuts --> 0
gbp.setParam('ZeroHalfCuts', 0) # Set Number of Zero Half Cuts --> 0
gbp.setParam('ImpliedCuts', 0) # Set Number of Implied Cuts --> 0
gbp.setParam('BarConvTol', .000000001) # Set Barrier Convergence Tolerence
gbp.setParam('FeasibilityTol', .000000001) # Set Feasibility Tolerence
gbp.setParam('IntFeasTol', .000000001) # Set Integer Feasibility Tolerence
gbp.setParam('OptimalityTol', .000000001) # Set Optimality Tolerence
gbp.setParam('Method', 4) # Set Algorithm to 'concurrent': Dual Simplex and Barrier
gbp.setParam('DisplayInterval', 1) # Set Display Interval to 1
# Define the Set Cover function
def gbpSCLP():
t1 = time.time()
# Define Global Records Variable
global NEW_Records_SCLP
# 1. Read In Data
# Cost Matrix
Cij = All_Dist_MILES
# Create Aij: Determine Aij (nodes within S)
# S --> 1 = served; 0 = unserved
S = 1.25
# Aij
Aij = []
for i in np.nditer(Cij):
if i <= S:
outtext = 1
else:
outtext = 0
Aij.append(outtext)
rows, cols = Cij.shape
# Transform Aij into an array and reshape it to match Cij
Aij = np.array(Aij)
Aij = Aij.reshape(len(Cij),len(Cij[0]))
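    # Note (added comment): an equivalent vectorized construction, assuming Cij is a
    # NumPy array, is simply:  Aij = (Cij <= S).astype(int)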
client_nodes = range(len(Cij))
service_nodes = range(len(Cij[0]))
# 2. Create Model, Set MIP Focus, Add Variables, & Update Model
mSCLP = gbp.Model(" -- SCLP -- ")
# Add Service Decision Variables (j)
serv_var = []
for dest in service_nodes:
serv_var.append(mSCLP.addVar(vtype=gbp.GRB.BINARY,
lb=0,
ub=1,
name='x'+str(dest+1)))
# Update Model Variables
mSCLP.update()
# 3. Set Objective Function
mSCLP.setObjective(gbp.quicksum(serv_var[dest]
for dest in service_nodes),
gbp.GRB.MINIMIZE)
# 4. Add Constraints
# Add Coverage Constraints
for orig in client_nodes:
mSCLP.addConstr(gbp.quicksum(Aij[orig][dest]*serv_var[dest]
for dest in service_nodes) >= 1)
# 5. Optimize and Print Results
# Solve
try:
mSCLP.optimize()
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
# Write LP
mSCLP.write(path+'LP_Files/WaverlySCLP.lp')
t2 = time.time()-t1
# Record and Display Results
print '\n*****************************************************************************************'
selected = []
dbf1 = ps.open(path+'Snapped/SERVICE_Snapped.dbf')
NEW_Records_SCLP = []
for v in mSCLP.getVars():
if v.x > 0:
var = '%s' % v.VarName
selected.append(v.x)
for i in range(dbf1.n_records):
if var in dbf1.read_record(i):
x = dbf1.read_record(i)
NEW_Records_SCLP.append(x)
else:
pass
print ' | ', var
print ' | Selected Facility Locations ------------------ ^^^^ '
print ' | Coverage (S) in miles ------------------------ ', S
print ' | Client Nodes --------------------------------- ', len(client_nodes)
print ' | Facilities needed 100% coverage of clients --- ', len(selected)
print ' | Real Time to Optimize (sec.) ----------------- ', t2
print ' | Date/Time ------------------------------------ ', dt.datetime.now()
print '*****************************************************************************************'
print ' -- Set Cover Location Problem -- '
# Call SCLP Function
try:
gbpSCLP()
print '\nJames Gaboardi, 2015'
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
# Define SCLP shapefile function
def Create_SCLP():
try:
# Instantiate SCLP shapefile
SHP_SetCover = shp.Writer(shp.POINT)
# Add Points
for idy,idx,x,y in NEW_Records_SCLP:
SHP_SetCover.point(float(x), float(y))
# Add Fields
SHP_SetCover.field('y_ID')
SHP_SetCover.field('x_ID')
SHP_SetCover.field('LAT')
SHP_SetCover.field('LON')
# Add Records
for idy,idx,x,y in NEW_Records_SCLP:
SHP_SetCover.record(idy,idx,x,y)
# Save Shapefile
SHP_SetCover.save(path+'Results/Selected_Locations_SetCover')
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
# Call SCLP shapefile function
try:
Create_SCLP()
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
# define p-Median function
def gbpPMP():
t1 = time.time()
# Define Global Variables
global Ai
global AiSum
global NEW_Records_PMP
# 1. Data
# Demand
Ai = Ai
# Demand Sum
AiSum = AiSum
# Travel Costs
Cij = All_Dist_MILES
# Weighted Costs
Sij = Ai * Cij
# Total Client and Service nodes
client_nodes = range(len(Sij))
service_nodes = range(len(Sij[0]))
# 2. Create Model, Add Variables, & Update Model
# Instantiate Model
mPMP = gbp.Model(' -- p-Median -- ')
# Add Client Decision Variables (iXj)
client_var = []
for orig in client_nodes:
client_var.append([])
for dest in service_nodes:
client_var[orig].append(mPMP.addVar(vtype=gbp.GRB.BINARY,
lb=0,
ub=1,
obj=Sij[orig][dest],
name='x'+str(orig+1)+'_'+str(dest+1)))
# Add Service Decision Variables (j)
serv_var = []
for dest in service_nodes:
serv_var.append([])
serv_var[dest].append(mPMP.addVar(vtype=gbp.GRB.BINARY,
lb=0,
ub=1,
name='y'+str(dest+1)))
# Update the model
mPMP.update()
# 3. Set Objective Function
mPMP.setObjective(gbp.quicksum(Sij[orig][dest]*client_var[orig][dest]
for orig in client_nodes for dest in service_nodes),
gbp.GRB.MINIMIZE)
# 4. Add Constraints
# Assignment Constraints
for orig in client_nodes:
mPMP.addConstr(gbp.quicksum(client_var[orig][dest]
for dest in service_nodes) == 1)
# Opening Constraints
for orig in service_nodes:
for dest in client_nodes:
mPMP.addConstr((serv_var[orig][0] - client_var[dest][orig] >= 0))
# Facility Constraint
mPMP.addConstr(gbp.quicksum(serv_var[dest][0] for dest in service_nodes) == 2)
# 5. Optimize and Print Results
# Solve
try:
mPMP.optimize()
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
# Write LP
mPMP.write(path+'LP_Files/WaverlyPMP.lp')
t2 = time.time()-t1
# Record and Display Results
print '\n*************************************************************************'
selected = []
dbf1 = ps.open(path+'Snapped/SERVICE_Snapped.dbf')
NEW_Records_PMP = []
for v in mPMP.getVars():
if 'x' in v.VarName:
pass
elif v.x > 0:
var = '%s' % v.VarName
selected.append(var)
for i in range(dbf1.n_records):
if var in dbf1.read_record(i):
x = dbf1.read_record(i)
NEW_Records_PMP.append(x)
else:
pass
print ' | ', var
print ' | Selected Facility Locations -------------- ^^^^ '
print ' | Candidate Facilities [p] ----------------- ', len(selected)
val = mPMP.objVal
print ' | Objective Value (miles) ------------------ ', val
avg = float(mPMP.objVal)/float(AiSum)
print ' | Avg. Value / Client (miles) -------------- ', avg
print ' | Real Time to Optimize (sec.) ------------- ', t2
print ' | Date/Time -------------------------------- ', dt.datetime.now()
print '*************************************************************************'
print ' -- The p-Median Problem -- '
# Call p-Median Function
try:
gbpPMP()
print '\nJames Gaboardi, 2015'
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
# define PMP shapefile function
def Create_PMP():
try:
# Instantiate Shapefile
SHP_Median = shp.Writer(shp.POINT)
# Add Points
for idy,idx,x,y in NEW_Records_PMP:
SHP_Median.point(float(x), float(y))
# Add Fields
SHP_Median.field('y_ID')
SHP_Median.field('x_ID')
SHP_Median.field('LAT')
SHP_Median.field('LON')
# Add Records
for idy,idx,x,y in NEW_Records_PMP:
SHP_Median.record(idy,idx,x,y)
# Save Shapefile
SHP_Median.save(path+'Results/Selected_Locations_Pmedian')
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
# Call PMP shapefile function
try:
Create_PMP()
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
# define PCP shapefile function
def gbpPCP():
t1 = time.time()
# Define Global Variables
global Cij
global mPCP
global dbf1
global selected_PCP
global NEW_Records_PCP
# 1. Data
Cij = All_Dist_MILES
# Total Client and Service nodes
client_nodes = range(len(Cij))
service_nodes = range(len(Cij[0]))
# 2. Create Model, Add Variables, & Update Model
# Instantiate Model
mPCP = gbp.Model(' -- P-Center -- ')
# Add Client Decision Variables (iXj)
client_var = []
for orig in client_nodes:
client_var.append([])
for dest in service_nodes:
client_var[orig].append(mPCP.addVar(vtype=gbp.GRB.BINARY,
lb=0,
ub=1,
obj=Cij[orig][dest],
name='x'+str(orig+1)+'_'+str(dest+1)))
# Add Service Decision Variables (j)
serv_var = []
for dest in service_nodes:
serv_var.append([])
serv_var[dest].append(mPCP.addVar(vtype=gbp.GRB.BINARY,
lb=0,
ub=1,
name='y'+str(dest+1)))
# Add the Maximum travel cost variable
W = mPCP.addVar(vtype=gbp.GRB.CONTINUOUS,
lb=0.,
name='W')
# Update the model
mPCP.update()
# 3. Set the Objective function
mPCP.setObjective(W, gbp.GRB.MINIMIZE)
# 4. Add Constraints
# Add Assignment Constraints
for orig in client_nodes:
mPCP.addConstr(gbp.quicksum(client_var[orig][dest]
for dest in service_nodes) == 1)
# Add Opening constraints
for orig in service_nodes:
for dest in client_nodes:
mPCP.addConstr((serv_var[orig][0] - client_var[dest][orig] >= 0))
# Add Facility Constraints
mPCP.addConstr(gbp.quicksum(serv_var[dest][0] for dest in service_nodes) == 2)
# Add Maximum travel cost constraints
for orig in client_nodes:
mPCP.addConstr(gbp.quicksum(Cij[orig][dest]*client_var[orig][dest]
for dest in service_nodes) - W <= 0)
# 5. Optimize and Print Results
# Solve
try:
mPCP.optimize()
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
# Write LP
mPCP.write(path+'/LP_Files/WaverlyPCP.lp')
t2 = time.time()-t1
print '\n*************************************************************************'
# Record and Display Results
selected_PCP = []
dbf1 = ps.open(path+'Snapped/SERVICE_Snapped.dbf')
NEW_Records_PCP = []
for v in mPCP.getVars():
if 'x' in v.VarName:
pass
elif 'W' in v.VarName:
pass
elif v.x > 0:
var = '%s' % v.VarName
selected_PCP.append(var)
for i in range(dbf1.n_records):
if var in dbf1.read_record(i):
x = dbf1.read_record(i)
NEW_Records_PCP.append(x)
else:
pass
print ' | ', var, ' '
print ' | Selected Facility Locations -------------- ^^^^ ', ' '
print ' | Candidate Facilities [p] ----------------- ', len(selected_PCP), ' '
print ' | Objective Value (miles) ------------------ ', mPCP.objVal, ' '
print ' | Real Time to Optimize (sec.) ------------- ', t2
print ' | Date/Time -------------------------------- ', dt.datetime.now()
print '*************************************************************************'
print ' -- The p-Center Problem -- '
# Call p-Center Function
try:
gbpPCP()
print '\nJames Gaboardi, 2015'
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
# define PCP shapefile function
def Create_PCP():
try:
# Instantiate Shapefile
SHP_Center = shp.Writer(shp.POINT)
# Add Points
for idy,idx,x,y in NEW_Records_PCP:
SHP_Center.point(float(x), float(y))
# Add Fields
SHP_Center.field('y_ID')
SHP_Center.field('x_ID')
SHP_Center.field('LAT')
SHP_Center.field('LON')
# Add Records
for idy,idx,x,y in NEW_Records_PCP:
SHP_Center.record(idy,idx,x,y)
# Save Shapefile
SHP_Center.save(path+'Results/Selected_Locations_Pcenter')
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
# Call PCP shapefile function
try:
Create_PCP()
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
# p-Center Facility Location Problem
# This script creates a linear programming file to be read into an optimizer.
'''
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
'''
# Developed by: James D. Gaboardi, MSGIS
# 03/2015
# James Gaboardi
# Terminology & General Background for Facility Location and Summation Notation:
# * The objective of the p-center Facility Location Problem is to minimize the maximum cost
# of travel between service facilities and clients on a network.
# * [i] - a specific origin
# * [j] - a specifc destination
# * [n] - the set of origins
# * [m] - the set of destinations
# * [Cij] - travel costs between nodes
# * [W] - the maximum travel costs between service facilities and clients
# * [x#_#] - the client decision variable
# * [y#] - the service decision variable
# * [p] - the number of facilities to be sited
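# Compact algebraic statement of the model implied by the terms above (added summary;
# p = 2 in this notebook, matching the constraint builders defined below):
#
#     minimize   W
#     subject to sum_j x_ij = 1                для-- for every client i
#                sum_j y_j  = p
#                x_ij <= y_j                   for every i, j      (opening constraints)
#                sum_j Cij * x_ij - W <= 0     for every client i  (maximum-cost constraints)
#                x_ij, y_j binary,  W >= 0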
# DEFINED FUNCTIONS
# Assignment Constraints
def get_assignment_constraints():
outtext = ' '
for i in range(1,rows+1):
temp = ' '
for j in range(1,cols+1):
temp += 'x' + str(i) + '_' + str(j) + ' + '
outtext += temp[:-2] + '= 1\n'
return outtext
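# For illustration (not part of the original script): with rows = cols = 2 the function
# above emits a block of the form
#   x1_1 + x1_2 = 1
#   x2_1 + x2_2 = 1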
# Facility Constraint
def get_p_facilities():
outtext = ''
for i in range(1, cols+1):
temp = ''
temp += 'y' + str(i)
outtext += temp + ' + '
outtext = ' ' + outtext[:-2] + '= 2\n'
return outtext
# Opening Constraints
def get_opening_constraints_p_center():
outtext = ' '
for i in range(1, cols+1):
for j in range(1, rows+1):
outtext += ' - x' + str(j) + '_' + str(i) + ' + ' + 'y' + str(i) + ' >= 0\n'
return outtext
# Maximum Cost Constraints
def get_max_cost():
outtext = ''
for i in range(rows):
temp = ' '
for j in range(cols):
temp += str(Cij[i,j]) + ' x' + str(i+1) + '_' + str(j+1) + ' + '
outtext += temp[:-2] + '- W <= 0\n'
return outtext
# Declaration of Bounds
def get_bounds_allocation():
outtext = ' '
for i in range(rows):
temp = ''
for j in range(cols):
temp += ' 0 <= x' + str(i+1) + '_' + str(j+1) + ' <= 1\n'
outtext += temp
return outtext
def get_bounds_facility():
outtext = ''
for i in range(cols):
outtext += ' 0 <= y' + str(i+1) + ' <= 1\n'
return outtext
# Declaration of Decision Variables (form can be: Binary, Integer, etc.)
def get_decision_variables_p_center():
outtext = ' '
for i in range(1, rows+1):
temp = ''
for j in range(1, cols+1):
temp += 'x' + str(i) + '_' + str(j) + ' '
outtext += temp
return outtext
def get_facility_decision_variables_p_center():
outtext = ''
for i in range (1, cols+1):
outtext += 'y' + str(i) + ' '
return outtext
# DATA READS & VARIABLE DECLARATION
Cij = All_Dist_MILES
rows,cols = Cij.shape
# START TEXT FOR .lp FILE
# Declaration of Objective Function
text = 'Minimize\n'
text += ' obj: W\n'
# Declaration of Constraints
text += 'Subject To\n'
text += get_assignment_constraints()
text += get_p_facilities()
text += get_opening_constraints_p_center()
text += get_max_cost()
# Declaration of Bounds
text += 'Bounds\n'
text += get_bounds_allocation()
text += get_bounds_facility()
# Declaration of Decision Variables form: Binaries
text += 'Binaries\n'
text += get_decision_variables_p_center()
text += get_facility_decision_variables_p_center()
text += '\n'
text += 'End\n'
text += "'''\n"
text += "James Gaboardi, 2015"
# CREATE & WRITE .lp FILE TO DISK
# Fill path name -- File name must not have spaces.
outfile = open(path+'LP_Files/pCenter_Manual.lp', 'w')
outfile.write(text)
outfile.close()
# Define manual LP read PCP Function
def Manual_LP_PCP():
global Cij
global manualPCP
global dbf1
global selected_PCP_manual
global NEW_Records_PCP_Man
t1 = time.time()
# Instantiate Optimization model from .lp file
manualPCP = gbp.read(path+'LP_Files/pCenter_Manual.lp')
# Solve
try:
manualPCP.optimize()
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
t2 = time.time()-t1
# Record and Display Results
print '\n*************************************************************************'
selected_PCP_manual = []
dbf1 = ps.open(path+'Snapped/SERVICE_Snapped.dbf')
NEW_Records_PCP_Man = []
for v in manualPCP.getVars():
if 'x' in v.VarName:
pass
elif 'W' in v.VarName:
pass
elif v.x > 0:
var = '%s' % v.VarName
selected_PCP_manual.append(var)
for i in range(dbf1.n_records):
if var in dbf1.read_record(i):
x = dbf1.read_record(i)
NEW_Records_PCP_Man.append(x)
else:
pass
print ' | ', var, ' '
print ' | Selected Facility Locations -------------- ^^^^ ', ' '
print ' | Candidate Facilities [p] ----------------- ', len(selected_PCP_manual), ' '
print ' | Objective Value (miles) ------------------ ', manualPCP.objVal, ' '
print ' | Real Time to Optimize (sec.) ------------- ', t2
print ' | Date/Time -------------------------------- ', dt.datetime.now()
print '*************************************************************************'
print ' -- The p-Center Problem Manual LP Creation-- '
# Call Function
try:
Manual_LP_PCP()
print '\nJames Gaboardi, 2015'
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
# define Manual PCP shapefile function
def Create_PCP_Man():
try:
# Instantiate Shapefile
SHP_Center_Man = shp.Writer(shp.POINT)
# Add Points
for idy,idx,x,y in NEW_Records_PCP_Man:
SHP_Center_Man.point(float(x), float(y))
# Add Fields
SHP_Center_Man.field('y_ID')
SHP_Center_Man.field('x_ID')
SHP_Center_Man.field('LAT')
SHP_Center_Man.field('LON')
# Add Records
for idy,idx,x,y in NEW_Records_PCP_Man:
SHP_Center_Man.record(idy,idx,x,y)
# Save Shapefile
SHP_Center_Man.save(path+'Results/Selected_Locations_Pcenter_Man')
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
# Call Manual PCP shapefile function
try:
Create_PCP_Man()
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
# Define manual LP read PCP Function
def A_to_M_LP_PCP():
global Cij
global A_to_MPCP
global dbf1
global selected_PCP_A_to_M
global NEW_Records_PCP_A_to_M
t1 = time.time()
# Instantiate Optimization model from .lp file
A_to_MPCP = gbp.read(path+'LP_Files/WaverlyPCP.lp')
# Solve
try:
A_to_MPCP.optimize()
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
t2 = time.time()-t1
# Record and Display Results
print '\n*************************************************************************'
selected_PCP_A_to_M = []
dbf1 = ps.open(path+'Snapped/SERVICE_Snapped.dbf')
NEW_Records_PCP_A_to_M = []
for v in A_to_MPCP.getVars():
if 'x' in v.VarName:
pass
elif 'W' in v.VarName:
pass
elif v.x > 0:
var = '%s' % v.VarName
selected_PCP_A_to_M.append(var)
for i in range(dbf1.n_records):
if var in dbf1.read_record(i):
x = dbf1.read_record(i)
NEW_Records_PCP_A_to_M.append(x)
else:
pass
print ' | ', var, ' '
print ' | Selected Facility Locations -------------- ^^^^ ', ' '
print ' | Candidate Facilities [p] ----------------- ', len(selected_PCP_A_to_M), ' '
print ' | Objective Value (miles) ------------------ ', A_to_MPCP.objVal, ' '
print ' | Real Time to Optimize (sec.) ------------- ', t2
print ' | Date/Time -------------------------------- ', dt.datetime.now()
print '*************************************************************************'
print ' -- The p-Center Problem Solved by rereading the Auto-creation LP -- '
# Call the function
try:
A_to_M_LP_PCP()
print '\nJames Gaboardi, 2015'
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print exc_type, fname, 'Line Number -- ',exc_tb.tb_lineno
print ' ################################################################'
print ' < ISSUE : ', e, ' >'
print ' ################################################################'
print ' \n Automated p-Center'
print ' | Candidate Facilities [p] ----------------- ', len(selected_PCP), ' '
print ' | Selected Facility Locations -------------- ', selected_PCP
print ' | Objective Value (miles) ------------------ ', mPCP.objVal, ' '
print '\n Manual p-Center'
print ' | Candidate Facilities [p] ----------------- ', len(selected_PCP_manual), ' '
print ' | Selected Facility Locations -------------- ', selected_PCP_manual
print ' | Objective Value (miles) ------------------ ', manualPCP.objVal, ' '
print '\n\n | Same Objective Value (str) --------------- ', str(mPCP.ObjVal) == str(manualPCP.ObjVal)
print ' | Same Objective Value (float) ------------ ', mPCP.ObjVal.real == manualPCP.ObjVal.real
print ' | Same Facilities ------------------------- ', selected_PCP == selected_PCP_manual
print '\nIdentical Parameters? ', str(mPCP.Params) == str(manualPCP.Params)
print '\n1: ', ("%.50f" % mPCP.ObjVal)
print '2: ', ("%.50f" % manualPCP.ObjVal)
print '3: ', ("%.50f" % A_to_MPCP.ObjVal)
figsize(10,11)
# Draw Network Actual Roads and Nodes
nx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)
# Set Cover
SetCover = ps.open(path+'Results/Selected_Locations_SetCover.shp')
points_setcover = {}
for idx, coords in enumerate(SetCover):
setcover_g.add_node(idx)
points_setcover[idx] = coords
setcover_g.node[idx] = coords
nx.draw(setcover_g, points_setcover,
node_size=1400, alpha=1, node_color='g')
# p-Median
P_Med = ps.open(path+'Results/Selected_Locations_Pmedian.shp')
points_median = {}
for idx, coords in enumerate(P_Med):
median_g.add_node(idx)
points_median[idx] = coords
median_g.node[idx] = coords
nx.draw(median_g, points_median,
node_size=1000, alpha=1, node_color='r')
# p-Center
P_Cent = ps.open(path+'Results/Selected_Locations_Pcenter.shp')
points_center = {}
for idx, coords in enumerate(P_Cent):
center_g.add_node(idx)
points_center[idx] = coords
center_g.node[idx] = coords
nx.draw(center_g, points_center,
node_size=700, alpha=1, node_color='b')
# p-Center Manual
P_Cent_Man = ps.open(path+'Results/Selected_Locations_Pcenter_Man.shp')
points_center_man = {}
for idx, coords in enumerate(P_Cent_Man):
center_g_man.add_node(idx)
points_center_man[idx] = coords
center_g_man.node[idx] = coords
nx.draw(center_g_man, points_center_man,
node_size=300, alpha=1, node_color='y', node_shape='d' )
# Draw Graph of Random Client
nx.draw(GRAPH_client, points_client,
node_size=15, alpha=.5, node_color='k')
# Draw Graph of Snapped Service
nx.draw(g_service, ntw.pointpatterns['Rand_Points_SERVICE'].snapped_coordinates,
node_size=50, alpha=1, node_color='k')
# Legend (Ordered Dictionary)
LEGEND = OrderedDict()
LEGEND['Network Nodes']=g
LEGEND['Roads']=g
LEGEND['Optimal Set Cover (S=1.25)']=setcover_g
LEGEND['Optimal p-Median (p=2)']=median_g
LEGEND['Optimal p-Center (p=2)']=center_g
LEGEND['Optimal p-Center Manual(p=2)']=center_g_man
LEGEND['Client Nodes']=GRAPH_client
LEGEND['Snapped Service Nodes']=g_service
legend(LEGEND,
loc='lower right',
fancybox=True,
framealpha=0.5,
scatterpoints=1)
# Title
title('Waverly Hills\n Tallahassee, Florida', family='Times New Roman',
size=40, color='k', backgroundcolor='w', weight='bold')
# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.
arrow(-84.281, 30.49, 0.0, 0.005, width=.0003, head_width=0.001,
head_length=0.0015, fc='k', ec='k',alpha=0.75,)
annotate('N', xy=(-84.2815, 30.498), fontstyle='italic', fontsize='xx-large',
fontweight='heavy', alpha=0.75)
IPd.HTML('https://github.com/jGaboardi')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Optimization & GIS
Step2: PySAL.Network
Step3: Gurobi 6.5
Step4: "Traditional" Conceptual Model
Step5: <b>So why do things differently?
Step6: 2. Demonstration
Step7: 2.1 Data preparation and creation
Step8: Loop Road
Step9: <u>Self-intersecting Road</u>
Step10: 2.1.2 Instantiate all graphs to be drawn
Step11: 2.1.3 Create Bounding Box from Waverly.shp
Step12: In order to create simulated demand and service nodes for this toy problem I create a bounding latitude and longitude box from the extremes of the 'Waverly.shp' of the Waverly Hills neighborhood. The total area is roughly 1.5 square miles.
Step13: 2.1.5 Zip the latitude and longitude lists together
Step14: 2.1.6 Create empty random points dictionaries
Step15: 2.1.7 Fill dictionaries of random roints
Step16: 2.1.8 Draw roads, simplified network, and random client & service nodes
Step17: Network Characteristics
Step18: Instantiate simplified network shapefile
Step19: 2.1.9 Create weights at nodes and sum
Step20: 2.1.10 Instantiate client and service shapefiles
Step21: Instantiate Simplified Network
Step22: 2.1.11a Snap observations to ntw
Step23: 2.1.11b Snap observations to SimNet
Step24: 2.1.12 Draw ntw, snapped coords, & random coords
Step25: 2.1.13 Instantiate shapefile of service nodes snapped to ntw
Step26: 2.1.14 Create distance matrices
Step27: 2.2 Mathematical Optimization
Step28: 2.2.1a Set Cover test [S = 1.25 miles]
Step29: 2.2.1b Instantiate selected Set Cover location shapefile
Step30: 2.2.2a p-Median test [p = 2]
Step31: 2.2.2b Instantiate Selected p-median shapefile
Step32: 2.2.3a p-Center test [p = 2]
Step33: 2.2.3b Instantiate selected p-Center shapefile
Step34: 2.2.4a Manual p-Center .lp File Creation
Step35: 2.2.4b Manual p-Center .lp File Solved
Step36: 2.2.4c Instantiate selected manual p-Center shapefile
Step37: 2.2.5 Automated p-Center .lp File Solved
Step38: 2.2.4d p-Center Results compared
Step39: 2.3 Selected locations
Step40: 3. Next Steps
|
15,395 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q -U tensorflow-text
import tensorflow as tf
import tensorflow_text as text
import functools
examples = {
"text_a": [
b"Sponge bob Squarepants is an Avenger",
b"Marvel Avengers"
],
"text_b": [
b"Barack Obama is the President.",
b"President is the highest office"
],
}
dataset = tf.data.Dataset.from_tensor_slices(examples)
next(iter(dataset))
_VOCAB = [
# Special tokens
b"[UNK]", b"[MASK]", b"[RANDOM]", b"[CLS]", b"[SEP]",
# Suffixes
b"##ack", b"##ama", b"##ger", b"##gers", b"##onge", b"##pants", b"##uare",
b"##vel", b"##ven", b"an", b"A", b"Bar", b"Hates", b"Mar", b"Ob",
b"Patrick", b"President", b"Sp", b"Sq", b"bob", b"box", b"has", b"highest",
b"is", b"office", b"the",
]
_START_TOKEN = _VOCAB.index(b"[CLS]")
_END_TOKEN = _VOCAB.index(b"[SEP]")
_MASK_TOKEN = _VOCAB.index(b"[MASK]")
_RANDOM_TOKEN = _VOCAB.index(b"[RANDOM]")
_UNK_TOKEN = _VOCAB.index(b"[UNK]")
_MAX_SEQ_LEN = 8
_MAX_PREDICTIONS_PER_BATCH = 5
_VOCAB_SIZE = len(_VOCAB)
lookup_table = tf.lookup.StaticVocabularyTable(
tf.lookup.KeyValueTensorInitializer(
keys=_VOCAB,
key_dtype=tf.string,
values=tf.range(
tf.size(_VOCAB, out_type=tf.int64), dtype=tf.int64),
value_dtype=tf.int64),
num_oov_buckets=1
)
bert_tokenizer = text.BertTokenizer(lookup_table, token_out_type=tf.string)
bert_tokenizer.tokenize(examples["text_a"])
bert_tokenizer.tokenize(examples["text_b"])
bert_tokenizer = text.BertTokenizer(lookup_table, token_out_type=tf.int64)
segment_a = bert_tokenizer.tokenize(examples["text_a"])
segment_a
segment_b = bert_tokenizer.tokenize(examples["text_b"])
segment_b
segment_a = segment_a.merge_dims(-2, -1)
segment_a
segment_b = segment_b.merge_dims(-2, -1)
segment_b
trimmer = text.RoundRobinTrimmer(max_seq_length=[_MAX_SEQ_LEN])
trimmed = trimmer.trim([segment_a, segment_b])
trimmed
segments_combined, segments_ids = text.combine_segments(
[segment_a, segment_b],
start_of_sequence_id=_START_TOKEN, end_of_segment_id=_END_TOKEN)
segments_combined, segments_ids
random_selector = text.RandomItemSelector(
max_selections_per_batch=_MAX_PREDICTIONS_PER_BATCH,
selection_rate=0.2,
unselectable_ids=[_START_TOKEN, _END_TOKEN, _UNK_TOKEN]
)
selected = random_selector.get_selection_mask(
segments_combined, axis=1)
selected
input_ids = tf.ragged.constant([[19, 7, 21, 20, 9, 8], [13, 4, 16, 5], [15, 10, 12, 11, 6]])
mask_values_chooser = text.MaskValuesChooser(_VOCAB_SIZE, _MASK_TOKEN, 0.8)
mask_values_chooser.get_mask_values(input_ids)
masked_token_ids, masked_pos, masked_lm_ids = text.mask_language_model(
segments_combined,
item_selector=random_selector, mask_values_chooser=mask_values_chooser)
masked_token_ids
tf.gather(_VOCAB, masked_token_ids)
masked_pos
masked_lm_ids
tf.gather(_VOCAB, masked_lm_ids)
# Prepare and pad combined segment inputs
input_word_ids, input_mask = text.pad_model_inputs(
masked_token_ids, max_seq_length=_MAX_SEQ_LEN)
input_type_ids, _ = text.pad_model_inputs(
    segments_ids, max_seq_length=_MAX_SEQ_LEN)
# Prepare and pad masking task inputs
masked_lm_positions, masked_lm_weights = text.pad_model_inputs(
    masked_pos, max_seq_length=_MAX_PREDICTIONS_PER_BATCH)
masked_lm_ids, _ = text.pad_model_inputs(
masked_lm_ids, max_seq_length=_MAX_PREDICTIONS_PER_BATCH)
model_inputs = {
"input_word_ids": input_word_ids,
"input_mask": input_mask,
"input_type_ids": input_type_ids,
"masked_lm_ids": masked_lm_ids,
"masked_lm_positions": masked_lm_positions,
"masked_lm_weights": masked_lm_weights,
}
model_inputs
def bert_pretrain_preprocess(vocab_table, features):
# Input is a string Tensor of documents, shape [batch, 1].
text_a = features["text_a"]
text_b = features["text_b"]
# Tokenize segments to shape [num_sentences, (num_words)] each.
tokenizer = text.BertTokenizer(
vocab_table,
token_out_type=tf.int64)
segments = [tokenizer.tokenize(text).merge_dims(
1, -1) for text in (text_a, text_b)]
# Truncate inputs to a maximum length.
trimmer = text.RoundRobinTrimmer(max_seq_length=6)
trimmed_segments = trimmer.trim(segments)
# Combine segments, get segment ids and add special tokens.
segments_combined, segment_ids = text.combine_segments(
trimmed_segments,
start_of_sequence_id=_START_TOKEN,
end_of_segment_id=_END_TOKEN)
# Apply dynamic masking task.
masked_input_ids, masked_lm_positions, masked_lm_ids = (
text.mask_language_model(
segments_combined,
random_selector,
mask_values_chooser,
)
)
# Prepare and pad combined segment inputs
input_word_ids, input_mask = text.pad_model_inputs(
masked_input_ids, max_seq_length=_MAX_SEQ_LEN)
input_type_ids, _ = text.pad_model_inputs(
      segment_ids, max_seq_length=_MAX_SEQ_LEN)
# Prepare and pad masking task inputs
masked_lm_positions, masked_lm_weights = text.pad_model_inputs(
      masked_lm_positions, max_seq_length=_MAX_PREDICTIONS_PER_BATCH)
masked_lm_ids, _ = text.pad_model_inputs(
masked_lm_ids, max_seq_length=_MAX_PREDICTIONS_PER_BATCH)
model_inputs = {
"input_word_ids": input_word_ids,
"input_mask": input_mask,
"input_type_ids": input_type_ids,
"masked_lm_ids": masked_lm_ids,
"masked_lm_positions": masked_lm_positions,
"masked_lm_weights": masked_lm_weights,
}
return model_inputs
dataset = tf.data.Dataset.from_tensors(examples)
dataset = dataset.map(functools.partial(
bert_pretrain_preprocess, lookup_table))
next(iter(dataset))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: BERT Preprocessing with TF Text
Step2: Our data contains two text features and we can create a example tf.data.Dataset. Our goal is to create a function that we can supply Dataset.map() with to be used in training.
Step3: Tokenizing
Step4: Let's construct a text.BertTokenizer using the above vocabulary and tokenize the text inputs into a RaggedTensor.`.
Step5: Text output from text.BertTokenizer allows us see how the text is being tokenized, but the model requires integer IDs. We can set the token_out_type param to tf.int64 to obtain integer IDs (which are the indices into the vocabulary).
Step6: text.BertTokenizer returns a RaggedTensor with shape [batch, num_tokens, num_wordpieces]. Because we don't need the extra num_tokens dimensions for our current use case, we can merge the last two dimensions to obtain a RaggedTensor with shape [batch, num_wordpieces]
Step7: Content Trimming
Step8: trimmed now contains the segments where the number of elements across a batch is 8 elements (when concatenated along axis=-1).
Step9: Masked Language Model Task
Step10: Choosing the Masked Value
Step11: When supplied with a RaggedTensor input, text.MaskValuesChooser returns a RaggedTensor of the same shape with either _MASK_VALUE (0), a random ID, or the same unchanged id.
Step12: Let's dive deeper and examine the outputs of mask_language_model(). The output of masked_token_ids is
Step13: Remember that our input is encoded using a vocabulary. If we decode masked_token_ids using our vocabulary, we get
Step14: Notice that some wordpiece tokens have been replaced with either [MASK], [RANDOM] or a different ID value. masked_pos output gives us the indices (in the respective batch) of the tokens that have been replaced.
Step15: masked_lm_ids gives us the original value of the token.
Step16: We can again decode the IDs here to get human readable values.
Step17: Padding Model Inputs
Step18: Review
Step19: We previously constructed a tf.data.Dataset and we can now use our assembled preprocessing function bert_pretrain_preprocess() in Dataset.map(). This allows us to create an input pipeline for transforming our raw string data into integer inputs and feed directly into our model.
|
15,396 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np  # numerical computing module
import matplotlib.pyplot as plt  # plotting module
# this line makes the plots show up inline in the notebook
%matplotlib inline
df = pd.DataFrame()
df['Pastor']=np.random.randint(100, size=7)
df['Tripas']=np.random.randint(100, size=7)
df['Chorizo']=np.random.randint(100, size=7)
df.index=['Lunes','Martes','Miercoles','Jueves','Viernes','Sabado','Domingo']
df
df.describe()
df['Chorizo'].describe()
df.loc['Lunes']  # label-based row selection (.ix was removed from pandas)
df.boxplot()
plt.title("Boxplot")
plt.show()
df['Tacos Total']=df['Pastor']+df['Tripas']+df['Chorizo']
df
df=df.drop("Chorizo",axis=1)
df
df.to_csv("Tacos.csv")
df=pd.read_csv("Tacos.csv")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Y yo para que quiero eso? De que sirve pandas?
Step2: No nos sirve nada vacio, entonces agreguemos le informacion!
Step3: Jugando con el Dataframe!
Step4: pero talvez solo queramos estadisticas de Pastor, entonces seria
Step5: o talvez solo nos interese del Lunes
Step6: Grafica de cajas 'Boxplot'
Step7: Combinando columnas
Step8: Borrando columnas
Step9: Exportando a otro formato
Step10: Leyendo un DataFrame de otro formato
|
15,397 | <ASSISTANT_TASK:>
Python Code:
from astropy import constants as const
import numpy as np
import matplotlib.pyplot as plt
#This just needed for the Notebook to show plots inline.
%matplotlib inline
print(const.e.value)
print(const.e)
#Atomic Number of Gold
Z = 79
e = const.e.value
E = 7.7e6*e
eps0 = const.eps0.value
sigma = const.a0.value/100.
#print(Z,e,E,eps0,sigma)
N = 1000000 #Start small, and increase to 1 million when you're sure the code runs correctly.
#Function to generate two sets of random Gaussian numbers.
def gaussian():
r = np.sqrt(-2*sigma*sigma*np.log(1-np.random.random()))
theta=2*np.pi*np.random.random()
x=r*np.cos(theta)
y=r*np.sin(theta)
return x,y
#Main Programme
count = 0 #Initiate count of particles bounced back
for i in range(N):
x,y=gaussian()
b=np.sqrt(x*x+y*y)
#If this is true the particle is bounced back
if b<Z*e*e/(2*np.pi*eps0*E):
count +=1
print(count, "particles were reflected out of ", N, "incident")
print("this is a bounce fraction of {0:.5f} +/- {1:.5f}".format(count/N,np.sqrt(count)/N))
#Atomic Number of Gold
Z = 79
e = const.e.value
E = 7.7e6*e
eps0 = const.eps0.value
sigma = const.a0.value/100.
#print(Z,e,E,eps0,sigma)
N = 1000000 #Start small, and increase to 1 million when you're sure the code runs correctly.
#Main Programme
count = 0 #Initiate count of particles bounced back
for i in range(N):
b= np.sqrt(-2*sigma*sigma*np.log(1-np.random.random()))
#If this is true the particle is bounced back
if b<Z*e*e/(2*np.pi*eps0*E):
count +=1
print(count, "particles were reflected out of ", N, "incident")
print("this is a bounce fraction of {0:.5f} +/- {1:.5f}".format(count/N,np.sqrt(count)/N))
?np.random.normal
#Atomic Number of Gold
Z = 79
e = const.e.value
E = 7.7e6*e
eps0 = const.eps0.value
sigma = const.a0.value/100.
print(Z,e,E,eps0,sigma)
N = 1000 #Start small, and increase to 1 million when you're sure the code runs correctly.
#Main Programme
count = 0 #Initiate count of particles bounced back
for i in range(N):
x=np.random.normal(0,sigma,1)
y=np.random.normal(0,sigma,1)
b=np.sqrt(x*x+y*y)
#If this is true the particle is bounced back
if b<Z*e*e/(2*np.pi*eps0*E):
count +=1
print(count, "particles were reflected out of ", N, "incident")
print("this is a bounce fraction of {0:.5f} +/- {1:.5f}".format(count/N,np.sqrt(count)/N))
#Define the function
def f(x):
fx = (np.sin(1/(x*(2-x))))**2
return fx
#Integrate the function from x=0-2
#Note that you need to know the maximum value of the function
#over this range (which is y=1), and therefore the area of the box
#from which we draw random number is A=2.
N=1000000
k=0
for i in range(N):
x=2*np.random.random()
y=np.random.random()
if y<f(x):
k+=1
A=2.
I=A*k/N
print("The integral is equal to I = ",I)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notice something about $b$?
Step2: In Class Exercise
|
15,398 | <ASSISTANT_TASK:>
Python Code:
from sympy.physics.mechanics import mechanics_printing
mechanics_printing()
from sympy import var, Function, pi
var("l1:4")
var("m1:4")
var("g t")
q1 = Function("q1")(t)
q2 = Function("q2")(t)
q3 = Function("q3")(t)
def DH(params):
from sympy import Matrix, sin, cos
a, d, α, θ = params
A = Matrix([[cos(θ), -sin(θ)*cos(α), sin(θ)*sin(α), a*cos(θ)],
[sin(θ), cos(θ)*cos(α), -cos(θ)*sin(α), a*sin(θ)],
[0, sin(α), cos(α), d],
[0, 0, 0, 1]])
return A
A1 = DH([0, l1, pi/2, q1])
A2 = DH([l2, 0, 0, q2])
A3 = DH([l3, 0, 0, q3])
As = [A1, A2, A3]
As
def transf_art(transformaciones):
from sympy import eye, simplify
Hs = [eye(4)]
for trans in transformaciones:
Hs.append(simplify(Hs[-1]*trans))
return Hs[1:]
Hs = transf_art(As)
Hs
ps = [H[0:3, 3:4] for H in Hs]
ps
# WRITE YOUR CODE HERE
raise NotImplementedError
Rs
from nose.tools import assert_equal
from sympy import Matrix, sin, cos, var
R1 = Matrix([[cos(q1), 0, sin(q1)],
[sin(q1), 0, -cos(q1)],
[0, 1, 0]])
R2 = Matrix([[cos(q1)*cos(q2), -sin(q2)*cos(q1), sin(q1)],
[sin(q1)*cos(q2), -sin(q2)*sin(q1), -cos(q1)],
[sin(q2), cos(q2), 0]])
R3 = Matrix([[cos(q1)*cos(q2+q3), -sin(q2+q3)*cos(q1), sin(q1)],
[sin(q1)*cos(q2+q3), -sin(q2+q3)*sin(q1), -cos(q1)],
[sin(q2+q3), cos(q2+q3), 0]])
assert_equal(Rs[0], R1)
assert_equal(Rs[1], R2)
assert_equal(Rs[2], R3)
q = [q1, q2, q3]
ps[1].jacobian(q)
# WRITE YOUR CODE HERE
raise NotImplementedError
Jvs
from nose.tools import assert_equal
assert_equal(Jvs[0], ps[0].jacobian(q))
assert_equal(Jvs[1], ps[1].jacobian(q))
assert_equal(Jvs[2], ps[2].jacobian(q))
o1 = Matrix([[0], [0], [q1]])
o1
o2 = Matrix([[0], [q2], [q1]])
o2
o3 = Matrix([[0], [q2 + q3], [q1]])
o3
os = [o1, o2, o3]
# WRITE YOUR CODE HERE
raise NotImplementedError
Jωs
from nose.tools import assert_equal
assert_equal(Jωs[0], os[0].jacobian(q))
assert_equal(Jωs[1], os[1].jacobian(q))
assert_equal(Jωs[2], os[2].jacobian(q))
def tens_iner(q):
from sympy import Matrix
Is = []
for i in range(len(q)):
Js = [var("J_{" + str(i+1) + "_" + eje + "}") for eje in "xyz"]
I = Matrix([[Js[0], 0, 0], [0, Js[1], 0], [0, 0, Js[2]]])
Is.append(I)
return Is
Is = tens_iner(q)
Is
ms = [m1, m2, m3]
def matriz_masas(ms, Jvs, Is, Jωs):
from sympy import zeros, expand, simplify
M = zeros(len(ms))
for m, Jv, I, Jω in zip(ms, Jvs, Is, Jωs):
M += simplify(expand(m*Jv.T*Jv + Jω.T*I*Jω))
return M
M = matriz_masas(ms, Jvs, Is, Jωs)
M
from sympy import simplify, collect
simplify(M)
M[0].collect(l2**2).collect(l3**2).collect(m3).simplify()
M = simplify(M.applyfunc(lambda M: collect(M, l2**2)).applyfunc(lambda M: collect(M, l3**2)).applyfunc(lambda M: collect(M, m3)))
M
def christoffel(M, q, i, j, k):
from sympy import Rational, simplify
    # WRITE YOUR CODE HERE
raise NotImplementedError
return simplify(simbolo)
from nose.tools import assert_equal
from sympy import Rational, expand
assert_equal(christoffel(M, q, 0,0,1), expand(Rational(1,2)*((m2+m3)*l2**2*sin(2*q2) + m3*l3**2*sin(2*(q2+q3))) + m3*l2*l3*sin(2*q2+q3)))
assert_equal(christoffel(M, q, 0,0,0), 0)
c113 = christoffel(M, q, 0,0,2)
c113
def simbolos_chris(M, q):
simbolos = []
for i in range(len(q)):
sim = []
for j in range(len(q)):
s = [christoffel(M, q, i, j, k) for k in range(len(q))]
sim.append(s)
simbolos.append(sim)
return simbolos
simbolos_christoffel = simbolos_chris(M, q)
simbolos_christoffel[0][0][2]
def matriz_coriolis(simbolos, q̇):
from sympy import Matrix
coriolis = []
for k in range(len(simbolos)):
cor = []
for j in range(len(simbolos)):
c=0
for i in range(len(simbolos)):
c+= simbolos[i][j][k]*q̇[i]
cor.append(c)
coriolis.append(cor)
return Matrix(coriolis)
q̇ = [qi.diff(t) for qi in q]  # joint velocity vector; assumed definition, not shown earlier in this excerpt
C = simplify(matriz_coriolis(simbolos_christoffel, q̇))
C
def ener_pot(params):
m, h = params
U = m*g*h
return U
h1, h2, h3 = ps[0][2], ps[1][2], ps[2][2]
U1 = ener_pot([m1, h1])
U2 = ener_pot([m2, h2])
U3 = ener_pot([m3, h3])
U = U1 + U2 + U3
def vector_grav(U, q):
from sympy import Matrix
return Matrix([[U]]).jacobian(q).T
G = vector_grav(U, q)
G
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Y copiando la función para generar matrices de transformación homogéneas a partir de los parametros DH
Step2: He guardado todas las matrices de transformación homgénea en un solo arreglo, de tal manera que puedo hacer una función que tome todas las transformaciones de cada eslabon, y me devuelva las transformaciones a cada articulacion
Step3: Una vez obtenido esto, puedo obtener las posiciones de cada articulación con una List comprehension
Step4: Ejercicio
Step5: Si ahora declaramos un vector con todos los grados de libertad
Step6: podemos obtener el Jacobiano traslacional de cada articulacion con
Step7: Ejercicio
Step8: Un paso que tenemos que hacer manualmente es definir los vectores de orientación (compuesto por $\phi$, $\theta$ y $\psi$) ya que se tiene un sistema sobrerestringido, pero son lo suficientemente faciles de obtener
Step9: y si se guarda una lista con cada uno de estos vectores, se puede obtener el jacobiano rotacional de la misma manera que el traslacional
Step10: Ejercicio
Step11: Otra cosa que podemos hacer en automatico es definir los tensores de inercia necesarios para el manipulador, ya que esto solo depende del numero de grados de libertad, defino la función que va a tomar el vector con el estado del sistema, $q$, y va a calcular una lista con los tensores
Step12: definiré una lista con todas las masas de los eslabones
Step13: De tal manera que podamos hacer una función que tome estas, los jacobianos y los tensores de inercia,para calcular la matriz de masas
Step14: mmm... un poco grande, tratemos de simplificar un poco
Step15: mmm... un poco mejor, pero aun no es viable; los terminos del segundo y tercer grado de libertad son simples, el problema es el primero, tratemos de simplificar solo ese termino, intentaremos factorizar $l_2^2$ y $l_3^2$ y despues simplificar
Step16: esto se ve aceptable, apliquemoslo a toda la matriz
Step17: Ejercicio
Step18: Con esta función podemos calcular cualquier simbolo de Christoffel (recordando que los indices en Python empiezan en $0$
Step19: y crear una función que calcule todos los simbolos de Christoffel a partir de esta función
Step20: Y ya con los simbolos de Christoffel, calcular la matriz de Coriolis
Step21: En este punto tenemos un resultado lo suficientemente compacto para copiarlo a las definiciones numéricas, por lo que seguimos al vector de gravedad
Step22: Y calculando las energías potenciales
|
15,399 | <ASSISTANT_TASK:>
Python Code:
# Allow multiple results to be displayed per cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import numpy as np
np.set_printoptions(precision=4)
# Embedding a video (left commented out)
# from IPython.display import YouTubeVideo
# YouTubeVideo("8iGzBMboA0I")
# Build a slice (as a tuple)
dims = np.index_exp[10:28:1,3:13]
dims
x = np.arange(9.).reshape(3, 3)
# Acts like a fill/replacement operation
# Where the condition is true, take the value from x; otherwise take the fallback (here the scalar -1)
np.where(x < 5, x, -1)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 阅读文献
Step2: The SVD algorithm factorizes a matrix into one matrix with orthogonal columns and one with orthogonal rows (along with a diagonal matrix, which contains the relative importance of each factor)
|