Unnamed: 0 | text_prompt | code_prompt
---|---|---|
15,300 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Normal distribution
The standard normal distribution takes the shape of a bell curve. It is also called the Gaussian distribution. Many values in nature are believed to follow a normal distribution. The equation for the normal distribution is
$$y = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
where
$\mu$ is mean
$\sigma$ is standard deviation
$\pi$ = 3.14159..
$e$ = 2.71828.. (the base of the natural logarithm)
Step1: The above is the standard normal distribution. Its mean is 0 and SD is 1. About 95% of values fall within $\mu \pm 2 SD$ and 99.7% within $\mu \pm 3 SD$
The area under this curve is 1 which gives the probability of values falling within the range of standard normal.
A common use is to find the probability of a value falling within a particular range. For instance, find $p(-2 \le z \le 2)$, which is the probability of a value falling within $\mu \pm 2SD$. This is calculated by summing (integrating) the area under the curve between these bounds.
$$p(-2 \le z \le 2) = 0.9544$$ which is a 95.44% probability.
Similarly
$$p(z \ge 5.1) = 0.00000029$$
Finding z score and p values using SciPy
The standard normal is useful as a z table to look up the probability of a z score (x axis). You can use Scipy to accomplish this.
Levels of significance
As a rule of thumb, a z score whose p value is smaller than 0.005 is considered significant, as such a value has a very low probability of occurring. Thus, there is little chance of it occurring randomly and hence there is probably a force acting on it (a significant effect, not random chance).
Transformation to standard normal
If the distribution of a phenomenon follows a normal distribution, then you can transform it to the standard normal, so you can measure z scores. To do so,
$$\text{std normal value} = \frac{\text{observed} - \mu}{\sigma}$$ You subtract the mean and divide by the SD of the distribution.
Example
Let X be the age of US presidents at inauguration, with $X \sim N(\mu = 54.8, \sigma=6.2)$. What is the probability that a president chosen at random is less than 44 years of age?
We need to find $p(x<44)$. First we need to transform to standard normal.
$$p(z< \frac{44-54.8}{6.2})$$
$$p(z<-1.741) = 0.0409 \approx 4\%$$
Finding z score and p values using SciPy
The standard normal is useful as a z table to look up the probability of a z score (x axis). You can use Scipy to accomplish this.
Step2: Let us try for some common z scores
Step3: As you noticed, the norm.cdf() function gives the cumulative probability (left-tail area) up to the given z score, here for z values from about -3 to 3. If you need the right-tailed probability, you simply subtract this value from 1.
Finding z score from a p value
Sometimes, you have the probability (p value), but want to find the z score or how many SD does this value fall from mean. You can do this inverse using ppf().
Step4: As you can see, the ppf() function gives only positive z scores here, so you need to apply $\pm$ to it yourself.
Transformation to standard normal and machine learning
Transforming features to standard normal has applications in machine learning. As each feature has a different unit, their ranges and standard deviations vary. Hence we scale them all to a standard normal distribution with mean=0 and SD=1. This way a learner finds the variables that are truly influential, and not simply the ones with a larger range.
To accomplish this easily, we use scikit-learn's StandardScaler object as shown below
Step5: Now let us use scikit-learn to easily transform this dataset
Step6: As you see above, the shape of the distribution is the same; only the values are scaled.
Assessing normality of a distribution
To assess how normal a distribution of values is, we sort the values, then plot them against sorted values of a standard normal distribution. If the values fall on a straight line, they are normally distributed; otherwise they exhibit skewness and/or kurtosis. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
vals = np.random.standard_normal(100000)
len(vals)
fig, ax = plt.subplots(1,1)
hist_vals = ax.hist(vals, bins=200, color='red', density=True)
Explanation: Normal distribution
The standard normal distribution takes the shape of a bell curve. It is also called the Gaussian distribution. Many values in nature are believed to follow a normal distribution. The equation for the normal distribution is
$$y = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
where
$\mu$ is mean
$\sigma$ is standard deviation
$\pi$ = 3.14159..
$e$ = 2.71828.. (the base of the natural logarithm)
End of explanation
import scipy.stats as st
# compute the p value for a z score
st.norm.cdf(-1.741)
Explanation: The above is the standard normal distribution. Its mean is 0 and SD is 1. About 95% of values fall within $\mu \pm 2 SD$ and 99.7% within $\mu \pm 3 SD$
The area under this curve is 1 which gives the probability of values falling within the range of standard normal.
A common use is to find the probability of a value falling within a particular range. For instance, find $p(-2 \le z \le 2)$, which is the probability of a value falling within $\mu \pm 2SD$. This is calculated by summing (integrating) the area under the curve between these bounds.
$$p(-2 \le z \le 2) = 0.9544$$ which is a 95.44% probability.
Similarly
$$p(z \ge 5.1) = 0.00000029$$
Finding z score and p values using SciPy
The standard normal is useful as a z table to look up the probability of a z score (x axis). You can use Scipy to accomplish this.
Levels of significance
As a rule of thumb, a z score whose p value is smaller than 0.005 is considered significant, as such a value has a very low probability of occurring. Thus, there is little chance of it occurring randomly and hence there is probably a force acting on it (a significant effect, not random chance).
Transformation to standard normal
If the distribution of a phenomenon follows a normal distribution, then you can transform it to the standard normal, so you can measure z scores. To do so,
$$\text{std normal value} = \frac{\text{observed} - \mu}{\sigma}$$ You subtract the mean and divide by the SD of the distribution.
Example
Let X be the age of US presidents at inauguration, with $X \sim N(\mu = 54.8, \sigma=6.2)$. What is the probability that a president chosen at random is less than 44 years of age?
We need to find $p(x<44)$. First we need to transform to standard normal.
$$p(z< \frac{44-54.8}{6.2})$$
$$p(z<-1.741) = 0.0409 \approx 4\%$$
Finding z score and p values using SciPy
The standard normal is useful as a z table to look up the probability of a z score (x axis). You can use Scipy to accomplish this.
End of explanation
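# Illustrative check (not part of the original notebook): the presidents example
# from the explanation above, z = (44 - 54.8) / 6.2, looked up with norm.cdf().
print(st.norm.cdf((44 - 54.8) / 6.2))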
[st.norm.cdf(-3), st.norm.cdf(-1), st.norm.cdf(0), st.norm.cdf(1), st.norm.cdf(2)]
Explanation: Let us try for some common z scores:
End of explanation
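# Illustrative addition (not in the original): the right-tail probability is
# 1 minus the left-tail cdf, or equivalently SciPy's survival function norm.sf().
print(1 - st.norm.cdf(2), st.norm.sf(2))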
# Find Z score for a probability of 0.97 (2sd)
st.norm.ppf(0.97)
[st.norm.ppf(0.95), st.norm.ppf(0.97), st.norm.ppf(0.98), st.norm.ppf(0.99)]
Explanation: As you noticed, the norm.cdf() function gives the cumulative probability (left-tail area) up to the given z score, here for z values from about -3 to 3. If you need the right-tailed probability, you simply subtract this value from 1.
Finding z score from a p value
Sometimes, you have the probability (p value), but want to find the z score or how many SD does this value fall from mean. You can do this inverse using ppf().
End of explanation
demo_dist = 55 + np.random.randn(200) * 3.4
std_normal = np.random.randn(200)
[demo_dist.mean(), demo_dist.std(), demo_dist.min(), demo_dist.max()]
[std_normal.mean(), std_normal.std(), std_normal.min(), std_normal.max()]
Explanation: As you can see, the ppf() function gives only positive z scores here, so you need to apply $\pm$ to it yourself.
Transformation to standard normal and machine learning
Transforming features to standard normal has applications in machine learning. As each feature has a different unit, their ranges and standard deviations vary. Hence we scale them all to a standard normal distribution with mean=0 and SD=1. This way a learner finds the variables that are truly influential, and not simply the ones with a larger range.
To accomplish this easily, we use scikit-learn's StandardScaler object as shown below:
End of explanation
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
demo_dist = demo_dist.reshape(200,1)
demo_dist_scaled = scaler.fit_transform(demo_dist)
[round(demo_dist_scaled.mean(),3), demo_dist_scaled.std(), demo_dist_scaled.min(), demo_dist_scaled.max()]
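# Cross-check (illustrative, not in the original): the scaled values match the
# (observed - mean) / SD transformation described earlier.
print(np.allclose(demo_dist_scaled, (demo_dist - demo_dist.mean()) / demo_dist.std()))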
fig, axs = plt.subplots(2,2, figsize=(15,8))
p1 = axs[0][0].scatter(sorted(demo_dist), sorted(std_normal))
axs[0][0].set_title("Scatter of original dataset against standard normal")
p1 = axs[0][1].scatter(sorted(demo_dist_scaled), sorted(std_normal))
axs[0][1].set_title("Scatter of scaled dataset against standard normal")
p2 = axs[1][0].hist(demo_dist, bins=50)
axs[1][0].set_title("Histogram of original dataset against standard normal")
p3 = axs[1][1].hist(demo_dist_scaled, bins=50)
axs[1][1].set_title("Histogram of scaled dataset against standard normal")
Explanation: Now let us use scikit-learn to easily transform this dataset
End of explanation
demo_dist = 55 + np.random.randn(200) * 3.4
std_normal = np.random.randn(200)
demo_dist = sorted(demo_dist)
std_normal = sorted(std_normal)
plt.scatter(demo_dist, std_normal)
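# Alternative check (illustrative, not in the original): scipy.stats.probplot draws
# a comparable quantile-quantile plot against the normal distribution.
plt.figure()
st.probplot(demo_dist, dist="norm", plot=plt)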
Explanation: As you see above, the shape of the distribution is the same; only the values are scaled.
Assessing normality of a distribution
To assess how normal a distribution of values is, we sort the values, then plot them against sorted values of a standard normal distribution. If the values fall on a straight line, they are normally distributed; otherwise they exhibit skewness and/or kurtosis.
End of explanation |
15,301 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
how to create dummy variables for dataframe df1
| Python Code::
import pandas as pd
pd.get_dummies(df1.town)
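# Illustrative only: df1 is assumed to already exist with a 'town' column; a tiny
# hypothetical frame shows what the one-hot output looks like.
df1 = pd.DataFrame({'town': ['monroe', 'monroe', 'robinsville']})
pd.get_dummies(df1.town)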
|
15,302 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Centrality
This evaluates the Eigenvector Centrality and PageRank implemented in Python against C++-native EVZ and PageRank. The Python implementation uses SciPy (and thus ARPACK) to compute the eigenvectors, while the C++ method implements a power iteration method itself.
Please note
Step1: First, we just compute the Python EVZ and display a sample. The "scores()" method returns a list of centrality scores in order of the vertices. Thus, what you see below are the (normalized, see the respective argument) centrality scores for G.nodes()[0], G.nodes()[1], ...
Step2: We now take a look at the 10 most central vertices according to the four heuristics. Here, the centrality algorithms offer the ranking() method that returns a list of (vertex, centrality) ordered by centrality. We first compute the remaining values...
Step3: ... then display it. What you will see is a list of the 10 most important vertices and their respective centralities according to the C++ / Python version of eigenvector centrality
Step4: If everything went well, this should look at least similar. Now we do the same for the PageRank instead of EVZ
Step5: If everything went well, these should look similar, too.
To make sure that not just the top scoring vertices are comparable in both implementations, here are 10 randomly selected vertices in comparison
Step6: Finally, we take a look at the relative differences between the computed centralities for the vertices | Python Code:
cd ../../
import networkit
import pandas as pd
import random as rd
G = networkit.graphio.readGraph("input/celegans_metabolic.graph", networkit.Format.METIS)
Explanation: Centrality
This evaluates the Eigenvector Centrality and PageRank implemented in Python against C++-native EVZ and PageRank. The Python implementation uses SciPy (and thus ARPACK) to compute the eigenvectors, while the C++ method implements a power iteration method itself.
Please note: This notebook requires the pandas package. If you do not have it installed, you can use the "Centrality" notebook instead - but this one will look much nicer, so install pandas. ;)
End of explanation
evzSciPy = networkit.centrality.SciPyEVZ(G, normalized=True)
evzSciPy.run()
scoresTableEVZ = pd.DataFrame({"Python EVZ": evzSciPy.scores()[:10]})
scoresTableEVZ
Explanation: First, we just compute the Python EVZ and display a sample. The "scores()" method returns a list of centrality scores in order of the vertices. Thus, what you see below are the (normalized, see the respective argument) centrality scores for G.nodes()[0], G.nodes()[1], ...
End of explanation
evz = networkit.centrality.EigenvectorCentrality(G, True)
evz.run()
pageRank = networkit.centrality.PageRank(G, 0.95)
pageRank.run()
pageRankSciPy = networkit.centrality.SciPyPageRank(G, 0.95, normalized=True)
pageRankSciPy.run()
Explanation: We now take a look at the 10 most central vertices according to the four heuristics. Here, the centrality algorithms offer the ranking() method that returns a list of (vertex, centrality) ordered by centrality. We first compute the remaining values...
End of explanation
rankTableEVZ = pd.DataFrame({"Python EVZ": evzSciPy.ranking()[:10], "C++ EVZ": evz.ranking()[:10]})
rankTableEVZ
Explanation: ... then display it. What you will see is a list of the 10 most important vertices and their respective centralities according to the C++ / Python version of eigenvector centrality:
End of explanation
rankTablePR = pd.DataFrame({"Python PageRank": pageRankSciPy.ranking()[:10], "C++ PageRank": pageRank.ranking()[:10]})
rankTablePR
Explanation: If everything went well, this should look at least similar. Now we do the same for the PageRank instead of EVZ:
End of explanation
vertices = rd.sample(G.nodes(), 10)
randTableEVZ = pd.DataFrame({"Python EVZ": evzSciPy.scores(), "C++ EVZ": evz.scores()})
randTableEVZ.loc[vertices]
Explanation: If everything went well, these should look similar, too.
To make sure that not just the top scoring vertices are comparable in both implementations, here are 10 randomly selected vertices in comparison:
End of explanation
differences = [(max(x[0], x[1]) / min(x[0], x[1])) - 1 for x in zip(evz.scores(), evzSciPy.scores())]
print("Average relative difference: {}".format(sum(differences) / len(differences)))
print("Maximum relative difference: {}".format(max(differences)))
Explanation: Finally, we take a look at the relative differences between the computed centralities for the vertices:
End of explanation |
15,303 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
As explained in the Composing Data and Containers tutorials, HoloViews allows you to build up hierarchical containers that express the natural relationships between your data items, in whatever multidimensional space best characterizes your application domain. Once your data is in such containers, individual visualizations are then made by choosing subregions of this multidimensional space, either smaller numeric ranges (as in cropping of photographic images), or lower-dimensional subsets (as in selecting frames from a movie, or a specific movie from a large library), or both (as in selecting a cropped version of a frame from a specific movie from a large library).
In this tutorial, we show how to specify such selections, using four different (but related) operations that can act on an element e
Step1: 1D Elements
Step2: The two bins in a different color show the selected region, overlaid on top of the full histogram. We can also access the value for a specific bin in the Histogram. A continuous-valued index that falls inside a particular bin will return the corresponding value or frequency.
Step3: We can slice a Curve the same way
Step4: Here again the region in a different color is the specified subregion, and we've also marked each discrete point with a dot using the Scatter Element. As before we can also get the value for a specific sample point; whatever x-index is provided will snap to the closest sample point and return the dependent value
Step5: It is important to note that an index (or a list of indices, as for the 2D and 3D cases below) will always return the raw indexed (dependent) value, i.e. a number. A slice (indicated with
Step6: 2D and 3D Elements
Step7: However, indexing is not supported in this space, because there could be many possible points near a given set of coordinates, and finding the nearest one would require a search across potentially incommensurable dimensions, which is poorly defined and difficult to support.
Slicing in 3D works much like slicing in 2D, but indexing is not supported for the same reason as in 2D
Step8: 2D Raster and Image
Step9: Sampling
Sampling is essentially a process of indexing an Element at multiple index locations, and collecting the results. Thus any Element that can be indexed can also be sampled. Compared to regular indexing, sampling is different in that multiple indices may be supplied at the same time. Also, indexing will only return the value at that location, whereas the return type from a sampling operation is another Element type, usually either a Table or a Curve, to allow both key and value dimensions to be returned.
Sampling Elements
Sampling can use either an explicit list of samples or the samples for each dimension passed as keyword arguments.
We'll start by taking a single sample of an Image object, to make it clear how sampling and indexing are similar operations yet different in their results
Step10: Here, the output of the indexing operation is the value (0.1965823616800535) from the location closest to the specified coordinates, whereas .sample() returns a Table that lists both the coordinates and the value, and slicing (in the previous section) returns an Element of the same type, not a Table.
Next we can try sampling along only one Dimension on our 2D Image, leaving us with a 1D Element (in this case a Curve)
Step11: Sampling works on any regularly sampled Element type. For example, we can select multiple samples along the x-axis of a Curve.
Step12: Sampling HoloMaps
Sampling is often useful when you have more data than you wish to visualize or analyze at one time. First, let's create a HoloMap containing a number of observations of some noisy data.
Step13: HoloMaps also provide additional functionality to perform regular sampling on your data. In this case we'll take 3x3 subsamples of each of the Images.
Step14: By supplying bounds as a (left, bottom, right, top) tuple, we can also sample a subregion of our images
Step15: Since this kind of sampling is only well supported for continuous coordinate systems, we can only apply this kind of sampling to Image types for now.
Sampling Charts
Sampling Chart-type Elements like Curve, Scatter, Histogram is only supported by providing an explicit list of samples, since those Elements have no underlying regular grid.
Step16: Alternatively, you can always deconstruct your data into a Table (see the Columnar Data tutorial) and perform select operations instead. This is also the easiest way to sample NdElement types like Bars. Individual samples should be supplied as a set, while ranges can be specified as a two-tuple. | Python Code:
import numpy as np
import holoviews as hv
hv.notebook_extension()
%opts Layout [fig_size=125] Points [size_index=None] (s=50) Scatter3D [size_index=None]
%opts Bounds (linewidth=2 color='k') {+axiswise} Text (fontsize=16 color='k') Image (cmap='Reds')
Explanation: As explained in the Composing Data and Containers tutorials, HoloViews allows you to build up hierarchical containers that express the natural relationships between your data items, in whatever multidimensional space best characterizes your application domain. Once your data is in such containers, individual visualizations are then made by choosing subregions of this multidimensional space, either smaller numeric ranges (as in cropping of photographic images), or lower-dimensional subsets (as in selecting frames from a movie, or a specific movie from a large library), or both (as in selecting a cropped version of a frame from a specific movie from a large library).
In this tutorial, we show how to specify such selections, using four different (but related) operations that can act on an element e:
| Operation | Example syntax | Description |
|:---------------|:----------------:|:-------------|
| indexing | e[5.5], e[3,5.5] | Selecting a single data value, returning one actual numerical value from the existing data
| slice | e[3:5.5], e[3:5.5,0:1] | Selecting a contiguous portion from an Element, returning the same type of Element
| sample | e.sample(y=5.5),<br>e.sample((3,3)) | Selecting one or more regularly spaced data values, returning a new type of Element
| select | e.select(y=5.5),<br>e.select(y=(3,5.5)) | More verbose notation covering all supporting slice and index operations by dimension name.
These operations are all concerned with selecting some subset of your data values, without combining across data values (e.g. averaging) or otherwise transforming your actual data. In the Columnar Data tutorial we will look at other operations on the data that reduce, summarize, or transform the data in other ways, rather than selections as covered here.
We'll be going through each operation in detail and provide a visual illustration to help make the semantics of each operation clear. This Tutorial assumes that you are familiar with continuous and discrete coordinate systems, so please review our Continuous Coordinates Tutorial if you have not done so already.
Indexing and slicing Elements
In the Exploring Data Tutorial we saw examples of how to select individual elements embedded in a multi-dimensional space. We also briefly introduced "deep slicing" of the RGB elements to select a subregion of the images. The Continuous Coordinates Tutorial covered slicing and indexing in Elements representing continuous coordinate coordinate systems such as Image types. Here we'll be going through each operation in full detail, providing a visual illustration to help make the semantics of each operation clear.
How the Element may be indexed depends on the key dimensions (or kdims) of the Element. It is thus important to consider the nature and dimensionality of your data when choosing the Element type for it.
End of explanation
np.random.seed(42)
frequencies, edges = np.histogram(np.random.randn(100))
hist = hv.Histogram(frequencies, edges)
subregion = hist[0:1]
hist * subregion
Explanation: 1D Elements: Slicing and indexing
Certain Chart elements support both single-dimensional indexing and slicing: Scatter, Curve, Histogram, and ErrorBars. Here we'll look at how we can easily slice a Histogram to select a subregion of it:
End of explanation
hist[0.25], hist[0.5], hist[0.55]
Explanation: The two bins in a different color show the selected region, overlaid on top of the full histogram. We can also access the value for a specific bin in the Histogram. A continuous-valued index that falls inside a particular bin will return the corresponding value or frequency.
End of explanation
xs = np.linspace(0, np.pi*2, 21)
curve = hv.Curve((xs, np.sin(xs)))
subregion = curve[np.pi/2:np.pi*1.5]
curve * subregion * hv.Scatter(curve)
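# Equivalent selection by dimension name using select() (illustrative; see the
# operations table above): this matches the slice curve[np.pi/2:np.pi*1.5].
curve.select(x=(np.pi/2, np.pi*1.5))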
Explanation: We can slice a Curve the same way:
End of explanation
curve[4.05], curve[4.1], curve[4.17], curve[4.3]
Explanation: Here again the region in a different color is the specified subregion, and we've also marked each discrete point with a dot using the Scatter Element. As before we can also get the value for a specific sample point; whatever x-index is provided will snap to the closest sample point and return the dependent value:
End of explanation
curve[4:4.5]
Explanation: It is important to note that an index (or a list of indices, as for the 2D and 3D cases below) will always return the raw indexed (dependent) value, i.e. a number. A slice (indicated with :), on the other hand, will retain the Element type even in cases where the plot might not be useful, such as having only a single value, two values, or no value at all in that range:
End of explanation
r = np.arange(0, 1, 0.005)
xs, ys = (r * fn(85*np.pi*r) for fn in (np.cos, np.sin))
paths = hv.Points((xs, ys))
paths + paths[0:1, 0:1]
Explanation: 2D and 3D Elements: slicing
For data defined in a 2D space, there are 2D equivalents of the 1D Curve and Scatter types. A Points, for example, can be thought of as a number of points in a 2D space.
End of explanation
xs = np.linspace(0, np.pi*8, 201)
scatter = hv.Scatter3D((xs, np.sin(xs), np.cos(xs)))
scatter + scatter[5:10, :, 0:]
Explanation: However, indexing is not supported in this space, because there could be many possible points near a given set of coordinates, and finding the nearest one would require a search across potentially incommensurable dimensions, which is poorly defined and difficult to support.
Slicing in 3D works much like slicing in 2D, but indexing is not supported for the same reason as in 2D:
End of explanation
%opts Image (cmap='Blues') Bounds (color='red')
np.random.seed(0)
extents = (0, 0, 10, 10)
img = hv.Image(np.random.rand(10, 10), bounds=extents)
img_slice = img[1:9,4:5]
box = hv.Bounds((1,4,9,5))
img*box + img_slice
img[4.2,4.2], img[4.3,4.2], img[5.0,4.2]
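# The same subregion as img[1:9, 4:5], expressed with select() on the named
# dimensions (illustrative):
img.select(x=(1, 9), y=(4, 5))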
Explanation: 2D Raster and Image: slicing and indexing
Raster and the various other image-like objects (Images, RGB, HSV, etc.) can all sliced and indexed, as can Surface, because they all have an underlying regular grid of key dimension values:
End of explanation
img_coords = hv.Points(img.table(), extents=extents)
labeled_img = img * img_coords * hv.Points([img.closest([(5.1,4.9)])])(style=dict(color='r'))
img + labeled_img + img.sample([(5.1,4.9)])
img[5.1,4.9]
Explanation: Sampling
Sampling is essentially a process of indexing an Element at multiple index locations, and collecting the results. Thus any Element that can be indexed can also be sampled. Compared to regular indexing, sampling is different in that multiple indices may be supplied at the same time. Also, indexing will only return the value at that location, whereas the return type from a sampling operation is another Element type, usually either a Table or a Curve, to allow both key and value dimensions to be returned.
Sampling Elements
Sampling can use either an explicit list of samples or the samples for each dimension passed as keyword arguments.
We'll start by taking a single sample of an Image object, to make it clear how sampling and indexing are similar operations yet different in their results:
End of explanation
sampled = img.sample(y=5)
labeled_img = img * img_coords * hv.Points(zip(sampled['x'], [img.closest(y=5)]*10))
img + labeled_img + sampled
Explanation: Here, the output of the indexing operation is the value (0.1965823616800535) from the location closest to the specified coordinates, whereas .sample() returns a Table that lists both the coordinates and the value, and slicing (in the previous section) returns an Element of the same type, not a Table.
Next we can try sampling along only one Dimension on our 2D Image, leaving us with a 1D Element (in this case a Curve):
End of explanation
xs = np.arange(10)
samples = [2, 4, 6, 8]
curve = hv.Curve(zip(xs, np.sin(xs)))
curve_samples = hv.Scatter(zip(xs, [0] * 10)) * hv.Scatter(zip(samples, [0]*len(samples)))
curve + curve_samples + curve.sample(samples)
Explanation: Sampling works on any regularly sampled Element type. For example, we can select multiple samples along the x-axis of a Curve.
End of explanation
obs_hmap = hv.HoloMap({i: hv.Image(np.random.randn(10, 10), bounds=extents)
for i in range(3)}, key_dimensions=['Observation'])
Explanation: Sampling HoloMaps
Sampling is often useful when you have more data than you wish to visualize or analyze at one time. First, let's create a HoloMap containing a number of observations of some noisy data.
End of explanation
sample_style = dict(edgecolors='k', alpha=1)
all_samples = obs_hmap.table().to.scatter3d()(style=dict(alpha=0.15))
sampled = obs_hmap.sample((3,3))
subsamples = sampled.to.scatter3d()(style=sample_style)
all_samples * subsamples + sampled
Explanation: HoloMaps also provide additional functionality to perform regular sampling on your data. In this case we'll take 3x3 subsamples of each of the Images.
End of explanation
sampled = obs_hmap.sample((3,3), bounds=(2,5,5,10))
subsamples = sampled.to.scatter3d()(style=sample_style)
all_samples * subsamples + sampled
Explanation: By supplying bounds as a (left, bottom, right, top) tuple, we can also sample a subregion of our images:
End of explanation
xs = np.arange(10)
extents = (0, 0, 2, 10)
curve = hv.HoloMap({(i) : hv.Curve(zip(xs, np.sin(xs)*i))
for i in np.linspace(0.5, 1.5, 3)},
key_dimensions=['Observation'])
all_samples = curve.table().to.points()
sampled = curve.sample([0, 2, 4, 6, 8])
sampling = all_samples * sampled.to.points(extents=extents)(style=dict(color='r'))
sampling + sampled
Explanation: Since this kind of sampling is only well supported for continuous coordinate systems, we can only apply this kind of sampling to Image types for now.
Sampling Charts
Sampling Chart-type Elements like Curve, Scatter, Histogram is only supported by providing an explicit list of samples, since those Elements have no underlying regular grid.
End of explanation
sampled = curve.table().select(Observation=(0, 1.1), x={0, 2, 4, 6, 8})
sampling = all_samples * sampled.to.points(extents=extents)(style=dict(color='r'))
sampling + sampled
Explanation: Alternatively, you can always deconstruct your data into a Table (see the Columnar Data tutorial) and perform select operations instead. This is also the easiest way to sample NdElement types like Bars. Individual samples should be supplied as a set, while ranges can be specified as a two-tuple.
End of explanation |
15,304 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optically pumped magnetometer (OPM) data
In this dataset, electrical median nerve stimulation was delivered to the
left wrist of the subject. Somatosensory evoked fields were measured using
nine QuSpin SERF OPMs placed over the right-hand side somatomotor area. Here
we demonstrate how to localize these custom OPM data in MNE.
Step1: Prepare data for localization
First we filter and epoch the data
Step2: Examine our coordinate alignment for source localization and compute a
forward operator
Step3: Perform dipole fitting
Step4: Perform minimum-norm localization
Due to the small number of sensors, there will be some leakage of activity
to areas with low/no sensitivity. Constraining the source space to
areas we are sensitive to might be a good idea. | Python Code:
import os.path as op
import numpy as np
import mne
data_path = mne.datasets.opm.data_path()
subject = 'OPM_sample'
subjects_dir = op.join(data_path, 'subjects')
raw_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_SEF_raw.fif')
bem_fname = op.join(subjects_dir, subject, 'bem',
subject + '-5120-5120-5120-bem-sol.fif')
fwd_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_sample-fwd.fif')
coil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')
Explanation: Optically pumped magnetometer (OPM) data
In this dataset, electrical median nerve stimulation was delivered to the
left wrist of the subject. Somatosensory evoked fields were measured using
nine QuSpin SERF OPMs placed over the right-hand side somatomotor area. Here
we demonstrate how to localize these custom OPM data in MNE.
End of explanation
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(None, 90, h_trans_bandwidth=10.)
raw.notch_filter(50., notch_widths=1)
# Set epoch rejection threshold a bit larger than for SQUIDs
reject = dict(mag=2e-10)
tmin, tmax = -0.5, 1
# Find median nerve stimulator trigger
event_id = dict(Median=257)
events = mne.find_events(raw, stim_channel='STI101', mask=257, mask_type='and')
picks = mne.pick_types(raw.info, meg=True, eeg=False)
# We use verbose='error' to suppress warning about decimation causing aliasing,
# ideally we would low-pass and then decimate instead
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, verbose='error',
reject=reject, picks=picks, proj=False, decim=10,
preload=True)
evoked = epochs.average()
evoked.plot()
cov = mne.compute_covariance(epochs, tmax=0.)
del epochs, raw
Explanation: Prepare data for localization
First we filter and epoch the data:
End of explanation
bem = mne.read_bem_solution(bem_fname)
trans = mne.transforms.Transform('head', 'mri') # identity transformation
# To compute the forward solution, we must
# provide our temporary/custom coil definitions, which can be done as::
#
# with mne.use_coil_def(coil_def_fname):
# fwd = mne.make_forward_solution(
# raw.info, trans, src, bem, eeg=False, mindist=5.0,
# n_jobs=1, verbose=True)
fwd = mne.read_forward_solution(fwd_fname)
# use fixed orientation here just to save memory later
mne.convert_forward_solution(fwd, force_fixed=True, copy=False)
with mne.use_coil_def(coil_def_fname):
fig = mne.viz.plot_alignment(evoked.info, trans=trans, subject=subject,
subjects_dir=subjects_dir,
surfaces=('head', 'pial'), bem=bem)
mne.viz.set_3d_view(figure=fig, azimuth=45, elevation=60, distance=0.4,
focalpoint=(0.02, 0, 0.04))
Explanation: Examine our coordinate alignment for source localization and compute a
forward operator:
<div class="alert alert-info"><h4>Note</h4><p>The Head<->MRI transform is an identity matrix, as the
co-registration method used equates the two coordinate
systems. This mis-defines the head coordinate system
(which should be based on the LPA, Nasion, and RPA)
but should be fine for these analyses.</p></div>
End of explanation
# Fit dipoles on a subset of time points
with mne.use_coil_def(coil_def_fname):
dip_opm, _ = mne.fit_dipole(evoked.copy().crop(0.040, 0.080),
cov, bem, trans, verbose=True)
idx = np.argmax(dip_opm.gof)
print('Best dipole at t=%0.1f ms with %0.1f%% GOF'
% (1000 * dip_opm.times[idx], dip_opm.gof[idx]))
# Plot N20m dipole as an example
dip_opm.plot_locations(trans, subject, subjects_dir,
mode='orthoview', idx=idx)
Explanation: Perform dipole fitting
End of explanation
inverse_operator = mne.minimum_norm.make_inverse_operator(
evoked.info, fwd, cov, loose=0., depth=None)
del fwd, cov
method = "MNE"
snr = 3.
lambda2 = 1. / snr ** 2
stc = mne.minimum_norm.apply_inverse(
evoked, inverse_operator, lambda2, method=method,
pick_ori=None, verbose=True)
# Plot source estimate at time of best dipole fit
brain = stc.plot(hemi='rh', views='lat', subjects_dir=subjects_dir,
initial_time=dip_opm.times[idx],
clim=dict(kind='percent', lims=[99, 99.9, 99.99]),
size=(400, 300), background='w')
Explanation: Perform minimum-norm localization
Due to the small number of sensors, there will be some leakage of activity
to areas with low/no sensitivity. Constraining the source space to
areas we are sensitive to might be a good idea.
End of explanation |
15,305 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Train a tf.keras model for MNIST to be pruned and clustered
Step3: Evaluate the baseline model and save it for later usage
Step4: Prune and fine-tune the model to 50% sparsity
Apply the prune_low_magnitude() API to achieve the pruned model that is to be clustered in the next step. Refer to the pruning comprehensive guide for more information on the pruning API.
Define the model and apply the sparsity API
Note that the pre-trained model is used.
Step5: Fine-tune the model, check sparsity, and evaluate the accuracy against baseline
Fine-tune the model with pruning for 3 epochs.
Step6: Define helper functions to calculate and print the sparsity and clusters of the model.
Step7: Let's strip the pruning wrapper first, then check that the model kernels were correctly pruned.
Step8: Apply sparsity preserving clustering and check its effect on model sparsity in both cases
Next, apply sparsity preserving clustering on the pruned model and observe the number of clusters and check that the sparsity is preserved.
Step9: Strip the clustering wrapper first, then check that the model is correctly pruned and clustered.
Step10: Apply QAT and PCQAT and check effect on model clusters and sparsity
Next, apply both QAT and PCQAT on the sparse clustered model and observe that PCQAT preserves weight sparsity and clusters in your model. Note that the stripped model is passed to the QAT and PCQAT API.
Step11: See compression benefits of PCQAT model
Define helper function to get zipped model file.
Step12: Observe that applying sparsity, clustering and PCQAT to a model yields significant compression benefits.
Step13: See the persistence of accuracy from TF to TFLite
Define a helper function to evaluate the TFLite model on the test dataset.
Step14: Evaluate the model, which has been pruned, clustered and quantized, and then see that the accuracy from TensorFlow persists in the TFLite backend. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tempfile
import zipfile
import os
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/combine/pcqat_example"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/combine/pcqat_example.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/combine/pcqat_example.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/model-optimization/tensorflow_model_optimization/g3doc/guide/combine/pcqat_example.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Sparsity and cluster preserving quantization aware training (PCQAT) Keras example
Overview
This is an end to end example showing the usage of the sparsity and cluster preserving quantization aware training (PCQAT) API, part of the TensorFlow Model Optimization Toolkit's collaborative optimization pipeline.
Other pages
For an introduction to the pipeline and other available techniques, see the collaborative optimization overview page.
Contents
In the tutorial, you will:
Train a tf.keras model for the MNIST dataset from scratch.
Fine-tune the model with pruning and see the accuracy and observe that the model was successfully pruned.
Apply sparsity preserving clustering on the pruned model and observe that the sparsity applied earlier has been preserved.
Apply QAT and observe the loss of sparsity and clusters.
Apply PCQAT and observe that both sparsity and clustering applied earlier have been preserved.
Generate a TFLite model and observe the effects of applying PCQAT on it.
Compare the sizes of the different models to observe the compression benefits of applying sparsity followed by the collaborative optimization techniques of sparsity preserving clustering and PCQAT.
Compare the accurracy of the fully optimized model with the un-optimized baseline model accuracy.
Setup
You can run this Jupyter Notebook in your local virtualenv or colab. For details of setting up dependencies, please refer to the installation guide.
End of explanation
# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3),
activation=tf.nn.relu),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
opt = tf.keras.optimizers.Adam(learning_rate=1e-3)
# Train the digit classification model
model.compile(optimizer=opt,
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
validation_split=0.1,
epochs=10
)
Explanation: Train a tf.keras model for MNIST to be pruned and clustered
End of explanation
_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
_, keras_file = tempfile.mkstemp('.h5')
print('Saving model to: ', keras_file)
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
Explanation: Evaluate the baseline model and save it for later usage
End of explanation
import tensorflow_model_optimization as tfmot
prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0, frequency=100)
}
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep()
]
pruned_model = prune_low_magnitude(model, **pruning_params)
# Use smaller learning rate for fine-tuning
opt = tf.keras.optimizers.Adam(learning_rate=1e-5)
pruned_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=opt,
metrics=['accuracy'])
Explanation: Prune and fine-tune the model to 50% sparsity
Apply the prune_low_magnitude() API to achieve the pruned model that is to be clustered in the next step. Refer to the pruning comprehensive guide for more information on the pruning API.
Define the model and apply the sparsity API
Note that the pre-trained model is used.
End of explanation
# Fine-tune model
pruned_model.fit(
train_images,
train_labels,
epochs=3,
validation_split=0.1,
callbacks=callbacks)
Explanation: Fine-tune the model, check sparsity, and evaluate the accuracy against baseline
Fine-tune the model with pruning for 3 epochs.
End of explanation
def print_model_weights_sparsity(model):
for layer in model.layers:
if isinstance(layer, tf.keras.layers.Wrapper):
weights = layer.trainable_weights
else:
weights = layer.weights
for weight in weights:
if "kernel" not in weight.name or "centroid" in weight.name:
continue
weight_size = weight.numpy().size
zero_num = np.count_nonzero(weight == 0)
print(
f"{weight.name}: {zero_num/weight_size:.2%} sparsity ",
f"({zero_num}/{weight_size})",
)
def print_model_weight_clusters(model):
for layer in model.layers:
if isinstance(layer, tf.keras.layers.Wrapper):
weights = layer.trainable_weights
else:
weights = layer.weights
for weight in weights:
# ignore auxiliary quantization weights
if "quantize_layer" in weight.name:
continue
if "kernel" in weight.name:
unique_count = len(np.unique(weight))
print(
f"{layer.name}/{weight.name}: {unique_count} clusters "
)
Explanation: Define helper functions to calculate and print the sparsity and clusters of the model.
End of explanation
stripped_pruned_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
print_model_weights_sparsity(stripped_pruned_model)
Explanation: Let's strip the pruning wrapper first, then check that the model kernels were correctly pruned.
End of explanation
import tensorflow_model_optimization as tfmot
from tensorflow_model_optimization.python.core.clustering.keras.experimental import (
cluster,
)
cluster_weights = tfmot.clustering.keras.cluster_weights
CentroidInitialization = tfmot.clustering.keras.CentroidInitialization
cluster_weights = cluster.cluster_weights
clustering_params = {
'number_of_clusters': 8,
'cluster_centroids_init': CentroidInitialization.KMEANS_PLUS_PLUS,
'preserve_sparsity': True
}
sparsity_clustered_model = cluster_weights(stripped_pruned_model, **clustering_params)
sparsity_clustered_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
print('Train sparsity preserving clustering model:')
sparsity_clustered_model.fit(train_images, train_labels,epochs=3, validation_split=0.1)
Explanation: Apply sparsity preserving clustering and check its effect on model sparsity in both cases
Next, apply sparsity preserving clustering on the pruned model and observe the number of clusters and check that the sparsity is preserved.
End of explanation
stripped_clustered_model = tfmot.clustering.keras.strip_clustering(sparsity_clustered_model)
print("Model sparsity:\n")
print_model_weights_sparsity(stripped_clustered_model)
print("\nModel clusters:\n")
print_model_weight_clusters(stripped_clustered_model)
Explanation: Strip the clustering wrapper first, then check that the model is correctly pruned and clustered.
End of explanation
# QAT
qat_model = tfmot.quantization.keras.quantize_model(stripped_clustered_model)
qat_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
print('Train qat model:')
qat_model.fit(train_images, train_labels, batch_size=128, epochs=1, validation_split=0.1)
# PCQAT
quant_aware_annotate_model = tfmot.quantization.keras.quantize_annotate_model(
stripped_clustered_model)
pcqat_model = tfmot.quantization.keras.quantize_apply(
quant_aware_annotate_model,
tfmot.experimental.combine.Default8BitClusterPreserveQuantizeScheme(preserve_sparsity=True))
pcqat_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
print('Train pcqat model:')
pcqat_model.fit(train_images, train_labels, batch_size=128, epochs=1, validation_split=0.1)
print("QAT Model clusters:")
print_model_weight_clusters(qat_model)
print("\nQAT Model sparsity:")
print_model_weights_sparsity(qat_model)
print("\nPCQAT Model clusters:")
print_model_weight_clusters(pcqat_model)
print("\nPCQAT Model sparsity:")
print_model_weights_sparsity(pcqat_model)
Explanation: Apply QAT and PCQAT and check effect on model clusters and sparsity
Next, apply both QAT and PCQAT on the sparse clustered model and observe that PCQAT preserves weight sparsity and clusters in your model. Note that the stripped model is passed to the QAT and PCQAT API.
End of explanation
def get_gzipped_model_size(file):
# It returns the size of the gzipped model in kilobytes.
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(file)
return os.path.getsize(zipped_file)/1000
Explanation: See compression benefits of PCQAT model
Define helper function to get zipped model file.
End of explanation
# QAT model
converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
qat_tflite_model = converter.convert()
qat_model_file = 'qat_model.tflite'
# Save the model.
with open(qat_model_file, 'wb') as f:
f.write(qat_tflite_model)
# PCQAT model
converter = tf.lite.TFLiteConverter.from_keras_model(pcqat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
pcqat_tflite_model = converter.convert()
pcqat_model_file = 'pcqat_model.tflite'
# Save the model.
with open(pcqat_model_file, 'wb') as f:
f.write(pcqat_tflite_model)
print("QAT model size: ", get_gzipped_model_size(qat_model_file), ' KB')
print("PCQAT model size: ", get_gzipped_model_size(pcqat_model_file), ' KB')
Explanation: Observe that applying sparsity, clustering and PCQAT to a model yields significant compression benefits.
End of explanation
def eval_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for i, test_image in enumerate(test_images):
if i % 1000 == 0:
print(f"Evaluated on {i} results so far.")
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
print('\n')
# Compare prediction results with ground truth labels to calculate accuracy.
prediction_digits = np.array(prediction_digits)
accuracy = (prediction_digits == test_labels).mean()
return accuracy
Explanation: See the persistence of accuracy from TF to TFLite
Define a helper function to evaluate the TFLite model on the test dataset.
End of explanation
interpreter = tf.lite.Interpreter(pcqat_model_file)
interpreter.allocate_tensors()
pcqat_test_accuracy = eval_model(interpreter)
print('Pruned, clustered and quantized TFLite test_accuracy:', pcqat_test_accuracy)
print('Baseline TF test accuracy:', baseline_model_accuracy)
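# Optional comparison (illustrative, not in the original): accuracy of the QAT-only
# TFLite model produced above.
qat_interpreter = tf.lite.Interpreter(qat_model_file)
qat_interpreter.allocate_tensors()
print('QAT TFLite test_accuracy:', eval_model(qat_interpreter))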
Explanation: Evaluate the model, which has been pruned, clustered and quantized, and then see that the accuracy from TensorFlow persists in the TFLite backend.
End of explanation |
15,306 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 7 - Sets
This chapter will introduce a different kind of container
Step1: Curly brackets surround sets, and commas separate the elements in the set
A set can be empty (use set() to create it)
Sets do not allow duplicates
sets are unordered (the order in which you add items is not important)
A set can only contain immutable objects (for now that means only strings and integers can be added)
A set can not contain mutable objects, hence no lists or sets
Please note that sets do not allow duplicates. In the example below, the integer 1 will only be present once in the set.
Step2: Please note that sets are unordered. This means that it can occur that if you print a set, it looks different than how you created it
Step3: This also means that you can check if two sets are the same even if you don't know the order in which items were put in
Step4: Please note that sets can only contain immutable objects. Hence the following examples will work, since we are adding immutable objects
Step5: But the following example will result in an error, since we are trying to create a set with a mutable object
Step6: 2. How to add items to a set
The most common way of adding an item to a set is by using the add method. The add method has one positional parameter, namely what you are going to add to the set, and it returns None.
Step7: 3. How to extract/inspect items in a set
When you use sets, you usually want to compare the elements of different sets, for instance, to determine how much overlap there is or how many of the items in set1 are not members of set2. Sets can be used to carry out mathematical set operations like union, intersection, difference, and symmetric difference. Please take a look at this website if you prefer a more visual and more complete explanation.
You can ask Python to show you all the set methods by using dir. All the methods that do not start with '__' are relevant for you.
Step8: You observe that there are many methods defined for sets! Here we explain the two most common methods. We start with the union method.
Step9: Python shows dots (...) for the parameters of the union method. Based on the docstring, we learn that we can provide any number of sets, and Python will return the union of them.
Step10: The intersection method has works in a similar manner as the union method, but returns a new set containing only the intersection of the sets.
Step11: Since sets are unordered, you can not use an index to extract an element from a set.
Step12: 4. Using built-in functions on sets
The same range of functions that operate on lists also work with sets. We can easily get some simple calculations done with these functions
Step13: 5. An overview of set operations
There are many more operations which we can perform on sets. Here is an overview of some of them.
In order to get used to them, please call the help function on each of them (e.g., help(set.union)). This will give you the information about the positional parameters, keyword parameters, and what is returned by the method.
Step14: Before diving into some exercises, you may want to the dir built-in function again to see an overview of all set methods
Step15: Exercises
Exercise 1 | Python Code:
a_set = {1, 2, 3}
a_set
empty_set = set() # you have to use set() to create an empty set! (we will see why later)
print(empty_set)
Explanation: Chapter 7 - Sets
This chapter will introduce a different kind of container: sets. Sets are unordered lists with no duplicate entries. You might wonder why we need different types of containers. We will postpone that discussion until chapter 8.
At the end of this chapter, you will be able to:
* create a set
* add items to a set
* extract/inspect items in a set
If you want to learn more about these topics, you might find the following links useful:
* Python documentation
* A tutorial on sets
If you have questions about this chapter, please contact us (cltl.python.course@gmail.com).
1. How to create a set
It's quite simple to create a set.
End of explanation
a_set = {1, 2, 1, 1}
print(a_set)
Explanation: Curly brackets surround sets, and commas separate the elements in the set
A set can be empty (use set() to create it)
Sets do not allow duplicates
sets are unordered (the order in which you add items is not important)
A set can only contain immutable objects (for now that means only strings and integers can be added)
A set can not contain mutable objects, hence no lists or sets
Please note that sets do not allow duplicates. In the example below, the integer 1 will only be present once in the set.
End of explanation
a_set = {1, 3, 2}
print(a_set)
Explanation: Please note that sets are unordered. This means that it can occur that if you print a set, it looks different than how you created it
End of explanation
{1, 2, 3} == {2, 3, 1}
Explanation: This also means that you can check if two sets are the same even if you don't know the order in which items were put in:
End of explanation
a_set = {1, 'a'}
print(a_set)
Explanation: Please note that sets can only contain immutable objects. Hence the following examples will work, since we are adding immutable objects
End of explanation
a_set = {1, []}
Explanation: But the following example will result in an error, since we are trying to create a set with a mutable object
End of explanation
a_set = set()
a_set.add(1)
print(a_set)
a_set = set()
a_set = a_set.add(1)
print(a_set)
Explanation: 2. How to add items to a set
The most common way of adding an item to a set is by using the add method. The add method has one positional parameter, namely what you are going to add to the set, and it returns None.
End of explanation
dir(set)
Explanation: 3. How to extract/inspect items in a set
When you use sets, you usually want to compare the elements of different sets, for instance, to determine how much overlap there is or how many of the items in set1 are not members of set2. Sets can be used to carry out mathematical set operations like union, intersection, difference, and symmetric difference. Please take a look at this website if you prefer a more visual and more complete explanation.
You can ask Python to show you all the set methods by using dir. All the methods that do not start with '__' are relevant for you.
End of explanation
help(set.union)
Explanation: You observe that there are many methods defined for sets! Here we explain the two most common methods. We start with the union method.
End of explanation
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
the_union = set1.union(set2)
print(the_union)
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
set3 = {5, 6, 7, 8, 9}
the_union = set1.union(set2, set3)
print(the_union)
Explanation: Python shows dots (...) for the parameters of the union method. Based on the docstring, we learn that we can provide any number of sets, and Python will return the union of them.
End of explanation
help(set.intersection)
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
the_intersection = set1.intersection(set2)
print(the_intersection)
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
set3 = {5, 8, 9, 10}
the_intersection = set1.intersection(set2, set3)
print(the_intersection)
Explanation: The intersection method works in a similar manner to the union method, but returns a new set containing only the elements that are present in all of the sets.
End of explanation
a_set = set()
a_set.add(1)
a_set.add(2)
a_set[0]
Explanation: Since sets are unordered, you cannot use an index to extract an element from a set.
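If you do need the elements in a predictable form, a common workaround (added here as an extra illustration, not part of the original notebook) is to iterate over the set or to convert it to a sorted list first:
a_set = {1, 2, 3}
for element in a_set:            # iteration works, even though indexing does not
    print(element)
sorted_elements = sorted(a_set)  # sorted() returns a list, which does support indexing
print(sorted_elements[0])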
End of explanation
nums = {3, 41, 12, 9, 74, 15}
print(len(nums)) # number of items in a set
print(max(nums)) # highest value in a set
print(min(nums)) # lowest value in a set
print(sum(nums)) # sum of all values in a set
Explanation: 4. Using built-in functions on sets
The same range of functions that operate on lists also work with sets. We can easily get some simple calculations done with these functions:
End of explanation
set_a = {1, 2, 3}
set_b = {4, 5, 6}
an_element = 4
print(set_a)
#do some operations
set_a.add(an_element) # Add an_element to set_a
print(set_a)
set_a.update(set_b) # Add the elements of set_b to set_a
print(set_a)
set_a.pop() # Remove and return an arbitrary set element. How does this compare to the list method pop?
print(set_a)
set_a.remove(an_element) # Remove an_element from set_a
print(set_a)
Explanation: 5. An overview of set operations
There are many more operations which we can perform on sets. Here is an overview of some of them.
In order to get used to them, please call the help function on each of them (e.g., help(set.union)). This will give you the information about the positional parameters, keyword parameters, and what is returned by the method.
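For instance, two useful methods that were not demonstrated above are difference and symmetric_difference; a small extra example (not part of the original notebook) is:
set_a = {1, 2, 3}
set_b = {2, 3, 4}
print(set_a.difference(set_b))            # items in set_a but not in set_b -> {1}
print(set_a.symmetric_difference(set_b))  # items in exactly one of the two sets -> {1, 4}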
End of explanation
dir(set)
Explanation: Before diving into some exercises, you may want to use the dir built-in function again to see an overview of all set methods:
End of explanation
set_1 = {'just', 'some', 'words'}
set_2 = {'some', 'other', 'words'}
# your code here
Explanation: Exercises
Exercise 1:
Please create an empty set and use the add method to add four items to it: 'a', 'set', 'is', 'born'
Exercise 2:
Please use a built-in method to count how many items your set has
Exercise 3:
How would you remove one item from the set?
Exercise 4:
Please check which items are in both sets:
End of explanation |
15,307 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Project Euler
Step1: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
Step2: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the the numbers 1 to n inclusive.
Step3: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
Step4: Finally used your count_letters function to solve the original question. | Python Code:
def ones(one,count):
if one == 1 or one == 2 or one == 6:
count += 3
if one == 4 or one == 5 or one == 9:
count += 4
if one == 3 or one == 7 or one == 8:
count += 5
return count
def teens(teen,count):
if teen == 10:
count += 3
if teen == 11 or teen == 12:
count += 6
if teen == 15 or teen == 16:
count += 7
if teen == 13 or teen == 14 or teen == 18 or teen == 19:
count += 8
if teen == 17:
count += 9
return count
def tens(ten,count):
b = str(ten)
if b[0] == '4' or b[0] == '5' or b[0] == '6':
count += 5
one = int(b[1])
count = ones(one,count)
if b[0] == '2' or b[0] == '3' or b[0] == '8' or b[0] == '9' and b[1]:
count += 6
one = int(b[1])
count = ones(one,count)
if b[0] == '7' and b[1]:
count += 7
one = int(b[1])
count = ones(one,count)
return count
def huns(hun,count):
count += 7
a = str(hun)
b = int(a[0])
count = ones(b,count)
return count
def numberlettercounts(nummin,nummax):
nums = []
for i in range(nummin,nummax+1):
nums.append(i)
count = 0
for num in nums:
a = str(num)
if len(a) == 1:
count = ones(num,count)
if len(a) == 2 and a[0] == '1':
count = teens(num,count)
if len(a) == 2 and a[0] != '1':
count = tens(num,count)
if len(a) == 3 and a[1] == '0' and a[2]=='0':
count = huns(num,count)
if len(a) == 3 and a[1] != '0' and a[2] == '0':
count = huns(num,count)
ten = int(a[1:3])
if a[1] == '1':
count = teens(ten,count)
count += 3 #for 'and'
if a[1] != '1':
count = tens(ten,count)
count += 3 #for 'and'
if len(a) == 3 and a[1] != '0' and a[2] != '0':
count = huns(num,count)
ten = int(a[1:3])
if a[1] == '1' :
count = teens(ten,count)
count += 3 #for 'and'
if a[1] != '1' :
count = tens(ten,count)
count += 3 #for 'and'
if len(a) == 3 and a[1] == '0' and a[2] != '0':
count = huns(num,count)
count += 3 #for 'and'
c = int(a[2])
count = ones(c,count)
if len(a) == 4:
count += 11
print (count)
numberlettercounts(1,1000)
def number_to_words(n, join = True):
units = ['','one','two','three','four','five','six','seven','eight','nine']
teens = ['','eleven','twelve','thirteen','fourteen','fifteen','sixteen', \
'seventeen','eighteen','nineteen']
tens = ['','ten','twenty','thirty','forty','fifty','sixty','seventy', \
'eighty','ninety']
thousands = ['','thousand']
words = []
if n==0: words.append('zero')
else:
nStr = '%d'%n
nStrLen = len(nStr)
groups = (nStrLen+2)/3
nStr = nStr.zfill(int(groups)*3)
for i in range(0,int(groups)*3,3):
x,y,z = int(nStr[i]),int(nStr[i+1]),int(nStr[i+2])
g = int(groups)-(i/3+1)
if x>=1:
words.append(units[x])
words.append('hundred')
if y>1:
words.append(tens[y])
if z>=1: words.append(units[z])
elif y==1:
if z>=1: words.append(teens[z])
else: words.append(tens[y])
else:
if z>=1: words.append(units[z])
if (int(g)>=1) and ((int(x)+int(y)+int(z))>0): words.append(thousands[int(g)])
if join: return ' '.join(words)
return words
number_to_words(999)
Explanation: Project Euler: Problem 17
https://projecteuler.net/problem=17
If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?
NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.
First write a number_to_words(n) function that takes an integer n between 1 and 1000 inclusive and returns a list of words for the number as described above
End of explanation
number_to_words(999)
expected ='nine hundred ninety nine'
number_to_words(0)
expected2 ='zero'
number_to_words(1000)
expected3 ='one thousand'
number_to_words(5)
expected4 ='five'
assert (number_to_words(999) == expected)
assert (number_to_words(0) == expected2)
assert (number_to_words(1000) == expected3)
assert (number_to_words(5) == expected4)
assert True # use this for grading the number_to_words tests.
Explanation: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
End of explanation
def ones(one,count):
if one == 1 or one == 2 or one == 6:
count += 3
if one == 4 or one == 5 or one == 9:
count += 4
if one == 3 or one == 7 or one == 8:
count += 5
return count
def teens(teen,count):
if teen == 10:
count += 3
if teen == 11 or teen == 12:
count += 6
if teen == 15 or teen == 16:
count += 7
if teen == 13 or teen == 14 or teen == 18 or teen == 19:
count += 8
if teen == 17:
count += 9
return count
def tens(ten,count):
b = str(ten)
if b[0] == '4' or b[0] == '5' or b[0] == '6':
count += 5
one = int(b[1])
count = ones(one,count)
if b[0] == '2' or b[0] == '3' or b[0] == '8' or b[0] == '9' and b[1]:
count += 6
one = int(b[1])
count = ones(one,count)
if b[0] == '7' and b[1]:
count += 7
one = int(b[1])
count = ones(one,count)
return count
def huns(hun,count):
count += 7
a = str(hun)
b = int(a[0])
count = ones(b,count)
return count
#def count_letters(n): <--I didn't use this...
def count_letters(nummin,nummax):
nums = []
for i in range(nummin,nummax+1):
nums.append(i)
count = 0
for num in nums:
a = str(num)
if len(a) == 1:
count = ones(num,count)
if len(a) == 2 and a[0] == '1':
count = teens(num,count)
if len(a) == 2 and a[0] != '1':
count = tens(num,count)
if len(a) == 3 and a[1] == '0' and a[2]=='0':
count = huns(num,count)
if len(a) == 3 and a[1] != '0' and a[2] == '0':
count = huns(num,count)
ten = int(a[1:3])
if a[1] == '1':
count = teens(ten,count)
count += 3 #for 'and'
if a[1] != '1':
count = tens(ten,count)
count += 3 #for 'and'
if len(a) == 3 and a[1] != '0' and a[2] != '0':
count = huns(num,count)
ten = int(a[1:3])
if a[1] == '1' :
count = teens(ten,count)
count += 3 #for 'and'
if a[1] != '1' :
count = tens(ten,count)
count += 3 #for 'and'
if len(a) == 3 and a[1] == '0' and a[2] != '0':
count = huns(num,count)
count += 3 #for 'and'
c = int(a[2])
count = ones(c,count)
if len(a) == 4:
count += 11
return (count)
count_letters(0,342)
Explanation: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.
End of explanation
expected1=3
assert(count_letters(0,1) == expected1)
assert True # use this for grading the count_letters tests.
Explanation: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
End of explanation
count_letters(1,1000)
assert True # use this for grading the answer to the original question.
Explanation: Finally, use your count_letters function to solve the original question.
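As an optional cross-check (not part of the original solution), a similar total can be obtained by summing the letters produced by number_to_words for every number; note that this helper does not insert the British 'and' after the hundreds, which count_letters adds separately:
total = sum(len(number_to_words(n).replace(' ', '')) for n in range(1, 1001))
print(total)  # letter count without any 'and'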
End of explanation |
15,308 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decision Trees in Practice
In this assignment we will explore various techniques for preventing overfitting in decision trees. We will extend the implementation of the binary decision trees that we implemented in the previous assignment. You will have to use your solutions from this previous assignment and extend them.
In this assignment you will
Step1: Load LendingClub Dataset
This assignment will use the LendingClub dataset used in the previous two assignments.
Step2: As before, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
Step3: We will be using the same 4 categorical features as in the previous assignment
Step4: Subsample dataset to make sure classes are balanced
Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We used seed = 1 so everyone gets the same results.
Step5: Note
Step6: The feature columns now look like this
Step7: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=1 so that everyone gets the same result.
Step8: Early stopping methods for decision trees
In this section, we will extend the binary tree implementation from the previous assignment in order to handle some early stopping conditions. Recall the 3 early stopping methods that were discussed in lecture
Step9: Quiz question
Step10: Quiz question
Step11: We then wrote a function best_splitting_feature that finds the best feature to split on given the data and a list of features to consider.
Please copy and paste your best_splitting_feature code here.
Step12: Finally, recall the function create_leaf from the previous assignment, which creates a leaf node given a set of target values.
Please copy and paste your create_leaf code here.
Step13: Incorporating new early stopping conditions in binary decision tree implementation
Now, you will implement a function that builds a decision tree handling the three early stopping conditions described in this assignment. In particular, you will write code to detect early stopping conditions 2 and 3. You implemented above the functions needed to detect these conditions. The 1st early stopping condition, max_depth, was implemented in the previous assigment and you will not need to reimplement this. In addition to these early stopping conditions, the typical stopping conditions of having no mistakes or no more features to split on (which we denote by "stopping conditions" 1 and 2) are also included as in the previous assignment.
Implementing early stopping condition 2
Step14: Here is a function to count the nodes in your tree
Step15: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
Step16: Build a tree!
Now that your code is working, we will train a tree model on the train_data with
* max_depth = 6
* min_node_size = 100,
* min_error_reduction = 0.0
Warning
Step17: Let's now train a tree model ignoring early stopping conditions 2 and 3 so that we get the same tree as in the previous assignment. To ignore these conditions, we set min_node_size=0 and min_error_reduction=-1 (a negative value).
Step18: Making predictions
Recall that in the previous assignment you implemented a function classify to classify a new point x using a given tree.
Please copy and paste your classify code here.
Step19: Now, let's consider the first example of the validation set and see what the my_decision_tree_new model predicts for this data point.
Step20: Let's add some annotations to our prediction to see what the prediction path was that led to this predicted class
Step21: Let's now recall the prediction path for the decision tree learned in the previous assignment, which we recreated here as my_decision_tree_old.
Step22: Quiz question
Step23: Now, let's use this function to evaluate the classification error of my_decision_tree_new on the validation_set.
Step24: Now, evaluate the validation error using my_decision_tree_old.
Step25: Quiz question
Step26: Evaluating the models
Let us evaluate the models on the train and validation data. Let us start by evaluating the classification error on the training data
Step27: Now evaluate the classification error on the validation data.
Quiz Question
Step28: Compute the number of nodes in model_1, model_2, and model_3.
Quiz question
Step29: Calculate the accuracy of each model (model_4, model_5, or model_6) on the validation set.
Step30: Using the count_leaves function, compute the number of leaves in each of the models (model_4, model_5, and model_6).
Quiz Question | Python Code:
import graphlab
Explanation: Decision Trees in Practice
In this assignment we will explore various techniques for preventing overfitting in decision trees. We will extend the implementation of the binary decision trees that we implemented in the previous assignment. You will have to use your solutions from this previous assignment and extend them.
In this assignment you will:
Implement binary decision trees with different early stopping methods.
Compare models with different stopping parameters.
Visualize the concept of overfitting in decision trees.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create.
End of explanation
loans = graphlab.SFrame('lending-club-data.gl/')
Explanation: Load LendingClub Dataset
This assignment will use the LendingClub dataset used in the previous two assignments.
End of explanation
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
Explanation: As before, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
End of explanation
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans = loans[features + [target]]
Explanation: We will be using the same 4 categorical features as in the previous assignment:
1. grade of the loan
2. the length of the loan term
3. the home ownership status: own, mortgage, rent
4. number of years of employment.
In the dataset, each of these features is a categorical feature. Since we are building a binary decision tree, we will have to convert this to binary data in a subsequent section using 1-hot encoding.
End of explanation
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Since there are less risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
Explanation: Subsample dataset to make sure classes are balanced
Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We used seed = 1 so everyone gets the same results.
End of explanation
loans_data = risky_loans.append(safe_loans)
for feature in features:
loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)
# Change None's to 0's
for column in loans_data_unpacked.column_names():
loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)
loans_data.remove_column(feature)
loans_data.add_columns(loans_data_unpacked)
Explanation: Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.
Transform categorical data into binary features
Since we are implementing binary decision trees, we transform our categorical data into binary data using 1-hot encoding, just as in the previous assignment. Here is the summary of that discussion:
For instance, the home_ownership feature represents the home ownership status of the loanee, which is either own, mortgage or rent. For example, if a data point has the feature
{'home_ownership': 'RENT'}
we want to turn this into three features:
{
'home_ownership = OWN' : 0,
'home_ownership = MORTGAGE' : 0,
'home_ownership = RENT' : 1
}
Since this code requires a few Python and GraphLab tricks, feel free to use this block of code as is. Refer to the API documentation for a deeper understanding.
End of explanation
features = loans_data.column_names()
features.remove('safe_loans') # Remove the response variable
features
Explanation: The feature columns now look like this:
End of explanation
train_data, validation_set = loans_data.random_split(.8, seed=1)
Explanation: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=1 so that everyone gets the same result.
End of explanation
def reached_minimum_node_size(data, min_node_size):
# Return True if the number of data points is less than or equal to the minimum node size.
## YOUR CODE HERE
Explanation: Early stopping methods for decision trees
In this section, we will extend the binary tree implementation from the previous assignment in order to handle some early stopping conditions. Recall the 3 early stopping methods that were discussed in lecture:
Reached a maximum depth. (set by parameter max_depth).
Reached a minimum node size. (set by parameter min_node_size).
Don't split if the gain in error reduction is too small. (set by parameter min_error_reduction).
For the rest of this assignment, we will refer to these three as early stopping conditions 1, 2, and 3.
Early stopping condition 1: Maximum depth
Recall that we already implemented the maximum depth stopping condition in the previous assignment. In this assignment, we will experiment with this condition a bit more and also write code to implement the 2nd and 3rd early stopping conditions.
We will be reusing code from the previous assignment and then building upon this. We will alert you when you reach a function that was part of the previous assignment so that you can simply copy and paste your previous code.
Early stopping condition 2: Minimum node size
The function reached_minimum_node_size takes 2 arguments:
The data (from a node)
The minimum number of data points that a node is allowed to split on, min_node_size.
This function simply calculates whether the number of data points at a given node is less than or equal to the specified minimum node size. This function will be used to detect this early stopping condition in the decision_tree_create function.
Fill in the parts of the function below where you find ## YOUR CODE HERE. There is one instance in the function below.
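A minimal way to fill in the single blank (one possible solution, assuming data supports len) is:
def reached_minimum_node_size(data, min_node_size):
    # Return True if the number of data points is less than or equal to the minimum node size.
    return len(data) <= min_node_size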
End of explanation
def error_reduction(error_before_split, error_after_split):
# Return the error before the split minus the error after the split.
## YOUR CODE HERE
Explanation: Quiz question: Given an intermediate node with 6 safe loans and 3 risky loans, if the min_node_size parameter is 10, what should the tree learning algorithm do next?
Early stopping condition 3: Minimum gain in error reduction
The function error_reduction takes 2 arguments:
The error before a split, error_before_split.
The error after a split, error_after_split.
This function computes the gain in error reduction, i.e., the difference between the error before the split and that after the split. This function will be used to detect this early stopping condition in the decision_tree_create function.
Fill in the parts of the function below where you find ## YOUR CODE HERE. There is one instance in the function below.
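One possible completion of the blank is simply the difference between the two errors:
def error_reduction(error_before_split, error_after_split):
    # Return the error before the split minus the error after the split.
    return error_before_split - error_after_split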
End of explanation
def intermediate_node_num_mistakes(labels_in_node):
# Corner case: If labels_in_node is empty, return 0
if len(labels_in_node) == 0:
return 0
# Count the number of 1's (safe loans)
## YOUR CODE HERE
# Count the number of -1's (risky loans)
## YOUR CODE HERE
# Return the number of mistakes that the majority classifier makes.
## YOUR CODE HERE
Explanation: Quiz question: Assume an intermediate node has 6 safe loans and 3 risky loans. For each of 4 possible features to split on, the error reduction is 0.0, 0.05, 0.1, and 0.14, respectively. If the minimum gain in error reduction parameter is set to 0.2, what should the tree learning algorithm do next?
Grabbing binary decision tree helper functions from past assignment
Recall from the previous assignment that we wrote a function intermediate_node_num_mistakes that calculates the number of misclassified examples when predicting the majority class. This is used to help determine which feature is best to split on at a given node of the tree.
Please copy and paste your code for intermediate_node_num_mistakes here.
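For reference, one possible implementation (assuming labels_in_node is an SArray or list of +1/-1 labels) looks like this; the majority classifier gets exactly the minority class wrong:
def intermediate_node_num_mistakes(labels_in_node):
    # Corner case: if labels_in_node is empty, return 0
    if len(labels_in_node) == 0:
        return 0
    # Count the number of +1's (safe loans) and -1's (risky loans)
    num_safe = sum(1 for label in labels_in_node if label == +1)
    num_risky = sum(1 for label in labels_in_node if label == -1)
    # The majority-class prediction misclassifies the minority class
    return min(num_safe, num_risky)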
End of explanation
def best_splitting_feature(data, features, target):
target_values = data[target]
best_feature = None # Keep track of the best feature
best_error = 10 # Keep track of the best error so far
# Note: Since error is always <= 1, we should intialize it with something larger than 1.
# Convert to float to make sure error gets computed correctly.
num_data_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
left_split = data[data[feature] == 0]
# The right split will have all data points where the feature value is 1
## YOUR CODE HERE
right_split =
# Calculate the number of misclassified examples in the left split.
# Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
# YOUR CODE HERE
left_mistakes =
# Calculate the number of misclassified examples in the right split.
## YOUR CODE HERE
right_mistakes =
# Compute the classification error of this split.
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
## YOUR CODE HERE
error =
# If this is the best error we have found so far, store the feature as best_feature and the error as best_error
## YOUR CODE HERE
if error < best_error:
return best_feature # Return the best feature we found
Explanation: We then wrote a function best_splitting_feature that finds the best feature to split on given the data and a list of features to consider.
Please copy and paste your best_splitting_feature code here.
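For reference, the blanks in the skeleton above can be filled in roughly as follows (a sketch that mirrors the pre-written left-split code):
right_split = data[data[feature] == 1]
left_mistakes = intermediate_node_num_mistakes(left_split[target])
right_mistakes = intermediate_node_num_mistakes(right_split[target])
error = (left_mistakes + right_mistakes) / num_data_points
if error < best_error:
    best_error = error
    best_feature = feature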
End of explanation
def create_leaf(target_values):
# Create a leaf node
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': } ## YOUR CODE HERE
# Count the number of data points that are +1 and -1 in this node.
num_ones = len(target_values[target_values == +1])
num_minus_ones = len(target_values[target_values == -1])
# For the leaf node, set the prediction to be the majority class.
# Store the predicted class (1 or -1) in leaf['prediction']
if num_ones > num_minus_ones:
leaf['prediction'] = ## YOUR CODE HERE
else:
leaf['prediction'] = ## YOUR CODE HERE
# Return the leaf node
return leaf
Explanation: Finally, recall the function create_leaf from the previous assignment, which creates a leaf node given a set of target values.
Please copy and paste your create_leaf code here.
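The blanks amount to marking the node as a leaf and predicting the majority class; one possible completion is to set 'is_leaf': True in the dictionary and then:
if num_ones > num_minus_ones:
    leaf['prediction'] = +1
else:
    leaf['prediction'] = -1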
End of explanation
def decision_tree_create(data, features, target, current_depth = 0,
max_depth = 10, min_node_size=1,
min_error_reduction=0.0):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1: All nodes are of the same type.
if intermediate_node_num_mistakes(target_values) == 0:
print "Stopping condition 1 reached. All data points have the same target value."
return create_leaf(target_values)
# Stopping condition 2: No more features to split on.
if remaining_features == []:
print "Stopping condition 2 reached. No remaining features."
return create_leaf(target_values)
# Early stopping condition 1: Reached max depth limit.
if current_depth >= max_depth:
print "Early stopping condition 1 reached. Reached maximum depth."
return create_leaf(target_values)
# Early stopping condition 2: Reached the minimum node size.
# If the number of data points is less than or equal to the minimum size, return a leaf.
if ## YOUR CODE HERE
print "Early stopping condition 2 reached. Reached minimum node size."
return ## YOUR CODE HERE
# Find the best splitting feature
splitting_feature = best_splitting_feature(data, features, target)
# Split on the best feature that we found.
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
# Early stopping condition 3: Minimum error reduction
# Calculate the error before splitting (number of misclassified examples
# divided by the total number of examples)
error_before_split = intermediate_node_num_mistakes(target_values) / float(len(data))
# Calculate the error after splitting (number of misclassified examples
# in both groups divided by the total number of examples)
left_mistakes = ## YOUR CODE HERE
right_mistakes = ## YOUR CODE HERE
error_after_split = (left_mistakes + right_mistakes) / float(len(data))
# If the error reduction is LESS THAN OR EQUAL TO min_error_reduction, return a leaf.
if ## YOUR CODE HERE
print "Early stopping condition 3 reached. Minimum error reduction."
return ## YOUR CODE HERE
remaining_features.remove(splitting_feature)
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Repeat (recurse) on left and right subtrees
left_tree = decision_tree_create(left_split, remaining_features, target,
current_depth + 1, max_depth, min_node_size, min_error_reduction)
## YOUR CODE HERE
right_tree =
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
Explanation: Incorporating new early stopping conditions in binary decision tree implementation
Now, you will implement a function that builds a decision tree handling the three early stopping conditions described in this assignment. In particular, you will write code to detect early stopping conditions 2 and 3. You implemented above the functions needed to detect these conditions. The 1st early stopping condition, max_depth, was implemented in the previous assignment and you will not need to reimplement this. In addition to these early stopping conditions, the typical stopping conditions of having no mistakes or no more features to split on (which we denote by "stopping conditions" 1 and 2) are also included as in the previous assignment.
Implementing early stopping condition 2: minimum node size:
Step 1: Use the function reached_minimum_node_size that you implemented earlier to write an if condition to detect whether we have hit the base case, i.e., the node does not have enough data points and should be turned into a leaf. Don't forget to use the min_node_size argument.
Step 2: Return a leaf. This line of code should be the same as the other (pre-implemented) stopping conditions.
Implementing early stopping condition 3: minimum error reduction:
Note: This has to come after finding the best splitting feature so we can calculate the error after splitting in order to calculate the error reduction.
Step 1: Calculate the classification error before splitting. Recall that classification error is defined as:
$$
\text{classification error} = \frac{\text{# mistakes}}{\text{# total examples}}
$$
* Step 2: Calculate the classification error after splitting. This requires calculating the number of mistakes in the left and right splits, and then dividing by the total number of examples.
* Step 3: Use the function error_reduction that you implemented earlier to write an if condition to detect whether the reduction in error is less than the constant provided (min_error_reduction). Don't forget to use that argument.
* Step 4: Return a leaf. This line of code should be the same as the other (pre-implemented) stopping conditions.
Fill in the places where you find ## YOUR CODE HERE. There are seven places in this function for you to fill in.
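For example, the two new early-stopping checks can be written along these lines (a sketch using the helper functions defined above):
# Early stopping condition 2 (minimum node size):
if reached_minimum_node_size(data, min_node_size):
    print "Early stopping condition 2 reached. Reached minimum node size."
    return create_leaf(target_values)
# Early stopping condition 3 (minimum error reduction), after computing the errors:
if error_reduction(error_before_split, error_after_split) <= min_error_reduction:
    print "Early stopping condition 3 reached. Minimum error reduction."
    return create_leaf(target_values)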
End of explanation
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
Explanation: Here is a function to count the nodes in your tree:
End of explanation
small_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2,
min_node_size = 10, min_error_reduction=0.0)
if count_nodes(small_decision_tree) == 7:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found :', count_nodes(small_decision_tree)
print 'Number of nodes that should be there : 5'
Explanation: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
End of explanation
my_decision_tree_new = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 100, min_error_reduction=0.0)
Explanation: Build a tree!
Now that your code is working, we will train a tree model on the train_data with
* max_depth = 6
* min_node_size = 100,
* min_error_reduction = 0.0
Warning: This code block may take a minute to learn.
End of explanation
my_decision_tree_old = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
Explanation: Let's now train a tree model ignoring early stopping conditions 2 and 3 so that we get the same tree as in the previous assignment. To ignore these conditions, we set min_node_size=0 and min_error_reduction=-1 (a negative value).
End of explanation
def classify(tree, x, annotate = False):
# if the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
### YOUR CODE HERE
Explanation: Making predictions
Recall that in the previous assignment you implemented a function classify to classify a new point x using a given tree.
Please copy and paste your classify code here.
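The missing branch simply mirrors the left branch (one possible completion):
return classify(tree['right'], x, annotate)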
End of explanation
validation_set[0]
print 'Predicted class: %s ' % classify(my_decision_tree_new, validation_set[0])
Explanation: Now, let's consider the first example of the validation set and see what the my_decision_tree_new model predicts for this data point.
End of explanation
classify(my_decision_tree_new, validation_set[0], annotate = True)
Explanation: Let's add some annotations to our prediction to see what the prediction path was that led to this predicted class:
End of explanation
classify(my_decision_tree_old, validation_set[0], annotate = True)
Explanation: Let's now recall the prediction path for the decision tree learned in the previous assignment, which we recreated here as my_decision_tree_old.
End of explanation
def evaluate_classification_error(tree, data):
# Apply classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x))
# Once you've made the prediction, calculate the classification error
## YOUR CODE HERE
Explanation: Quiz question: For my_decision_tree_new trained with max_depth = 6, min_node_size = 100, min_error_reduction=0.0, is the prediction path for validation_set[0] shorter, longer, or the same as for my_decision_tree_old that ignored the early stopping conditions 2 and 3?
Quiz question: For my_decision_tree_new trained with max_depth = 6, min_node_size = 100, min_error_reduction=0.0, is the prediction path for any point always shorter, always longer, always the same, shorter or the same, or longer or the same as for my_decision_tree_old that ignored the early stopping conditions 2 and 3?
Quiz question: For a tree trained on any dataset using max_depth = 6, min_node_size = 100, min_error_reduction=0.0, what is the maximum number of splits encountered while making a single prediction?
Evaluating the model
Now let us evaluate the model that we have trained. You implemented this evaluation in the function evaluate_classification_error from the previous assignment.
Please copy and paste your evaluate_classification_error code here.
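One way to complete the function (assuming the target column is 'safe_loans', as defined earlier) is to compare the predictions against the true labels and divide the number of mistakes by the number of rows:
mistakes = (prediction != data['safe_loans']).sum()
return mistakes / float(len(data))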
End of explanation
evaluate_classification_error(my_decision_tree_new, validation_set)
Explanation: Now, let's use this function to evaluate the classification error of my_decision_tree_new on the validation_set.
End of explanation
evaluate_classification_error(my_decision_tree_old, validation_set)
Explanation: Now, evaluate the validation error using my_decision_tree_old.
End of explanation
model_1 =
model_2 =
model_3 =
Explanation: Quiz question: Is the validation error of the new decision tree (using early stopping conditions 2 and 3) lower than, higher than, or the same as that of the old decision tree from the previous assignment?
Exploring the effect of max_depth
We will compare three models trained with different values of the stopping criterion. We intentionally picked models at the extreme ends (too small, just right, and too large).
Train three models with these parameters:
model_1: max_depth = 2 (too small)
model_2: max_depth = 6 (just right)
model_3: max_depth = 14 (may be too large)
For each of these three, we set min_node_size = 0 and min_error_reduction = -1.
Note: Each tree can take up to a few minutes to train. In particular, model_3 will probably take the longest to train.
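Using the decision_tree_create function from above, the three models can be trained along these lines (sketch):
model_1 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2,
                               min_node_size = 0, min_error_reduction = -1)
model_2 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
                               min_node_size = 0, min_error_reduction = -1)
model_3 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 14,
                               min_node_size = 0, min_error_reduction = -1)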
End of explanation
print "Training data, classification error (model 1):", evaluate_classification_error(model_1, train_data)
print "Training data, classification error (model 2):", evaluate_classification_error(model_2, train_data)
print "Training data, classification error (model 3):", evaluate_classification_error(model_3, train_data)
Explanation: Evaluating the models
Let us evaluate the models on the train and validation data. Let us start by evaluating the classification error on the training data:
End of explanation
def count_leaves(tree):
if tree['is_leaf']:
return 1
return count_leaves(tree['left']) + count_leaves(tree['right'])
Explanation: Now evaluate the classification error on the validation data.
Quiz Question: Which tree has the smallest error on the validation data?
Quiz Question: Does the tree with the smallest error in the training data also have the smallest error in the validation data?
Quiz Question: Is it always true that the tree with the lowest classification error on the training set will result in the lowest classification error in the validation set?
Measuring the complexity of the tree
Recall in the lecture that we talked about deeper trees being more complex. We will measure the complexity of the tree as
complexity(T) = number of leaves in the tree T
Here, we provide a function count_leaves that counts the number of leaves in a tree. Using this implementation, compute the number of leaves in model_1, model_2, and model_3.
End of explanation
model_4 =
model_5 =
model_6 =
Explanation: Compute the number of nodes in model_1, model_2, and model_3.
Quiz question: Which tree has the largest complexity?
Quiz question: Is it always true that the most complex tree will result in the lowest classification error in the validation_set?
Exploring the effect of min_error
We will compare three models trained with different values of the stopping criterion. We intentionally picked models at the extreme ends (negative, just right, and too positive).
Train three models with these parameters:
1. model_4: min_error_reduction = -1 (ignoring this early stopping condition)
2. model_5: min_error_reduction = 0 (just right)
3. model_6: min_error_reduction = 5 (too positive)
For each of these three, we set max_depth = 6, and min_node_size = 0.
Note: Each tree can take up to 30 seconds to train.
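Analogously to the max_depth experiment above, these can be trained as (sketch):
model_4 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
                               min_node_size = 0, min_error_reduction = -1)
model_5 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
                               min_node_size = 0, min_error_reduction = 0)
model_6 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
                               min_node_size = 0, min_error_reduction = 5)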
End of explanation
print "Validation data, classification error (model 4):", evaluate_classification_error(model_4, validation_set)
print "Validation data, classification error (model 5):", evaluate_classification_error(model_5, validation_set)
print "Validation data, classification error (model 6):", evaluate_classification_error(model_6, validation_set)
Explanation: Calculate the accuracy of each model (model_4, model_5, or model_6) on the validation set.
End of explanation
model_7 =
model_8 =
model_9 =
Explanation: Using the count_leaves function, compute the number of leaves in each of the models (model_4, model_5, and model_6).
Quiz Question: Using the complexity definition above, which model (model_4, model_5, or model_6) has the largest complexity?
Did this match your expectation?
Quiz Question: model_4 and model_5 have similar classification error on the validation set, but model_5 has lower complexity. Should you pick model_5 over model_4?
Exploring the effect of min_node_size
We will compare three models trained with different values of the stopping criterion. Again, we intentionally picked models at the extreme ends (too small, just right, and too large).
Train three models with these parameters:
1. model_7: min_node_size = 0 (too small)
2. model_8: min_node_size = 2000 (just right)
3. model_9: min_node_size = 50000 (too large)
For each of these three, we set max_depth = 6, and min_error_reduction = -1.
Note: Each tree can take up to 30 seconds to train.
End of explanation |
15,309 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Objektorientiere Programmierung
Step1: Wie wir sehen, ist die Eigenschaft _val durchaus von außerhalb verfügbar. Allerdings signalisiert das Underline, dass vom Programmierer der Klasse nicht vorgesehen ist, dass dieser Wert direkt verwendet wird (sondern z.B. nur über die Methoden get_val() und set_val()). Wenn ein anderer Programmierer der Meinung ist, dass er direkten Zugriff auf die Eigenschaft _val braucht, liegt das in seiner Verantwortung (wird aber von Python nicht unterbunden). Man spricht hier von protection by convention. Python-Programmierer halten sich in aller Regel an diese Konvention, weshalb dieser Art von "Schutz" weit verbreitet ist.
Unsichtbare Eigenschaften und Methoden
Für paranoide Programmierer bietet Python die Möglichkeit, den Zugriff von außerhalb des Objekt komplett zu unterbinden, indem man statt eines Unterstrichts zwei Unterstriche vor den Namen setzt.
Step2: Hier sehen wir, dass die Eigenschaft __val von außerhalb der Klasse gar nicht sichtbar und damit auch nicht veränderbar ist. Innerhalb der Klasse ist sie jedoch normal verfügbar. Das kann zu Problemen führen
Step3: Da __val nur innerhalb der Basisklasse angelegt wurde, hat die abgeleitete Klasse keinen Zugriff darauf.
Datenkapelung mit Properties
Wie wir gesehen haben, werden für den Zugriff auf geschützte Eigenschaften eigene Getter- und Setter-Methoden geschrieben, über die der Wert einer Eigenschaft kontrolliert verändert werden kann. Programmieren wir eine Student-Klasse, in der eine Note gespeichert werden soll. Um den Zugriff auf diese Eigenschaft zu kontrollieren, schreiben wir eine Setter- und eine Getter-Methode.
Step4: Wir können jetzt die Note setzen und auslesen
Step5: Allerdings ist der direkte Zugriff auf grade immer noch möglich
Step6: Wie wir bereits gesehen haben, können wir das verhindern, indem wir die Eigenschaft grade auf __grade umbenennen.
Properties setzen via Getter und Setter
Python bietet eine Möglichkeit, das Setzen und Auslesen von Objekteigenschaften automatisch durch Methoden zu leiten. Dazu werden der Getter und Setter an die poperty-Funktion übergeben (letzte Zeile der Klasse).
Step7: Wie wir sehen, können wir die Eigenschaft des Objekts direkt setzen und auslesen, der Zugriff wird aber von Python jeweils durch den Setter und Getter geleitet.
Wenn wir nur eine Methode (den Getter) als Argument an die property()-Funktion übergeben, haben wir eine Eigenschaft, die sich nur auslesen, aber nicht verändern lässt.
Step8: Wir können also auf unsere via property() definierte Eigenschaften zugreifen. Wir können grade aber nicht verwenden,
um die Eigenschaft zu verändern
Step9: Der @Property-Dekorator
Dekoratoren erweitern dynamisch die Funktionalität von Funktionen indem sie diese (im Hintergrund) in eine weitere Funktion verpacken. Die Anwendung eines Dekorators ist einfach
Step10: Klassenvariablen (Static members)
Wir haben gelernt, dass Klassen Eigenschaften und Methoden von Objekten festlegen. Allerdings (und das kann zu Beginn etwas verwirrend sein), sind Klassen selbst auch Objekte, die Eigenschaften und Methoden haben. Hier ein Beispiel
Step11: Die eine Eigenschaft hängt also am Klassenobjekt, die andere am aus der Klasse erzeugten Objekt. Solche Klassenobjekte können nützlich sein, weil sie in allen aus der Klasse erzeugten Objekten verfügbar sind (sogar via self, solange das Objekt nicht selbst eine gleichnamige Eigenschaft hat
Step12: Man kann das auch so schreiben, wodurch der Counter auch für Subklassen funktioniert | Python Code:
class MyClass:
def __init__(self, val):
self.set_val(val)
def get_val(self):
return self._val
def set_val(self, val):
if val > 0:
self._val = val
else:
raise ValueError('val must be greater 0')
myclass = MyClass(27)
myclass._val
Explanation: Object-Oriented Programming: In Depth
This notebook takes a closer look at some concepts of object-oriented programming, especially with regard to Python.
Protected variables and methods (encapsulation)
Protected variables and methods
We have learned that one of the main advantages of object orientation is data encapsulation. This means that access to attributes and methods can be restricted. Some programming languages such as Java mark these access rights explicitly and are very strict about them. The following variable declaration in Java restricts access to a variable to the class itself:
~~~
private int score = 0;
~~~
As a result, the value of score can only be read or changed from within the class.
~~~
public String username;
~~~
In contrast, this allows unrestricted access to the attribute username.
This mechanism also exists in Python, but things are handled in a more relaxed way here: an underscore placed in front of a variable or method name means that this part of the object should not be used, and above all not changed, from outside the object.
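As a small extra illustration (not part of the original notebook): a single leading underscore is purely a convention, while a double leading underscore additionally triggers name mangling:
class Account:
    def __init__(self):
        self._balance = 0   # convention: "please do not touch from outside"
        self.__pin = 1234   # name is mangled to _Account__pin

acc = Account()
print(acc._balance)         # works, but ignores the convention
print(acc._Account__pin)    # the mangled name is still reachable if you insist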
End of explanation
class MyClass:
def __init__(self, val):
self.__val = val
myclass = MyClass(42)
myclass.__val
Explanation: As we can see, the attribute _val is perfectly accessible from outside. However, the underscore signals that the author of the class did not intend this value to be used directly (but e.g. only via the methods get_val() and set_val()). If another programmer decides that they need direct access to the attribute _val, that is their own responsibility (Python does not prevent it). This is called protection by convention. Python programmers generally stick to this convention, which is why this kind of "protection" is very common.
Invisible attributes and methods
For paranoid programmers, Python offers a way to block access from outside the object completely by putting two underscores instead of one in front of the name.
End of explanation
class MySpecialClass(MyClass):
def get_val(self):
return self.__val
msc = MySpecialClass(42)
msc.get_val()
Explanation: Here we see that the attribute __val is not visible from outside the class at all, and therefore cannot be changed from outside either. Inside the class, however, it is available as usual. This can lead to problems:
End of explanation
class GradingError(Exception): pass
class Student:
def __init__(self, matrikelnr):
self.matrikelnr = matrikelnr
self._grade = 0
def set_grade(self, grade):
if grade > 0 and grade < 6:
self._grade = grade
else:
raise ValueError('Grade must be between 1 and 5!')
def get_grade(self):
if self._grade > 0:
return self._grade
raise GradingError('Noch nicht benotet!')
Explanation: Since __val was only created inside the base class, the derived class has no access to it.
Data encapsulation with properties
As we have seen, dedicated getter and setter methods are written for accessing protected attributes, so that the value of an attribute can be changed in a controlled way. Let us program a Student class that stores a grade. To control access to this attribute, we write a setter and a getter method.
End of explanation
anna = Student('01754645')
anna.set_grade(6)
anna.set_grade(2)
anna.get_grade()
Explanation: We can now set and read the grade:
End of explanation
anna._grade
anna._grade = 6
Explanation: However, direct access to grade is still possible:
End of explanation
class Student:
def __init__(self, matrikelnr):
self.matrikelnr = matrikelnr
self.__grade = 0
def set_grade(self, grade):
if grade > 0 and grade < 6:
self.__grade = grade
else:
raise ValueError('Grade must be between 1 and 5!')
def get_grade(self):
if self.__grade > 0:
return self.__grade
raise GradingError('Noch nicht benotet!')
grade = property(get_grade, set_grade)
otto = Student('01745646465')
otto.grade = 6
Explanation: As we have already seen, we can prevent this by renaming the attribute grade to __grade.
Setting properties via getters and setters
Python offers a way to route the setting and reading of object attributes automatically through methods. To do this, the getter and the setter are passed to the property function (last line of the class).
End of explanation
class Student:
def __init__(self, matrikelnr, grade):
self.matrikelnr = matrikelnr
self.__grade = grade
def get_grade(self):
if self.__grade > 0:
return self.__grade
raise GradingError('Noch nicht benotet!')
grade = property(get_grade)
albert = Student('0157897846546', 5)
albert.grade
Explanation: As we can see, we can set and read the object's attribute directly, but Python routes each access through the setter and the getter.
If we pass only one method (the getter) as an argument to the property() function, we get an attribute that can be read but not changed.
End of explanation
albert.grade = 1
Explanation: So we can access the attribute defined via property(). However, we cannot use grade
to change the attribute:
End of explanation
class Student:
def __init__(self, matrikelnr):
self.matrikelnr = matrikelnr
self.__grade = 0
@property
def grade(self):
if self.__grade > 0:
return self.__grade
raise GradingError('Noch nicht benotet!')
@grade.setter
def grade(self, grade):
if grade > 0 and grade < 6:
self.__grade = grade
else:
raise ValueError('Grade must be between 1 and 5!')
hugo = Student('0176464645454')
hugo.grade = 6
hugo.grade = 2
hugo.grade
Explanation: The @property decorator
Decorators dynamically extend the functionality of functions by wrapping them (behind the scenes) in another function. Applying a decorator is simple: you just write it in front of the function definition.
Python ships with a number of built-in decorators, and you can also write your own, but that is not covered here.
The @property decorator built into Python is an alternative to the property() function presented above:
End of explanation
class MyClass:
the_answer = 42
def __init__(self, val):
self.the_answer = val
MyClass.the_answer
mc = MyClass(17)
print('Objekteigenschaft:', mc.the_answer)
print('Klasseneigenschaft:', MyClass.the_answer)
Explanation: Class variables (static members)
We have learned that classes define the attributes and methods of objects. However (and this can be a bit confusing at first), classes are themselves objects that have attributes and methods. Here is an example:
End of explanation
class MyClass:
instance_counter = 0
def __init__(self):
MyClass.instance_counter += 1
print('Ich bin das {}. Objekt'.format(MyClass.instance_counter))
a = MyClass()
b = MyClass()
class MyOtherClass(MyClass):
instance_counter = 0
a = MyOtherClass()
b = MyOtherClass()
Explanation: One attribute is therefore attached to the class object, the other to the object created from the class. Such class attributes can be useful because they are available in all objects created from the class (even via self, as long as the object does not have an attribute of the same name:
End of explanation
class MyClass:
instance_counter = 0
def __init__(self):
self.__class__.instance_counter += 1
print('Ich bin das {}. Objekt'.format(self.__class__.instance_counter))
a = MyClass()
b = MyClass()
class MyOtherClass(MyClass):
instance_counter = 0
a = MyOtherClass()
b = MyOtherClass()
Explanation: It can also be written like this, so that the counter also works for subclasses:
End of explanation |
15,310 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Analysis
This is the main notebook performing all feature engineering, model selection, training, evaluation etc.
The different steps are
Step1: Step2
load the payloads into memory
Step2: Step3A - feature engineering custom features
We will create our own feature space with features that might be important for this task, this includes
Step3: define a function that makes a feature vector from the payload using the custom features
Step4: Scoring custom features
Score the custom features using the SelectKBest function, then visualize the scores in a graph
to see which features are less significant
Step5: Step3B - Feature engineering using bag of words techniques.
Additional to our custom feature space, we will create 6 more feature spaces using bag-of-words techniques
The following vectorizers below is another way of creating features for text input.
We will test the performance of these techniques independently from our custom features in Step 3A.
We will create vectorizers of these combinations
Step6: 2-Grams features
create a Countvectorizer and TF-IDFvectorizer that uses 2-grams.
Step7: 3-Grams features
Create a Countvectorizer and TF-IDFvectorizer that uses 3-grams
Step8: Step3C - Feature space visualization
After creating our different feature spaces to later train each classifier on,
we first examine them visually by projecting the feature spaces into two dimensions using Principle Component Analysis
Graphs are shown below displaying the data in 3 out of 7 of our feature spaces
Step9: 1-Grams CountVectorizer feature space visualization
Step10: 3-Grams TFIDFVectorizer feature space visualization
Step11: Custom feature space visualization
Step12: Step4 - Model selection and evaluation
First, we will automate hyperparameter tuning and out of sample testing using train_model below
Step13: Then, we will use the train_model function to train, optimize and retrieve out of sample testing results from a range of classifiers.
Classifiers tested using our custom feature space
Step14: Make dictionary of models with parameters to optimize using custom feature spaces
Step15: Create a new result table
Step16: Use the 6 different feature spaces generated from the vectorizers previously above,
and train every classifier in classifier_inputs in every feature space
P.S! Don't try to run this, it will take several days to complete
Instead skip to Step4B
Step17: Use our custom feature space,
and train every classifier in classifier_inputs_custom with
P.S! Don't try to run this, it will take many hours to complete
Instead skip to Step4B
Step18: Classifier results
Step19: F1-score
Calculate F1-score of each classifier and add to classifiers table
(We didn't implement this in the train_model function as with the other performance metrics because we've already done a 82 hour training session before this and didn't want to re-run the entire training just to add F1-score from inside train_model)
Step20: Final formating
Convert numeric columns to float
Round numeric columns to 4 decimals
Step21: Export classifiers
First, export full list of trained classifiers for later use
Second, pick one classifier to save in a separate pickle, used later to implement in a dummy server
Step22: Step4B - load pre-trained classifiers
Instead of re-training all classifiers, load the classifiers from disk that we have already trained
Step23: Step5 - Visualization
In this section we will visualize
Step24: Learning curves
Create learning curves for a sample of classifiers. This is to visualize how the dataset size impacts the performance
Step25: Three examples of learning curves from the trained classifiers.
All learning curves have upsloping cross-validation score at the end,
which means that adding more data would potentially increase the accuracy
Step26: ROC curves
Plot ROC curves for a range of classifiers to visualize the sensitivity/specificity trade-off and the AUC
Step27: Plot ROC curves for the top3 classifiers and the bottom 3 classifiers, sorted by F1-score
Left
Step28: Step6 - Website integration extract
This is the code needed when implementing the saved classifier in tfidf_2grams_randomforest.p on a server
Step29: (Step7)
we can display which types of queries the classifiers failed to classify. These are interesting to examine for further work on how to improve the classifiers and the quality of the data set | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import pickle
import matplotlib.pyplot as plt
import seaborn
import string
from IPython.display import display
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import learning_curve
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.neighbors import NearestNeighbors
from sklearn.neighbors.nearest_centroid import NearestCentroid
from sklearn.tree import DecisionTreeClassifier
import sklearn.gaussian_process.kernels as kernels
from sklearn.cross_validation import ShuffleSplit
from sklearn.cross_validation import KFold
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from scipy.stats import expon
Explanation: Data Analysis
This is the main notebook performing all feature engineering, model selection, training, evaluation etc.
The different steps are:
- Step1 - import dependencies
- Step2 - load payloads into memory
- Step3A - Feature engineering custom features
- Step3B - Feature engineering bag-of-words
- Step3C - Feature space visualization
- Step4 - Model selection
- (Step4B - Load pre-trained classifiers)
- Step5 - Visualization
- Step6 - Website integration extract
Step1
import dependencies
End of explanation
payloads = pd.read_csv("data/payloads.csv",index_col='index')
display(payloads.head(30))
Explanation: Step2
load the payloads into memory
End of explanation
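Before engineering features it is worth a quick look at the size of the data set and how the two classes are balanced. A small illustrative check (payload and is_malicious are the column names used throughout this notebook):
print(payloads.shape)
print(payloads['is_malicious'].value_counts())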
def plot_feature_distribution(features):
print('Properties of feature: ' + features.name)
print(features.describe())
f, ax = plt.subplots(1, figsize=(10, 6))
ax.hist(features, bins=features.max()-features.min()+1, normed=1)
ax.set_xlabel('value')
ax.set_ylabel('fraction')
plt.show()
def create_feature_length(payloads):
'''
    Feature describing the length of the input
'''
payloads['length'] = [len(str(row)) for row in payloads['payload']]
return payloads
payloads = create_feature_length(payloads)
display(payloads.head())
plot_feature_distribution(payloads['length'])
def create_feature_non_printable_characters(payloads):
'''
Feature
Number of non printable characthers within payload
'''
payloads['non-printable'] = [ len([1 for letter in str(row) if letter not in string.printable]) for row in payloads['payload']]
return payloads
create_feature_non_printable_characters(payloads)
display(payloads.head())
plot_feature_distribution(payloads['non-printable'])
def create_feature_punctuation_characters(payloads):
'''
Feature
Number of punctuation characthers within payload
'''
payloads['punctuation'] = [ len([1 for letter in str(row) if letter in string.punctuation]) for row in payloads['payload']]
return payloads
create_feature_punctuation_characters(payloads)
display(payloads.head())
plot_feature_distribution(payloads['punctuation'])
def create_feature_min_byte_value(payloads):
'''
Feature
Minimum byte value in payload
'''
payloads['min-byte'] = [ min(bytearray(str(row), 'utf8')) for row in payloads['payload']]
return payloads
create_feature_min_byte_value(payloads)
display(payloads.head())
plot_feature_distribution(payloads['min-byte'])
def create_feature_max_byte_value(payloads):
'''
Feature
Maximum byte value in payload
'''
payloads['max-byte'] = [ max(bytearray(str(row), 'utf8')) for row in payloads['payload']]
return payloads
create_feature_max_byte_value(payloads)
display(payloads.head())
plot_feature_distribution(payloads['max-byte'])
def create_feature_mean_byte_value(payloads):
'''
Feature
    Mean byte value in payload
'''
payloads['mean-byte'] = [ np.mean(bytearray(str(row), 'utf8')) for row in payloads['payload']]
return payloads
create_feature_mean_byte_value(payloads)
display(payloads.head())
plot_feature_distribution(payloads['mean-byte'].astype(int))
def create_feature_std_byte_value(payloads):
'''
Feature
Standard deviation byte value in payload
'''
payloads['std-byte'] = [ np.std(bytearray(str(row), 'utf8')) for row in payloads['payload']]
return payloads
create_feature_std_byte_value(payloads)
display(payloads.head())
plot_feature_distribution(payloads['std-byte'].astype(int))
def create_feature_distinct_bytes(payloads):
'''
Feature
Number of distinct bytes in payload
'''
payloads['distinct-bytes'] = [ len(list(set(bytearray(str(row), 'utf8')))) for row in payloads['payload']]
return payloads
create_feature_distinct_bytes(payloads)
display(payloads.head())
plot_feature_distribution(payloads['distinct-bytes'])
sql_keywords = pd.read_csv('data/SQLKeywords.txt', index_col=False)
def create_feature_sql_keywords(payloads):
'''
Feature
Number of SQL keywords within payload
'''
payloads['sql-keywords'] = [ len([1 for keyword in sql_keywords['Keyword'] if str(keyword).lower() in str(row).lower()]) for row in payloads['payload']]
return payloads
create_feature_sql_keywords(payloads)
display(type(sql_keywords))
display(payloads.head())
plot_feature_distribution(payloads['sql-keywords'])
js_keywords = pd.read_csv('data/JavascriptKeywords.txt', index_col=False)
def create_feature_javascript_keywords(payloads):
'''
Feature
Number of Javascript keywords within payload
'''
payloads['js-keywords'] = [len([1 for keyword in js_keywords['Keyword'] if str(keyword).lower() in str(row).lower()]) for row in payloads['payload']]
return payloads
create_feature_javascript_keywords(payloads)
display(payloads.head())
plot_feature_distribution(payloads['js-keywords'])
Explanation: Step3A - feature engineering custom features
We will create our own feature space with features that might be important for this task. This includes:
- length of payload
- number of non-printable characters in payload
- number of punctuation characters in payload
- the minimum byte value of payload
- the maximum byte value of payload
- the mean byte value of payload
- the standard deviation of payload byte values
- number of distinct bytes in payload
- number of SQL keywords in payload
- number of javascript keywords in payload
End of explanation
def create_features(payloads):
features = create_feature_length(payloads)
features = create_feature_non_printable_characters(features)
features = create_feature_punctuation_characters(features)
features = create_feature_max_byte_value(features)
features = create_feature_min_byte_value(features)
features = create_feature_mean_byte_value(features)
features = create_feature_std_byte_value(features)
features = create_feature_distinct_bytes(features)
features = create_feature_sql_keywords(features)
features = create_feature_javascript_keywords(features)
del features['payload']
return features
Explanation: define a function that makes a feature vector from the payload using the custom features
End of explanation
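To get a feeling for what this feature vector looks like, we can feed create_features a tiny hand-made frame. The two payload strings below are purely illustrative examples, not rows from the data set:
sample_payloads = pd.DataFrame({'payload': ['<script>alert(1)</script>', "' OR 1=1 --"]})
display(create_features(sample_payloads))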
Y = payloads['is_malicious']
X = create_features(pd.DataFrame(payloads['payload'].copy()))
test = SelectKBest(score_func=chi2, k='all')
fit = test.fit(X, Y)
# summarize scores
print(fit.scores_)
features = fit.transform(X)
# summarize selected features
# summarize scores
np.set_printoptions(precision=2)
print(fit.scores_)
# Get the indices sorted from least important to most important (np.argsort sorts ascending)
indices = np.argsort(fit.scores_)
# Collect all feature names in that order
featuress = []
for i in range(10):
    featuress.append(X.columns[indices[i]])
display(featuress)
# Show each feature name together with its chi2 score, in ascending score order
display([X.columns[i] + ' ' + str(fit.scores_[i]) for i in indices[:10]])
plt.rcdefaults()
fig, ax = plt.subplots()
y_pos = np.arange(len(featuress))
ax.barh(y_pos, fit.scores_[indices[range(10)]], align='center',
color='green', ecolor='black')
ax.set_yticks(y_pos)
ax.set_yticklabels(featuress)
ax.set_xscale('log')
#ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('Points')
ax.set_title('SelectKBest()')
plt.show()
Explanation: Scoring custom features
Score the custom features using the SelectKBest function, then visualize the scores in a graph
to see which features are less significant
End of explanation
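A compact, sortable view of the same chi2 scores can be useful next to the bar chart. A small sketch that reuses the X and fit objects from the cell above:
feature_scores = pd.Series(fit.scores_, index=X.columns).sort_values(ascending=False)
display(feature_scores)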
def get1Grams(payload_obj):
'''Divides a string into 1-grams
Example: input - payload: "<script>"
output- ["<","s","c","r","i","p","t",">"]
'''
payload = str(payload_obj)
ngrams = []
    for i in range(0,len(payload)):
ngrams.append(payload[i:i+1])
return ngrams
tfidf_vectorizer_1grams = TfidfVectorizer(tokenizer=get1Grams)
count_vectorizer_1grams = CountVectorizer(min_df=1, tokenizer=get1Grams)
Explanation: Step3B - Feature engineering using bag of words techniques.
In addition to our custom feature space, we will create 6 more feature spaces using bag-of-words techniques
The vectorizers below are another way of creating features for text input.
We will test the performance of these techniques independently from our custom features in Step 3A.
We will create vectorizers of these combinations:
- 1-grams CountVectorizer
- 2-grams CountVectorizer
- 3-grams CountVectorizer
- 1-grams TfidfVectorizer
- 2-grams TfidfVectorizer
- 3-grams TfidfVectorizer
The type of N-gram function determines how the actual "words" should be created from the payload string
Each vectorizer is used later in Step4 in Pipeline objects before training
See report for further explanation
1-Grams features
create a Countvectorizer and TF-IDFvectorizer that uses 1-grams.
1-grams equals one feature for each letter/symbol recorded
End of explanation
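To make the n-gram idea concrete, here is what the 1-gram tokenizer and a freshly created CountVectorizer produce for two made-up strings (illustrative only; the vocabulary shown depends entirely on these two strings):
print(get1Grams('<script>'))
demo_vectorizer = CountVectorizer(min_df=1, tokenizer=get1Grams)
demo_matrix = demo_vectorizer.fit_transform(['<script>', 'SELECT 1'])
print(demo_matrix.shape)
print(demo_vectorizer.get_feature_names())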
def get2Grams(payload_obj):
'''Divides a string into 2-grams
Example: input - payload: "<script>"
output- ["<s","sc","cr","ri","ip","pt","t>"]
'''
payload = str(payload_obj)
ngrams = []
    for i in range(0,len(payload)-1):
ngrams.append(payload[i:i+2])
return ngrams
tfidf_vectorizer_2grams = TfidfVectorizer(tokenizer=get2Grams)
count_vectorizer_2grams = CountVectorizer(min_df=1, tokenizer=get2Grams)
Explanation: 2-Grams features
create a Countvectorizer and TF-IDFvectorizer that uses 2-grams.
End of explanation
def get3Grams(payload_obj):
'''Divides a string into 3-grams
Example: input - payload: "<script>"
output- ["<sc","scr","cri","rip","ipt","pt>"]
'''
payload = str(payload_obj)
ngrams = []
    for i in range(0,len(payload)-2):
ngrams.append(payload[i:i+3])
return ngrams
tfidf_vectorizer_3grams = TfidfVectorizer(tokenizer=get3Grams)
count_vectorizer_3grams = CountVectorizer(min_df=1, tokenizer=get3Grams)
Explanation: 3-Grams features
Create a Countvectorizer and TF-IDFvectorizer that uses 3-grams
End of explanation
def visualize_feature_space_by_projection(X,Y,title='PCA'):
'''Plot a two-dimensional projection of the dataset in the specified feature space
input: X - data
Y - labels
title - title of plot
'''
pca = TruncatedSVD(n_components=2)
X_r = pca.fit(X).transform(X)
    # Percentage of variance explained by each of the two components
print('explained variance ratio (first two components): %s'
% str(pca.explained_variance_ratio_))
plt.figure()
colors = ['blue', 'darkorange']
lw = 2
#Plot malicious and non-malicious separately with different colors
    for color, i in zip(colors, [0, 1]):
plt.scatter(X_r[Y == i, 0], X_r[Y == i, 1], color=color, alpha=.3, lw=lw,
label=i)
plt.legend(loc='best', shadow=False, scatterpoints=1)
plt.title(title)
plt.show()
Explanation: Step3C - Feature space visualization
After creating our different feature spaces to later train each classifier on,
we first examine them visually by projecting the feature spaces into two dimensions using Principal Component Analysis
Graphs are shown below displaying the data in 3 out of 7 of our feature spaces
End of explanation
X = count_vectorizer_1grams.fit_transform(payloads['payload'])
Y = payloads['is_malicious']
visualize_feature_space_by_projection(X,Y,title='PCA visualization of 1-grams CountVectorizer feature space')
Explanation: 1-Grams CountVectorizer feature space visualization
End of explanation
X = tfidf_vectorizer_3grams.fit_transform(payloads['payload'])
Y = payloads['is_malicious']
visualize_feature_space_by_projection(X,Y,title='PCA visualization of 3-grams TFIDFVectorizer feature space')
Explanation: 3-Grams TFIDFVectorizer feature space visualization
End of explanation
X = create_features(pd.DataFrame(payloads['payload'].copy()))
Y = payloads['is_malicious']
visualize_feature_space_by_projection(X,Y,title='PCA visualization of custom feature space')
Explanation: Custom feature space visualization
End of explanation
def train_model(clf, param_grid, X, Y):
'''Trains and evaluates the model clf from input
The function selects the best model of clf by optimizing for the validation data,
then evaluates its performance using the out of sample test data.
input - clf: the model to train
param_grid: a dict of hyperparameters to use for optimization
X: features
Y: labels
output - the best estimator (trained model)
the confusion matrix from classifying the test data
'''
#First, partition into train and test data
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
n_iter = 5
    #If the number of possible parameter combinations is less than the preferred number of iterations,
    #set n_iter to the number of possible combinations.
    #The number of combinations is not treated as limited if any argument is expon(),
    #because expon() is continuous (100 is used as a stand-in; it could be any large number)
n_iter = min(n_iter,np.prod([
100 if type(xs) == type(expon())
else len(xs)
for xs in param_grid.values()
]))
#perform a grid search for the best parameters on the training data.
#Cross validation is made to select the parameters, so the training data is actually split into
#a new train data set and a validation data set, K number of times
cv = ShuffleSplit(n=len(X_train), n_iter=5, test_size=0.2, random_state=0) #DEBUG: n_iter=10
#cv = KFold(n=len(X), n_folds=10)
random_grid_search = RandomizedSearchCV(
clf,
param_distributions=param_grid,
cv=cv,
scoring='f1',
n_iter=n_iter, #DEBUG 1
random_state=5,
refit=True,
verbose=10
)
'''Randomized search used instead. We have limited computing power
grid_search = GridSearchCV(
clf,
param_grid=param_grid,
cv=cv,
scoring='f1', #accuracy/f1/f1_weighted all give same result?
verbose=10,
n_jobs=-1
)
grid_search.fit(X_train, Y_train)
'''
random_grid_search.fit(X_train, Y_train)
#Evaluate the best model on the test data
Y_test_predicted = random_grid_search.best_estimator_.predict(X_test)
Y_test_predicted_prob = random_grid_search.best_estimator_.predict_proba(X_test)[:, 1]
confusion = confusion_matrix(Y_test, Y_test_predicted)
TP = confusion[1, 1]
TN = confusion[0, 0]
FP = confusion[0, 1]
FN = confusion[1, 0]
#Calculate recall (sensitivity) from confusion matrix
sensitivity = TP / float(TP + FN)
#Calculate specificity from confusion matrix
specificity = TN / float(TN + FP)
#Calculate accuracy
accuracy = (confusion[0][0] + confusion[1][1]) / (confusion.sum().sum())
#Calculate axes of ROC curve
fpr, tpr, thresholds = roc_curve(Y_test, Y_test_predicted_prob)
#Area under the ROC curve
auc = roc_auc_score(Y_test, Y_test_predicted_prob)
return {
'conf_matrix':confusion,
'accuracy':accuracy,
'sensitivity':sensitivity,
'specificity':specificity,
'auc':auc,
'params':random_grid_search.best_params_,
'model':random_grid_search.best_estimator_,
'roc':{'fpr':fpr,'tpr':tpr,'thresholds':thresholds}
}
Explanation: Step4 - Model selection and evaluation
First, we will automate hyperparameter tuning and out of sample testing using train_model below
End of explanation
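Before launching the long training loops below, a single cheap call shows what train_model returns. This is only an illustrative smoke test; the choice of a plain LogisticRegression on the custom feature space is an assumption made here for speed:
X_demo = create_features(pd.DataFrame(payloads['payload'].copy()))
Y_demo = payloads['is_malicious']
demo_result = train_model(LogisticRegression(), {'C': [0.1, 1, 10]}, X_demo, Y_demo)
print(demo_result['accuracy'], demo_result['auc'])
print(demo_result['params'])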
def create_classifier_inputs_using_vectorizers(vectorizer, subscript):
'''make pipelines of the specified vectorizer with the classifiers to train
input - vectorizer: the vectorizer to add to the pipelines
subscript: subscript name for the dictionary key
output - A dict of inputs to use for train_model(); a pipeline and a dict of params to optimize
'''
classifier_inputs = {}
classifier_inputs[subscript + ' MLPClassifier'] = {
'pipeline':Pipeline([('vect', vectorizer),('clf',MLPClassifier(
activation='relu',
solver='adam',
early_stopping=False,
verbose=True
))]),
'dict_params': {
'vect__min_df':[1,2,5,10,20,40],
'clf__hidden_layer_sizes':[(500,250,125,62)],
'clf__alpha':[0.0005,0.001,0.01,0.1,1],
'clf__learning_rate':['constant','invscaling'],
'clf__learning_rate_init':[0.001,0.01,0.1,1],
'clf__momentum':[0,0.9],
}
}
'''
classifier_inputs[subscript + ' MultinomialNB'] = {
'pipeline':Pipeline([('vect', vectorizer),('clf',MultinomialNB())]),
'dict_params': {
'vect__min_df':[1,2,5,10,20,40]
}
}
classifier_inputs[subscript + ' RandomForest'] = {
'pipeline':Pipeline([('vect', vectorizer),('clf',RandomForestClassifier(
max_depth=None,min_samples_split=2, random_state=0))]),
'dict_params': {
'vect__min_df':[1,2,5,10,20,40],
'clf__n_estimators':[10,20,40,60]
}
}
classifier_inputs[subscript + ' Logistic'] = {
'pipeline':Pipeline([('vect', vectorizer), ('clf',LogisticRegression())]),
'dict_params': {
'vect__min_df':[1,2,5,10,20,40],
'clf__C':[0.001, 0.01, 0.1, 1, 10, 100, 1000]
}
}
classifier_inputs[subscript + ' SVM'] = {
'pipeline':Pipeline([('vect', vectorizer), ('clf',SVC(probability=True))]),
'dict_params': {
'vect__min_df':[1,2,5,10,20,40],
'clf__C':[0.001, 0.01, 0.1, 1, 10, 100, 1000],
'clf__gamma':[0.001, 0.0001,'auto'],
'clf__kernel':['rbf']
}
}
'''
return classifier_inputs
Explanation: Then, we will use the train_model function to train, optimize and retrieve out of sample testing results from a range of classifiers.
Classifiers tested using our custom feature space:
- AdaBoost
- SGD classifier
- MultiLayerPerceptron classifier
- Logistic Regression
- Support Vector Machine
- Random forest
- Decision Tree
- Multinomial Naive Bayes
Classifiers tested using bag-of-words feature spaces:
- MultiLayerPerceptron classifier
- Logistic Regression
- Support Vector Machine
- Random forest
- Multinomial Naive Bayes
Some classifiers were unable to train using a bag-of-words feature space because they couldn't handle sparse matrices
The best parameters of every trained classifier, together with its performance metrics, are stored in a dataframe called classifier_results
Make dictionary of models with parameters to optimize using bag-of-words feature spaces
End of explanation
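To see what one of these generated training inputs looks like, we can build the pipelines for a single vectorizer and inspect the result (with most classifiers commented out above, only the MLPClassifier entry remains):
demo_inputs = create_classifier_inputs_using_vectorizers(tfidf_vectorizer_2grams, 'tfidf 2grams')
print(list(demo_inputs.keys()))
print(demo_inputs['tfidf 2grams MLPClassifier']['dict_params'])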
def create_classifier_inputs(subscript):
classifier_inputs = {}
'''classifier_inputs[subscript + ' GPC'] = {
'pipeline':GaussianProcessClassifier(),
'dict_params': {
'kernel':[
1.0*kernels.RBF(1.0),
1.0*kernels.Matern(),
1.0*kernels.RationalQuadratic(),
1.0*kernels.DotProduct()
]
}
}'''
classifier_inputs[subscript + ' AdaBoostClassifier'] = {
'pipeline':AdaBoostClassifier(n_estimators=100),
'dict_params': {
'n_estimators':[10,20,50, 100],
'learning_rate':[0.1, 0.5, 1.0, 2.0]
}
}
classifier_inputs[subscript + ' SGD'] = {
'pipeline':SGDClassifier(loss="log", penalty="l2"),
'dict_params': {
'learning_rate': ['optimal']
}
}
classifier_inputs[subscript + ' RandomForest'] = {
'pipeline':RandomForestClassifier(
max_depth=None,min_samples_split=2, random_state=0),
'dict_params': {
'n_estimators':[10,20,40,60]
}
}
classifier_inputs[subscript + ' DecisionTree'] = {
'pipeline': DecisionTreeClassifier(max_depth=5),
'dict_params': {
'min_samples_split': [2]
}
}
'''classifier_inputs[subscript + ' MLPClassifier'] = {
'pipeline':MLPClassifier(
activation='relu',
solver='adam',
early_stopping=False,
verbose=True
),
'dict_params': {
'hidden_layer_sizes':[(300, 200, 150, 150), (30, 30, 30), (150, 30, 30, 150),
(400, 250, 100, 100) , (150, 200, 300)],
'alpha':[0.0005,0.001,0.01,0.1,1],
'learning_rate':['constant','invscaling'],
'learning_rate_init':[0.0005,0.001,0.01,0.1,1],
'momentum':[0,0.9],
}
}'''
classifier_inputs[subscript + ' Logistic'] = {
'pipeline':LogisticRegression(),
'dict_params': {
'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000]
}
}
classifier_inputs[subscript + ' MultinomialNB'] = {
'pipeline':MultinomialNB(),
'dict_params': {
'alpha': [1.0]
}
}
'''classifier_inputs[subscript + ' SVM'] = {
'pipeline':SVC(probability=True),
'dict_params': {
'C':[0.001, 0.01, 0.1, 1, 10, 100, 1000],
'gamma':[0.001, 0.0001,'auto'],
'kernel':['rbf']
}
}'''
return classifier_inputs
Explanation: Make dictionary of models with parameters to optimize using custom feature spaces
End of explanation
classifier_results = pd.DataFrame(columns=['accuracy','sensitivity','specificity','auc','conf_matrix','params','model','roc'])#,index=classifier_inputs.keys())
Explanation: Create a new result table
End of explanation
classifier_inputs = {}
classifier_inputs.update(create_classifier_inputs_using_vectorizers(count_vectorizer_1grams,'count 1grams'))
classifier_inputs.update(create_classifier_inputs_using_vectorizers(count_vectorizer_2grams,'count 2grams'))
classifier_inputs.update(create_classifier_inputs_using_vectorizers(count_vectorizer_3grams,'count 3grams'))
classifier_inputs.update(create_classifier_inputs_using_vectorizers(tfidf_vectorizer_1grams,'tfidf 1grams'))
classifier_inputs.update(create_classifier_inputs_using_vectorizers(tfidf_vectorizer_2grams,'tfidf 2grams'))
classifier_inputs.update(create_classifier_inputs_using_vectorizers(tfidf_vectorizer_3grams,'tfidf 3grams'))
X = payloads['payload']
Y = payloads['is_malicious']
for classifier_name, inputs in classifier_inputs.items():
display(inputs['dict_params'])
if classifier_name in classifier_results.index.values.tolist():
print('Skipping ' + classifier_name + ', already trained')
else:
result_dict = train_model(inputs['pipeline'],inputs['dict_params'],X,Y)
classifier_results.loc[classifier_name] = result_dict
display(classifier_results)
display(pd.DataFrame(payloads['payload'].copy()))
Explanation: Use the 6 different feature spaces generated from the vectorizers defined above,
and train every classifier in classifier_inputs in every feature space
P.S! Don't try to run this, it will take several days to complete
Instead skip to Step4B
End of explanation
classifier_inputs_custom = {}
#Get classifiers and parameters to optimize
classifier_inputs_custom.update(create_classifier_inputs('custom'))
#Extract payloads and labels
Y = payloads['is_malicious']
X = create_features(pd.DataFrame(payloads['payload'].copy()))
#Select the best features (note: X_new is not used below - the classifiers are trained on all custom features in X)
X_new = SelectKBest(score_func=chi2, k=4).fit_transform(X,Y)
#Call train_model for every classifier and save results to classifier_results
for classifier_name, inputs in classifier_inputs_custom.items():
if classifier_name in classifier_results.index.values.tolist():
print('Skipping ' + classifier_name + ', already trained')
else:
result_dict = train_model(inputs['pipeline'],inputs['dict_params'],X,Y)
classifier_results.loc[classifier_name] = result_dict
display(classifier_results)
#pickle.dump( classifier_results, open( "data/trained_classifiers_custom_all_features.p", "wb" ) )
#Save classifiers in a pickle file to be able to re-use them without re-training
pickle.dump( classifier_results, open( "data/trained_classifiers.p", "wb" ) )
Explanation: Use our custom feature space,
and train every classifier in classifier_inputs_custom on it
P.S! Don't try to run this, it will take many hours to complete
Instead skip to Step4B
End of explanation
#Display the results for the classifiers that were trained using our custom feature space
custom_features_classifiers = pickle.load( open("data/trained_classifier_custom_all_features.p", "rb"))
display(custom_features_classifiers)
#Display the results for the classifiers that were using bag of words feature spaces
classifier_results = pickle.load( open( "data/trained_classifiers.p", "rb" ) )
display(classifier_results)
#Combine the two tables into one table
classifier_results = classifier_results.append(custom_features_classifiers)
classifier_results = classifier_results.sort_values(['sensitivity','accuracy'], ascending=[False,False])
display(classifier_results)
Explanation: Classifier results
End of explanation
def f1_score(conf_matrix):
    #F1-score of the non-malicious class (label 0), computed from the sklearn
    #confusion matrix layout (rows = true labels, columns = predicted labels)
precision = conf_matrix[0][0] / (conf_matrix[0][0] + conf_matrix[0][1] )
recall = conf_matrix[0][0] / (conf_matrix[0][0] + conf_matrix[1][0] )
return (2 * precision * recall) / (precision + recall)
#load classifier table if not yet loaded
classifier_results = pickle.load( open( "data/trained_classifiers.p", "rb" ) )
#Calculate F1-scores
classifier_results['F1-score'] = [ f1_score(conf_matrix) for conf_matrix in classifier_results['conf_matrix']]
#Re-arrange columns
classifier_results = classifier_results[['F1-score','accuracy','sensitivity','specificity','auc','conf_matrix','params','model','roc']]
#re-sort on F1-score
classifier_results = classifier_results.sort_values(['F1-score','accuracy'], ascending=[False,False])
display(classifier_results)
Explanation: F1-score
Calculate F1-score of each classifier and add to classifiers table
(We didn't implement this in the train_model function as with the other performance metrics because we've already done an 82-hour training session before this and didn't want to re-run the entire training just to add F1-score from inside train_model)
End of explanation
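The helper above computes the F1-score of the non-malicious class (label 0) from the stored confusion matrix. As a quick cross-check, sklearn's own f1_score gives the same number on a small made-up example (the two label lists below are illustrative assumptions):
from sklearn.metrics import f1_score as sklearn_f1_score
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
print(sklearn_f1_score(y_true, y_pred, pos_label=0))
print(f1_score(confusion_matrix(y_true, y_pred)))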
classifier_results[['F1-score','accuracy','sensitivity','specificity','auc']] = classifier_results[['F1-score','accuracy','sensitivity','specificity','auc']].apply(pd.to_numeric)
classifier_results = classifier_results.round({'F1-score':4,'accuracy':4,'sensitivity':4,'specificity':4,'auc':4})
#classifier_results[['F1-score','accuracy','sensitivity','specificity','auc','conf_matrix','params']].to_csv('data/classifiers_result_table.csv')
display(classifier_results.dtypes)
Explanation: Final formatting
Convert numeric columns to float
Round numeric columns to 4 decimals
End of explanation
#save complete list of classifiers to 'trained_classifiers'
pickle.dump( classifier_results, open( "data/trained_classifiers.p", "wb" ) )
#In this case, we are going to implement tfidf 2grams RandomForest in our dummy server
classifier = (custom_features_classifiers['model'].iloc[0])
print(classifier)
#Save classifiers in a pickle file to be able to re-use them without re-training
pickle.dump( classifier, open( "data/tfidf_2grams_randomforest.p", "wb" ) )
Explanation: Export classifiers
First, export full list of trained classifiers for later use
Second, pick one classifier to save in a separate pickle, used later to implement in a dummy server
End of explanation
classifier_results = pickle.load( open( "data/trained_classifiers.p", "rb" ) )
Explanation: Step4B - load pre-trained classifiers
Instead of re-training all classifiers, load the classifiers from disk that we have already trained
End of explanation
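A quick check of what was loaded (illustrative only):
print(classifier_results.shape)
display(classifier_results[['accuracy','sensitivity','specificity','auc']].head())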
def get_classifier_name(index):
'''
Returns the name of the classifier at the given index name
'''
return index.split()[len(index.split())-1]
#Group rows together using same classifier
grouped = classifier_results.groupby(get_classifier_name)
hist_df = pd.DataFrame(columns=['custom','count 1grams','count 2grams','count 3grams','tfidf 1grams','tfidf 2grams','tfidf 3grams'])
for classifier, indices in grouped.groups.items():
#Make a list of feature spaces
feature_spaces = indices.tolist()
feature_spaces = [feature_space.replace(classifier,'') for feature_space in feature_spaces]
feature_spaces = [feature_space.strip() for feature_space in feature_spaces]
#If no result exists, it will stay as 0
hist_df.loc[classifier] = {
'custom':0,
'count 1grams':0,
'count 2grams':0,
'count 3grams':0,
'tfidf 1grams':0,
'tfidf 2grams':0,
'tfidf 3grams':0
}
#Extract F1-score from classifier_results to corrensponding entry in hist_df
for fs in feature_spaces:
hist_df[fs].loc[classifier] = classifier_results['F1-score'].loc[fs + ' ' + classifier]
#Plot the bar plot
f, ax = plt.subplots()
ax.set_ylim([0.989,1])
hist_df.plot(kind='bar', figsize=(12,7), title='F1-score of all models grouped by classifiers', ax=ax, width=0.8)
#Make average F1-score row and column for the table and print the table
hist_df_nonzero = hist_df.copy()
hist_df_nonzero[hist_df > 0] = True
hist_df['Avg Feature'] = (hist_df.sum(axis=1) / np.array(hist_df_nonzero.sum(axis=1)))
hist_df_nonzero = hist_df.copy()
hist_df_nonzero[hist_df > 0] = True
hist_df.loc['Avg Classifier'] = (hist_df.sum(axis=0) / np.array(hist_df_nonzero.sum(axis=0)))
hist_df = hist_df.round(4)
display(hist_df)
Explanation: Step5 - Visualization
In this section we will visualize:
- Histogram of classifier performances
- Learning curves
- ROC curves
Performance histogram
First, make a histogram of classifier performance measured by F1-score.
Results for the same classifier used with different feature spaces are clustered together in the graph
Also, print the table of F1-scores and compute the averages along the x-axis and y-axis,
e.g. the average F1-score for each classifier, and the average F1-score for each feature space
End of explanation
def plot_learning_curve(df_row,X,Y):
'''Plots the learning curve of a classifier with its parameters
input - df_row: row of classifier_result
X: payload data
Y: labels
'''
#The classifier to plot learning curve for
estimator = df_row['model']
title = 'Learning curves for classifier ' + df_row.name
train_sizes = np.linspace(0.1,1.0,5)
cv = ShuffleSplit(n=len(X), n_iter=3, test_size=0.2, random_state=0)
#plot settings
plt.figure()
plt.title(title)
plt.xlabel("Training examples")
plt.ylabel("Score")
print('learning curve in process...')
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, Y, cv=cv, n_jobs=-1, train_sizes=train_sizes, verbose=0) #Change verbose=10 to print progress
print('Learning curve done!')
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
plt.show()
Explanation: Learning curves
Create learning curves for a sample of classifiers. This is to visualize how the dataset size impacts the performance
End of explanation
#plot learning curve for tfidf 1grams RandomForest
X = payloads['payload']
Y = payloads['is_malicious']
plot_learning_curve(classifier_results.iloc[0],X,Y)
#plot learning curve for count 3grams MultinomialNB
X = payloads['payload']
Y = payloads['is_malicious']
plot_learning_curve(classifier_results.iloc[6],X,Y)
#plot learning curve for custom svm
X = create_features(pd.DataFrame(payloads['payload'].copy()))
Y = payloads['is_malicious']
plot_learning_curve(classifier_results.iloc[5],X,Y)
Explanation: Three examples of learning curves from the trained classifiers.
All learning curves have an upsloping cross-validation score at the end,
which means that adding more data would potentially increase the accuracy
End of explanation
def visualize_result(classifier_list):
'''Plot the ROC curve for a list of classifiers in the same graph
input - classifier_list: a subset of classifier_results
'''
f, (ax1, ax2) = plt.subplots(1,2)
f.set_figheight(6)
f.set_figwidth(15)
#Subplot 1, ROC curve
for classifier in classifier_list:
ax1.plot(classifier['roc']['fpr'], classifier['roc']['tpr'])
ax1.scatter(1-classifier['specificity'],classifier['sensitivity'], edgecolor='k')
ax1.set_xlim([0, 1])
ax1.set_ylim([0, 1.0])
ax1.set_title('ROC curve for top3 and bottom3 classifiers')
ax1.set_xlabel('False Positive Rate (1 - Specificity)')
ax1.set_ylabel('True Positive Rate (Sensitivity)')
ax1.grid(True)
#subplot 2, ROC curve zoomed
for classifier in classifier_list:
ax2.plot(classifier['roc']['fpr'], classifier['roc']['tpr'])
ax2.scatter(1-classifier['specificity'],classifier['sensitivity'], edgecolor='k')
ax2.set_xlim([0, 0.3])
ax2.set_ylim([0.85, 1.0])
ax2.set_title('ROC curve for top3 and bottom3 classifiers (Zoomed)')
ax2.set_xlabel('False Positive Rate (1 - Specificity)')
ax2.set_ylabel('True Positive Rate (Sensitivity)')
ax2.grid(True)
#Add further zoom
left, bottom, width, height = [0.7, 0.27, 0.15, 0.15]
ax3 = f.add_axes([left, bottom, width, height])
for classifier in classifier_list:
ax3.plot(classifier['roc']['fpr'], classifier['roc']['tpr'])
ax3.scatter(1-classifier['specificity'],classifier['sensitivity'], edgecolor='k')
ax3.set_xlim([0, 0.002])
ax3.set_ylim([0.983, 1.0])
ax3.set_title('Zoomed even further')
ax3.grid(True)
plt.show()
Explanation: ROC curves
Plot ROC curves for a range of classifiers to visualize the sensitivity/specificity trade-off and the AUC
End of explanation
indices = [0,1,2, len(classifier_results)-1,len(classifier_results)-2,len(classifier_results)-3]
visualize_result([classifier_results.iloc[index] for index in indices])
Explanation: Plot ROC curves for the top3 classifiers and the bottom 3 classifiers, sorted by F1-score
Left: standard scale ROC curve
Right: zoomed-in version of the same graph, for a closer look at where the curves approach perfect classification
End of explanation
import pickle
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
def get2Grams(payload_obj):
'''Divides a string into 2-grams
Example: input - payload: "<script>"
output- ["<s","sc","cr","ri","ip","pt","t>"]
'''
payload = str(payload_obj)
ngrams = []
    for i in range(0,len(payload)-1):
ngrams.append(payload[i:i+2])
return ngrams
classifier = pickle.load( open("data/tfidf_2grams_randomforest.p", "rb"))
def injection_test(inputs):
variables = inputs.split('&')
values = [ variable.split('=')[1] for variable in variables]
print(values)
return 'MALICIOUS' if classifier.predict(values).sum() > 0 else 'NOT_MALICIOUS'
#test injection_test
display(injection_test("val1=%3Cscript%3Ekiddie"))
Explanation: Step6 - Website integration extract
This is the code needed when implementing the saved classifier in tfidf_2grams_randomforest.p on a server
End of explanation
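The server side itself is not part of this notebook; a minimal sketch of a dummy endpoint around injection_test, assuming Flask as the web framework, could look like this:
from flask import Flask, request

app = Flask(__name__)

@app.route('/check')
def check():
    # classify the raw query string, e.g. /check?val1=%3Cscript%3Ekiddie
    return injection_test(request.query_string.decode('utf-8'))

if __name__ == '__main__':
    app.run(port=5000)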
#Re-fit a single pipeline on the raw payloads to inspect which payloads it misclassifies
#(the choice of vectorizer here is an assumption - any of the vectorizers defined above would work)
vectorizer = tfidf_vectorizer_2grams
X = payloads['payload']
Y = payloads['is_malicious']
pipe = Pipeline([('vect', vectorizer), ('clf',LogisticRegression(C=10))])
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
cv = ShuffleSplit(n=len(X_train), n_iter=1, test_size=0.2, random_state=0) #DEBUG: n_iter=10
random_grid_search = RandomizedSearchCV(
pipe,
param_distributions={
'clf__C':[10]
},
cv=cv,
scoring='roc_auc',
n_iter=1,
random_state=5,
refit=True
)
random_grid_search.fit(X_train, Y_train)
#Evaluate the best model on the test data
Y_test_predicted = random_grid_search.best_estimator_.predict(X_test)
#Payloads classified incorrectly
pd.options.display.max_colwidth = 200
print('False positives')
print(X_test[(Y_test == 0) & (Y_test_predicted == 1)])
print('False negatives')
print(X_test[(Y_test == 1) & (Y_test_predicted == 0)])
Explanation: (Step7)
we can display which types of queries the classifiers failed to classify. These are interesting to examine for further work on how to improve the classifiers and the quality of the data set
End of explanation |
15,311 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-lme', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-LME
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
15,312 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This blog post is a three-part series. See part 1 for retrieving the dataset and part 2 for the calculation of similarity between test cases.
In the previous blog post, we've seen how we can calculate the structural (dis-)similarity between test cases based on the invoked production methods. We manually spotted some test cases that were very similar by searching through the whole dataset. This could get very tedious. So in this blog post, I show how we can partly automate the process of identifying groups of similar test cases as well as how we can visualize those groups. The aim is to find test cases that test the same production code but shouldn't do it.
Note
Step1: Our dataset shows the cosine distances between all unit test cases (test_method) respectively to the called production methods. That means if two test cases invoke absolutely the same production methods, the distance is 0. If there are only a few calls to the same production method, the distance is something between 0 and 1 depending on the similar number of those calls. If the test cases call completely different production methods, the distance is 1.
We've spotted some interesting test cases manually and discussed these in detail in the previous blog post. Now we want to do this a little bit more automated. Let's visualize the data first to see what we've achieved so far.
Basic Visualization
A quick way to visualize a distance matrix is using a heat map plot. Fortunately, the library seaborn provides a nice little method that we can use to visualize our matrix distance_df without any hassle. We just remove the labeling for both axes to get a better view of the data.
Step2: This heat map enables us to have a quick look at patterns of the whole data graphically. Black spots show groups of test methods that call the same production methods while light pink colored areas signal disjunct calls to the production code. The ordering of the entries of the heat map is alphabetically by the class names of the test methods and those class names tend to begin with the same prefix (like Comment or Todo).
Let's have a closer look at the upper left corner of the heat map. It shows the ten first entries of the matrix with the test classes AddCommentTest and AddSchedulingDateTest.
Step3: Discussion
The color for the distance of the test methods of the AddCommentTest class is deeply dark. That means that the tests in the test class are absolutely similar regarding their structure. If our goal would be the reduction of test code, we could think about merging some test cases together or using some kind of parametrized test execution to avoid duplications.
In contrast, the AddSchedulingDateTest shows a more diverse coloring
Step4: Next, we plot the now two-dimensional matrix with matplotlib. We colorize all data points according to the name of the test classes (= the first level of distance_df's index). We can achieve this by assigning each type to a number within 0 and 1 (relative_index) and draw a color from a predefined color spectrum (cm.hsv in this case) for each type. With this, each test class gets its own color. This enables us to quickly reason about test classes that belong together structurally.
Step5: Discussion
We now have the visual information about test methods that call similar production code in 2D.
Groups of data points of the same color (like the blue colored ones in the lower middle) show that there is a high cohesion of test methods within the test classes that test the corresponding production code.
Groups of data points with mixed colored data points (like in the upper middle) show test methods from different test classes that call the similar production code.
With this representation, we can have a look at the various groups to check if the groupings are OK or if we have to restructure some test cases because of too much similarity or confusion of responsibilities.
Clustering
Let's quickly find both types of groupings programmatically by using another machine learning technique
Step6: We plot all data points of our distance matrix together with the found members of clusters (components_) in one scatter plot.
Step7: The scatter plot confirms what we've seen as humans with our eyes
Step8: We now can take a look at various metrics like the number of classes that declare those methods (nunique) and the number of cluster members aka test methods (count).
Step9: We can also see which test classes belong to a cluster.
Step10: If we join both DataFrames, we get a nice summary of clusters with test classes we should have a deeper look into.
Step11: For a more actionable representation of our findings, let's print the results in a good old, console-like way. | Python Code:
import pandas as pd
distance_df = pd.read_excel(
"datasets/test_distance_matrix.xlsx",
index_col=[0,1],
header=[0,1])
# show only subset of data
distance_df.iloc[:5,:2]
Explanation: Introduction
This blog post is a three-part series. See part 1 for retrieving the dataset and part 2 for the calculation of similarity between test cases.
In the previous blog post, we've seen how we can calculate the structural (dis-)similarity between test cases based on the invoked production methods. We manually spotted some test cases that were very similar by searching through the whole dataset. This could get very tedious. So in this blog post, I show how we can partly automate the process of identifying groups of similar test cases as well as how we can visualize those groups. The aim is to find test cases that test the same production code but shouldn't do it.
Note: Albeit we use a kind of artificial dataset based on pure unit tests (for simplicity reasons), this data analysis is a very powerful way for spotting test duplications of long-running end-to-end-tests that were written e.g. with the Selenium browser automation framework.
Dataset
Let's first read in the data from the previous (dis-)similarity calculation with Pandas and have a look at it. Because we have a dataset with multi-level indexes and columns, we have to specify this accordingly with the index_col and header parameters.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=[10,8])
sns.heatmap(
distance_df,
xticklabels=False,
yticklabels=False)
Explanation: Our dataset shows the cosine distances between all unit test cases (test_method) with respect to the production methods they call. If two test cases invoke exactly the same production methods, the distance is 0. If they share only some calls to the same production methods, the distance lies between 0 and 1, depending on how many of those calls they have in common. If the test cases call completely different production methods, the distance is 1.
We've spotted some interesting test cases manually and discussed them in detail in the previous blog post. Now we want to do this in a more automated way. Let's visualize the data first to see what we've achieved so far.
Basic Visualization
A quick way to visualize a distance matrix is using a heat map plot. Fortunately, the library seaborn provides a nice little method that we can use to visualize our matrix distance_df without any hassle. We just remove the labeling for both axes to get a better view of the data.
End of explanation
sns.heatmap(distance_df.iloc[:10,:10])
Explanation: This heat map enables us to have a quick look at patterns of the whole data graphically. Black spots show groups of test methods that call the same production methods while light pink colored areas signal disjunct calls to the production code. The ordering of the entries of the heat map is alphabetically by the class names of the test methods and those class names tend to begin with the same prefix (like Comment or Todo).
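As an optional aside (not part of the original analysis), seaborn's clustermap can reorder rows and columns by hierarchical clustering, which makes blocks of similar tests visible even when they don't sit next to each other alphabetically:
sns.clustermap(distance_df, xticklabels=False, yticklabels=False)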
Let's have a closer look at the upper left corner of the heat map. It shows the ten first entries of the matrix with the test classes AddCommentTest and AddSchedulingDateTest.
End of explanation
from sklearn.manifold import MDS
# uses a fixed seed for random_state for reproducibility
model = MDS(dissimilarity='precomputed', random_state=10)
# this could take some seconds
distance_df_2d = model.fit_transform(distance_df)
distance_df_2d[:5]
Explanation: Discussion
The color for the distance of the test methods of the AddCommentTest class is deeply dark. That means that the tests in the test class are absolutely similar regarding their structure. If our goal would be the reduction of test code, we could think about merging some test cases together or using some kind of parametrized test execution to avoid duplications.
In contrast, the AddSchedulingDateTest shows a more diverse coloring: the test methods addDateToScheduling and addTwoDatesToScheduling are structurally almost identical (given that one test just adds another date, this makes perfect sense). The more orange-colored test failsIfSchedlindIdIsNotExisting is a candidate for further investigation (besides its typo) because it differs quite strongly from the other test cases. Maybe this test case can be moved to a more dedicated test class (for checking the correct generation of ids in our case).
Advanced Visualization
Unfortunately, the 422x422 big distance matrix distance_df isn't a good way to spot similarities very efficiently. There are areas of test similarities that don't occur along the diagonal. Fortunately, there are many ways to improve this situation.
In this blog post, we want to break down the multidimensional result into a two dimensional representation using multidimensional scaling (MDS). MDS tries to find a representation of our 422-dimensional data set into a two-dimensional space while retaining the distance information between all data points (= test methods). We can use the machine learning library scikit-learn that provides an implementation for multidimensional scaling out of the box.
Pandas' DataFrame just integrates very nicely with the MDS module of scikit-learn, too. So we just have to say that we want to use our precomputed dissimilarity matrix distance_df as measures for the distance information. We then can let MDS figure out a suitable two-dimensional representation of our dataset as well as a suitable transformation by using the fit_transform method.
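As a small, optional sanity check (not in the original post), the stress_ attribute of the fitted MDS model gives a rough idea of how much distance information was lost by squeezing the matrix into two dimensions (lower is better):
print(model.stress_)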
End of explanation
%matplotlib inline
from matplotlib import cm
import matplotlib.pyplot as plt
# brew some colors
relative_index = distance_df.index.labels[0].values() / distance_df.index.labels[0].max()
colors = [x for x in cm.hsv(relative_index)]
# plot the 2D matrix with colors
plt.figure(figsize=(8,8))
x = distance_df_2d[:,0]
y = distance_df_2d[:,1]
plt.scatter(x, y, c=colors)
Explanation: Next, we plot the now two-dimensional matrix with matplotlib. We colorize all data points according to the name of the test classes (= the first level of distance_df's index). We can achieve this by assigning each type to a number within 0 and 1 (relative_index) and draw a color from a predefined color spectrum (cm.hsv in this case) for each type. With this, each test class gets its own color. This enables us to quickly reason about test classes that belong together structurally.
End of explanation
from sklearn.cluster import DBSCAN
dbscan = DBSCAN(eps=0.08, min_samples=10)
clustering_results = dbscan.fit(distance_df_2d)
clustering_results
Explanation: Discussion
We now have the visual information about test methods that call similar production code in 2D.
Groups of data points of the same color (like the blue colored ones in the lower middle) show that there is a high cohesion of test methods within the test classes that test the corresponding production code.
Groups of data points with mixed colored data points (like in the upper middle) show test methods from different test classes that call the similar production code.
With this representation, we can have a look at the various groups to check if the groupings are OK or if we have to restructure some test cases because of too much similarity or confusion of responsibilities.
Clustering
Let's quickly find both types of groupings programmatically by using another machine learning technique: density-based clustering! With this technique, we can automatically find data points that are very close together. Again, we can use scikit-learn with its DBSCAN implementation to identify data points that are close together. We plot this information into the plot above to visualize dense groups of data.
For the parameters eps (~ the maximum distance between two data points for them to count as neighbors) and min_samples (~ the minimum number of neighboring data points needed to form a group), we choose the right values in an iterative manner until we've got the groupings that we would otherwise have identified visually.
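A minimal sketch of such an iteration (illustrative only, the value ranges are arbitrary): vary eps and compare the number of clusters and noise points that DBSCAN reports.
for eps in [0.04, 0.08, 0.16]:
    labels = DBSCAN(eps=eps, min_samples=10).fit(distance_df_2d).labels_
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(eps, n_clusters, list(labels).count(-1))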
End of explanation
plt.figure(figsize=(8,8))
cluster_members = clustering_results.components_
# plot all data points
plt.scatter(x, y, c='k', alpha=0.2)
# plot cluster members
plt.scatter(
cluster_members[:,0],
cluster_members[:,1],
c='r', s=100, alpha=0.1)
Explanation: We plot all data points of our distance matrix together with the found members of clusters (components_) in one scatter plot.
End of explanation
clustered_tests = pd.DataFrame(index=distance_df.index)
clustered_tests['cluster'] = clustering_results.labels_
cohesive_tests = clustered_tests[clustered_tests.cluster != -1]
cohesive_tests.head()
Explanation: The scatter plot confirms what we've seen as humans with our eyes: there are some groupings that belong together by forming a dense cluster. We can access these data points e.g. by their cluster labels labels_ and throw away all non-cluster data points whose label value is -1:
End of explanation
test_methods_and_classes_per_cluster = \
cohesive_tests.reset_index() \
.groupby("cluster").test_type \
.agg({"nunique", "count"})
test_methods_and_classes_per_cluster.head()
Explanation: We now can take a look at various metrics like the number of classes that declare those methods (nunique) and the number of cluster members aka test methods (count).
End of explanation
test_classes = cohesive_tests.reset_index().groupby("cluster").test_type.apply(set)
test_classes
Explanation: We can also see which test classes belong to a cluster.
End of explanation
test_analysis_result = test_methods_and_classes_per_cluster.join(test_classes)
test_analysis_result
Explanation: If we join both DataFrames, we get a nice summary of clusters with test classes we should have a deeper look into.
End of explanation
def print_results(series):
print(
"Cluster {} contains {} test methods in {} test classes."\
.format(series.name, series['count'], series['nunique']))
print(" The test classes are:")
for test_class in series['test_type']:
print(" -{}".format(test_class))
print("-"*60)
test_analysis_result.apply(print_results, axis=1);
Explanation: For a more actionable representation of our findings, let's print the results in a good old, console-like way.
End of explanation |
15,313 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Authorization & securitySchemes
OAS allows to specify authorization policies in the spec,
under components.securitySchemes.
Between supported security schemes we have
Step3: Add user support in get_echo
Operations like get_echo get user information via the user parameter.
Modify get_echo in api.py such that
Step4: Test the my_auth implementation
Run the spec in the terminal
with the usual
connexion run /code/notebooks/oas3/ex-06-01-auth-ok.yaml
Step6: Bearer token & JWT Security
Bearer tokens are supported | Python Code:
# Test here the my_auth implementation.
def my_auth(username, password,required_scopes=None):
A dummy authentication function.
:params: username, the username
:params: password, the password
:params: scopes, the scope
:returns: `{"sub": username, "scope": ""}` on success,
None on failure
raise NotImplementedError
Explanation: Authorization & securitySchemes
OAS allows you to specify authorization policies in the spec,
under components.securitySchemes.
Between supported security schemes we have:
oauth and oidc
basic auth
JWT
mutualTLS ( in OAS 3.1 )
When passing credentials in HTTP headers or payload you MUST use TLS
connexion can reference a python function via x-basicInfoFunc.
Authenticated operations get user information via the user parameter.
OAS3: basic auth
Here we are defining the myBasicAuth security scheme.
components:
securitySchemes:
myBasicAuth:
type: http
scheme: basic
x-basicInfoFunc: security.my_auth
We can then reference myBasicAuth in one or more paths
paths:
/echo
get:
security:
- myBasicAuth: []
...
operationId: api.get_echo
...
OAS3 Exercise: add securitySchemes
Modify the OAS3 spec in ex-06-01-auth.yaml and:
add a myBasicAuth security scheme like the one above;
reference myBasicAuth in get /echo path;
validate the spec in your swagger editor and
check what changed in the swagger-ui
Implement my_auth
Implement the my_auth function in security.py so that:
when username == password the user is authenticated
Use the cell below to implement it,
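One possible minimal sketch (it only implements the dummy username == password rule described above):
def my_auth(username, password, required_scopes=None):
    # dummy rule for the exercise: authenticate only when username equals password
    if username and username == password:
        return {"sub": username, "scope": ""}
    return None  # connexion treats None as an authentication failure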
End of explanation
def get_echo(tz, user=None):
:param: tz, the timezone
:param: user, the authenticated user passed by `connexion`
raise NotImplementedError
Explanation: Add user support in get_echo
Operations like get_echo get user information via the user parameter.
Modify get_echo in api.py such that:
unauthenticated requests return a 401 HTTP status
the authenticated reply contains user information, e.g. {"timestamp": "2019-01-01T21:04:00Z", "user": "jon"}
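A possible sketch (the tz handling is left out for brevity; the exact timestamp format is an assumption based on the example above):
from datetime import datetime, timezone
def get_echo(tz, user=None):
    if user is None:
        return {"detail": "Unauthorized"}, 401
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return {"timestamp": now, "user": user}, 200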
End of explanation
render_markdown(
f'''
Play a bit with
[Swagger UI]({api_server_url("ui")})
''')
# Try to curl some requests!
!curl http://localhost:5000/datetime/v1/echo -kv
Explanation: Test the my_auth implementation
Run the spec in the terminal
with the usual
connexion run /code/notebooks/oas3/ex-06-01-auth-ok.yaml
End of explanation
def decode_token(token):
:param: token, a generic token
:return: a json object compliant with RFC7662 OAUTH2
    token_is_valid = bool(token)  # placeholder check: a real implementation must validate the token
    if token_is_valid:
return {
'iss': 'http://api.example.com',
'iat': 1561296569,
'exp': 1561296571,
'sub': 'ioggstream',
'scope': ['local']
}
Explanation: Bearer token & JWT Security
Bearer tokens are supported:
in OAS via the scheme: bearer
in connexion via the x-bearerInfoFunc
```
components:
securitySchemes:
jwt:
type: http
scheme: bearer
bearerFormat: JWT
x-bearerInfoFunc: security.decode_token
```
Once you send the header
Authorization: Bearer token
the token string will be passed to a function like the following
NOTE: the bearerFormat is a free identifier of the token format and the associated syntax may not be enforced by the spec.
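A hedged sketch of such a function using the PyJWT library (the shared secret, the algorithm and the helper name are assumptions, not part of the original spec):
import jwt  # PyJWT
JWT_SECRET = "change-me"
def decode_jwt_token(token):
    try:
        return jwt.decode(token, JWT_SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return None  # connexion treats None as an invalid token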
End of explanation |
15,314 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Train embeddings on TPU using Autoencoder
Overview
This colab explores how to train autoencoders on a TPU device.
For this colab, consider the following scenario
Step2: Get data
Step3: Function to visualize our images and pick the first image from the test set
Step4: The first image from the test set is the number 7
Step5: MNIST setup
There are 10 classes (one for each digit) and each image is 28 by 28 pixels
Step6: Original model
Here is a contrived example where the training happens only on the corners of the MNIST image.
Suppose that your original model, the fully-connected one layer network, was too computationally heavy, in terms of resources, and thus you could only afford to train on parts of the images. Instead of training on 28 by 28 pixels (784 pixels), you train on 14 by 14 pixels (196 pixels). This colab will later show that just by adding 49 more pixels to each training example, the size of each embedding, accuracy can be significantly increased.
This way you introduce minimal changes to an original model while gaining benefits from a heavy computational task that you can offload to a TPU.
Step7: The first image corner from the test set
Step8: Create a model with one fully-connected layer
Step9: Train and evaluate the fully-connected one layer model on CPU
As expected, the model performs poorly, but it does train fairly quickly. Expected accuracy is 65%.
Step10: Create an autoencoder and make sure to get back an encoder as well
Step11: Train the autoencoder on TPU
This is a computationally resource expensive operation that can be offloaded to the TPU.
Step12: Produce image embeddings
Now that the autoencoder is trained, you can use the encoder part to produce image embeddings.
Step13: Produce image reconstructions
Let's visually see the quality of our autoencoder to see how the number 7 from above is reconstructed.
Step14: Reconstructed number 7
This looks like the number 7 so now you can be confident in the quality of our embeddings. The autoencoder learned to compress and uncompress information accurately.
Step15: Check the original image
Remember, the image in the previous section is the reconstructed image. Compare it to the original image, as shown here.
Step16: Examine the embedding for the number 7
Step17: Augment the corners dataset
The following code augments the corners dataset with embeddings trained on TPU.
Step18: Retrain the original model
At this point, you can train the original model using the augmented dataset. You should verify that the TPU embeddings augmented model works better than without embeddings. Expected accuracy is 87%. | Python Code:
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import os
assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from absl import logging
logging.set_verbosity(logging.ERROR)
# Initialize TPU Strategy.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
Explanation: Train embeddings on TPU using Autoencoder
Overview
This colab explores how to train autoencoders on a TPU device.
For this colab, consider the following scenario: you have an image classification model that you want to improve by adding some additional features. The features that you can add to the model could be image embeddings that can be separately trained on a TPU.
This example uses a fully-connected one layer model as the model that you want to make better with additional features trained on a TPU.
Learning objectives
In this Colab, you will learn how to
* Build a fully-connected one layer model to classify images
* Build an autoencoder and train on those images, in an unsupervised fashion, to produce image embeddings
* Retrain a fully-connected one layer model with additional features, the embeddings
Check that you have enabled TPUs in this notebook
End of explanation
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
y_train, y_test = y_train.astype(np.int32), y_test.astype(np.int32)
Explanation: Get data
End of explanation
def show_img(img):
plt.figure()
plt.imshow(img)
plt.grid(False)
plt.show()
img = 0
Explanation: Function to visualize our images and pick the first image from the test set
End of explanation
show_img(x_test[img].reshape(28, 28))
Explanation: The first image from the test set is the number 7
End of explanation
NUM_CLASSES = 10
# input image dimensions
IMG_ROWS, IMG_COLS = 28, 28
x_train = x_train.reshape(x_train.shape[0], IMG_ROWS, IMG_COLS, 1)
x_test = x_test.reshape(x_test.shape[0], IMG_ROWS, IMG_COLS, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train = x_train / 255.0
x_test = x_test / 255.0
Explanation: MNIST setup
There are 10 classes (one for each digit) and each image is 28 by 28 pixels
End of explanation
x_train_corners = x_train[:, :14, :14, :]
x_test_corners = x_test[:, :14, :14, :]
Explanation: Original model
Here is a contrived example where the training happens only on the corners of the MNIST image.
Suppose that your original model, the fully-connected one layer network, was too computationally heavy, in terms of resources, and thus you could only afford to train on parts of the images. Instead of training on 28 by 28 pixels (784 pixels), you train on 14 by 14 pixels (196 pixels). This colab will later show that just by adding 49 more pixels to each training example, the size of each embedding, accuracy can be significantly increased.
This way you introduce minimal changes to an original model while gaining benefits from a heavy computational task that you can offload to a TPU.
End of explanation
show_img(x_test_corners[img].reshape(14, 14))
Explanation: The first image corner from the test set
End of explanation
def get_model(input_shape):
ip = tf.keras.layers.Input(shape=input_shape)
x = tf.keras.layers.Flatten()(ip)
x = tf.keras.layers.Dense(NUM_CLASSES, activation='sigmoid')(x)
model = tf.keras.models.Model(ip, x)
return model
with strategy.scope():
model0 = get_model(x_train_corners[0].shape)
model0.compile(
optimizer=tf.optimizers.SGD(learning_rate=0.05),
loss=tf.losses.SparseCategoricalCrossentropy(),
metrics=[tf.metrics.SparseCategoricalAccuracy()])
Explanation: Create a model with one fully-connected layer
End of explanation
model0.fit(x_train_corners, y_train, epochs=3, batch_size=128)
model0.evaluate(x_test_corners, y_test)
Explanation: Train and evaluate the fully-connected one layer model on CPU
As expected, the model performs poorly, but it does train fairly quickly. Expected accuracy is 65%.
End of explanation
def get_autoencoder_and_encoder(input_shape):
ip = tf.keras.layers.Input(shape=input_shape)
x = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', padding='same')(ip)
x = tf.keras.layers.MaxPooling2D((2, 2), padding='same')(x)
x = tf.keras.layers.Conv2D(1, (3, 3), activation='relu', padding='same')(x)
encoded = tf.keras.layers.MaxPooling2D((2, 2), padding='same')(x)
x = tf.keras.layers.Conv2DTranspose(1, (3, 3), activation='relu', strides=2, padding='same')(encoded)
x = tf.keras.layers.Conv2DTranspose(32, (3, 3), activation='relu', strides=2, padding='same')(x)
decoded = tf.keras.layers.Conv2DTranspose(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = tf.keras.models.Model(ip, outputs=decoded)
encoder = tf.keras.models.Model(ip, encoded)
return autoencoder, encoder
Explanation: Create an autoencoder and make sure to get back an encoder as well
End of explanation
tf.keras.backend.clear_session()
with strategy.scope():
autoencoder, encoder = get_autoencoder_and_encoder(x_train[0].shape)
autoencoder.compile(
optimizer=tf.optimizers.Adam(),
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.metrics.BinaryAccuracy()])
autoencoder.fit(
x_train,
x_train,
batch_size=128,
epochs=3,
steps_per_epoch=468,
validation_data=(x_test, x_test))
Explanation: Train the autoencoder on TPU
This is a computationally resource expensive operation that can be offloaded to the TPU.
End of explanation
x_train_embeddings = encoder.predict(x_train)
x_test_embeddings = encoder.predict(x_test)
Explanation: Produce image embeddings
Now that the autoencoder is trained, you can use the encoder part to produce image embeddings.
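Optionally (purely illustrative, the file names are placeholders), the embeddings can be persisted so the downstream model can reuse them without re-running the encoder:
np.save('x_train_embeddings.npy', x_train_embeddings)
np.save('x_test_embeddings.npy', x_test_embeddings)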
End of explanation
x_test_hat = autoencoder.predict(x_test[:8])
Explanation: Produce image reconstructions
Let's visually see the quality of our autoencoder to see how the number 7 from above is reconstructed.
End of explanation
show_img(x_test_hat[img].reshape(28, 28))
Explanation: Reconstructed number 7
This looks like the number 7 so now you can be confident in the quality of our embeddings. The autoencoder learned to compress and uncompress information accurately.
End of explanation
show_img(x_test[0].reshape(28, 28))
Explanation: Check the original image
Remember, the image in the previous section is the reconstructed image. Compare it to the original image, as shown here.
End of explanation
show_img(x_test_embeddings[0].reshape(7, 7))
Explanation: Examine the embedding for the number 7
End of explanation
x_train_augmented = np.concatenate([x_train_corners.reshape(60000, 14*14, 1), x_train_embeddings.reshape(60000, 7*7, 1)], axis=1)
x_test_augmented = np.concatenate([x_test_corners.reshape(10000, 14*14, 1), x_test_embeddings.reshape(10000, 7*7, 1)], axis=1)
Explanation: Augment the corners dataset
The following code augments the corners dataset with embeddings trained on TPU.
End of explanation
with strategy.scope():
model1 = get_model(x_train_augmented[0].shape)
model1.compile(
optimizer=tf.optimizers.SGD(learning_rate=0.06),
loss=tf.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
model1.fit(x_train_augmented, y_train, epochs=3, batch_size=128)
model1.evaluate(x_test_augmented, y_test)
Explanation: Retrain the original model
At this point, you can train the original model using the augmented dataset. You should verify that the TPU embeddings augmented model works better than without embeddings. Expected accuracy is 87%.
End of explanation |
15,315 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BI course
Importing the extension packages
Step1: Downloading the Romanian population data from INSSE
Step2: Downloading Wikipedia tables
Step3: If we get an html5lib not found error message, open a console (Command Prompt) and install it with the conda install html5lib or pip install html5lib command. After that, Jupyter has to be restarted.
Step4: From the list of tables we only need the 5th one (index 4, since indexing starts from 0). Save it into the gf variable, whose type will be a pandas dataframe.
Step5: We only need rows 1 through 4; the rest is dropped.
Then we set row 0 as the index. Once that is done, we drop it from the rows as well.
Step6: Transpose the table
Step7: We save the table contents in a JSON format that can be loaded into D3plus.
We achieve this by iterating over the table values row by row and then column by column. Watch out for the Hungarian characters, which is why converting to Unicode is important.
The values stored in the table are strings; we convert them to integers, taking the format of positive/negative values into account.
Step8: The result
Step9: Elmentjük a fájlt | Python Code:
import pandas as pd
import html5lib
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: BI course
Importing the extension packages:
End of explanation
#https://www.csaladen.es/present/sapientia1/exportPivot_POP105A.csv
csv_path='exportPivot_POP105A.csv' # path to your local copy of the CSV file
df=pd.read_csv(csv_path)
df.head()
Explanation: Downloading the Romanian population data from INSSE:
We use the path of a previously downloaded file.
End of explanation
wiki_path="http://hu.wikipedia.org/wiki/Csíkszereda"
Explanation: Downloading Wikipedia tables
End of explanation
df2=pd.read_html(wiki_path)
# in case of a unicode error, use the percent-encoded URL
df2=pd.read_html('https://hu.wikipedia.org/wiki/Cs%EDkszereda')
df2[4]
Explanation: If we get an html5lib not found error message, open a console (Command Prompt) and install it with the conda install html5lib or pip install html5lib command. After that, Jupyter has to be restarted.
End of explanation
gf=df2[3]
gf
Explanation: From the list of tables we only need the 5th one (index 4, since indexing starts from 0). Save it into the gf variable, whose type will be a pandas dataframe.
End of explanation
gf[1:4]
ef=gf[1:4]
ef.columns=ef.loc[ef.index[0]]
ef=ef.drop(1)
ef=ef.set_index(ef.columns[0])
ef=ef.drop(u'Év',axis=1)
ef
Explanation: We only need rows 1 through 4; the rest is dropped.
Then we set row 0 as the index. Once that is done, we drop it from the rows as well.
End of explanation
rf=ef.T
rf.head(2)
Explanation: Transpose the table:
End of explanation
#uj=[[] for i in range(len(rf.columns))]
d3=[]
ujnevek=['ujmax','ujmin']
for k in range(len(rf.index)):
i=rf.index[k]
seged={}
for j in range(len(rf.loc[i])):
uc=rf.loc[i][j]
if ',' in uc:
ertek=-int(uc[1:-2])
else:
ertek=int(uc[0:-1])
#uj[j].append(ertek)
seged[ujnevek[j]]=ertek
seged["honap"]=rf.index[k]
seged["honap2"]=k+1
d3.append(seged)
Explanation: We save the table contents in a JSON format that can be loaded into D3plus.
We achieve this by iterating over the table values row by row and then column by column. Watch out for the Hungarian characters, which is why converting to Unicode is important.
The values stored in the table are strings; we convert them to integers, taking the format of positive/negative values into account.
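As an optional check (illustrative only), the record list can be loaded back into a DataFrame to eyeball the converted values:
check_df = pd.DataFrame(d3)
check_df.head()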
End of explanation
d3
Explanation: The result:
End of explanation
import json
open('uj.json','w').write(json.dumps(d3))
Explanation: Save the file:
End of explanation |
15,316 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook is based on the 2016 AAS Python Workshop tutorial on tables, available on GitHub, though it has been modified. Some of the pandas stuff was borrowed from a notebook put together by Jake Vanderplas and has been modified to suit the purposes of this course, including expansion/modification of explanations and additional exercises. Source and license info for the original is on GitHub</i></small>
Names
Step1: 1. Astropy Tables
The astropy Table class provides an extension of NumPy structured arrays for storing and manipulating heterogeneous tables of data. A few notable features of this package are
Step2: 1.2 Displaying Tables in Notebook
In IPython notebook, showing a table will produce a nice HTML representation of the table
Step3: If you did the same in a terminal session you get a different view that isn't as pretty but does give a bit more information about the table
Step4: To get the table column names and data types using the colnames and dtype properties
Step5: Astropy 1.1 and later provides a show_in_notebook() method that allows more interactive exploration of tables. It can be especially handy for large tables.
Step6: 1.3 Indexing Tables
We can access the columns and rows in a way similar to accessing dictionary entries (with dict[key]), but here the syntax is table[column]. Table objects can also be indexed by row or column, and the column index can be swapped with column name.
Step7: 1.4 Modifying Tables
Once the table exists with defined columns there are a number of ways to modify the table in place. These are fully documented in the section Modifying a Table.
To give a couple of simple examples, you can add rows with the add_row() method or add new columns using dict-style assignment
Step8: <div class=sidebar>
### Sidebar - Formatting Output
Notice that the `logflux` column really has too many output digits given the precision of the input values. We can fix this by setting the format using normal Python formatting syntax
Step9: 1.5 Converting Tables to Numpy
Sometimes you may not want or be able to use a Table object and prefer to work with a plain numpy array (e.g., if you read in data and then want to manipulate it). This is easily done by passing the table to the np.array() constructor.
This makes a copy of the data. If you have a huge table and don't want to waste memory, supply copy=False to the constructor, but be warned that changing the output numpy array will change the original table.
Step10: 1.6 Masking Tables
One of the most powerful concepts in table manipulation is using boolean selection masks to select only table entries that meet certain criteria.
Step11: 1.7 High-Level Table Operations
So far we've just worked with one table at a time and viewed that table as a monolithic entity. Astropy also supports high-level Table operations that manipulate multiple tables or view one table as a collection of sub-tables (groups).
Documentation | Description
---------------------------------------------------------------------------------------- |-----------------------------------------
Grouped operations | Group tables and columns by keys
Stack vertically | Concatenate input tables along rows
Stack horizontally | Concatenate input tables along columns
Join | Database-style join of two tables
Here we'll just introduce the join operation but go into more detail on the others in the exercises.
Step12: Now recall our original table t
Step13: Now say that we also got some additional flux values from a different reference for a different, but overlapping sample of sources
Step14: Now we can get a master table of flux measurements which are joined by matching the values in the name column. This includes every row from each of the two tables, which is known as an outer join.
Step15: Alternately we could choose to keep only rows where both tables had a valid measurement using an inner join
Step16: 1.8 Writing and Reading Tabular Data
You can write data using the Table.write() method
Step17: You can read data using the Table.read() method
Step18: Some formats, such as FITS and HDF5, are automatically identified by file extension while most others will require format to be explicitly provided. A number of common ascii formats are supported such as IPAC, sextractor, daophot, and CSV. Refer to the documentation for a full listing.
Step19: 2. Pandas
Although astropy Tables has some nice functionality that Pandas doesn't and is also a simpler, easier to use package, Pandas is the more versatile and commonly used table manipulator for Python, so I recommend you use it wherever possible.
Astropy 1.1 includes new to_pandas() and from_pandas() methods that facilitate conversion to/from pandas DataFrame objects. There are a few caveats in making these conversions
Step20: Data frames are defined like dictionaries with a column header/label (similar to a key) and a list of entries.
Step21: think of DataFrames as numpy arrays plus some extra pointers to their columns and the indices of the row entries that make them amenable for tables
Step22: pandas has built-in functions for reading all kinds of types of data. In the cell below, hit tab once after the r to see all of the read functions. In our case here, read_table will work fine
Step23: we can also convert the table that we already made with Astropy Tables to pandas dataframe format
Step24: And the opposite operation (conversion from pandas dataframe to astropy table) works as well
Step25: Unlike astropy Tables, pandas can also read excel spreadsheets
Step26: pandas dataframe columns can be called as python series using the syntax dataframe.columnlabel, as below, which is why it usually makes sense to define a column name/label that is short and has no spaces
Step27: this calling method allows you to use some useful built-in functions as well
Step28: To pull up individual rows or entries, the fact that pandas dataframes always print the indices of rows off of their lefthand side helps. You index dataframes with .loc (if using column name) or .iloc (if using column index), as below
Step29: you can always check that the column you're indexing is the one you want as below
Step30: Although indices are nice for reference, sometimes you might want the row labels to be more descriptive. What is the line below doing?
Step31: You can do lots more with this as well, including logical operations to parse the table
Step32: 3. Exercises
In these exercises, you will be dealing with two tables of information, described below. We'll be doing lots of manipulation of pandas dataframes in Labs 9, 11 and 13, so these exercises focus mostly on special functions of Astropy tables, but you should, wherever possible, try to figure out how to do the same thing with a Pandas dataframe.
master_sources
Each distinct X-ray source identified on the sky is represented in the catalog by a single "master source" entry and one or more "source observation" entries, one for each observation in which the source has been detected. The master source entry records the best estimates of the properties of a source, based on the data extracted from the set of observations in which the source has been detected. The subset of fields in our exercise table file are
Step33: <div class=hw>
Get a list of the column names for each table.
*Hint
Step34: <div class=hw>
Find and print the length of each table.
Step35: <div class=hw>
Find the column datatypes for each table (also a built-in method).
Step36: <div class=hw>
Display all the rows of the `master_sources` table using its `pprint()` method (astropy tables only).
Step37: <div class=hw>
### Exercise 2 - Modifying tables
-----------------------
Remove the `obi` column from the `obs_sources` table.
Step38: <div class=hw>
The `gti_obs` column name is a bit obscure (GTI is "good time interval", but it really just means "date"). Rename the `gti_obs` column to `obs_date`.
Step39: <div class=hw>
The source count column tells you how many photons were collected by the detector, but it would also be nice to have a count rate (number of photons per second). Add a new column `src_rate_aper_b` which is the source counts divided by observation duration in sec.
Step40: <div class=hw>
### Exercise 3 - Visualizing Data
Use the matplotlib [`hist()`]( http
Step41: <div class=hw>
Let's now remove any sources that we think might not be associated with the source we pointed at ("background/foreground objects" - things that appear near the location of our source in the sky, but that aren't physically associated with it and are actually either much closer or much farther away). To remove these potentially unassociated objects, make the same plot but using only observations where the source was within 4 arcminutes of the place where the telescope was pointed. *HINT*
Step42: <div class=hw>
### Exercise 4 - Join the master_sources and obs_sources tables
The `master_sources` and `obs_sources` tables share a common `msid` column. What we now want is to join the master list of sky positions (RA and Dec columns - essentially celestial longitude and latitude) and source names with the individual observations table.
Use the [table.join()](http
Step43: <div class=hw>
Is the length of the new `sources` the same as `obs_sources`? What happened? Use specific examples in your explanations.
Step44: insert explanation here
<div class=hw>
### Exercise 5 - Grouped properties of `sources`
---------------------------
When using tables, we may occasionally wish to group entries based on various properties, which is done using the [`group_by()`](http
Step45: <div class=hw>
The new `g_sources` table is just a regular table with all the `sources` in a particular order. The attribute `g_sources.groups` has also been created and is an object that provides access to the `msid` sub-groups. You can access the $i^{th}$ group with `g_sources.groups[i]`.
In addition the `g_sources.groups.indices` attribute is an array with the indicies of the group boundaries.
Using `np.diff()` find the number of repeat observations of each master sources. *HINT*
Step46: <div class=hw>
Print the 50th group and note which columns are the same for all group members and which are different. Does this make sense? In these few observations how many different target names were provided by observers?
Step47: <div class=hw>
### Exercise 6 - Aggregation
----------------------
The real power of grouping comes in the ability to create aggregate values for each of the groups, for instance the mean flux for each unique source. This is done with the [`aggregate()`](http
Step48: <div class=hw>
Notice that aggregation cannot form a mean for certain columns and these are dropped from the output. Use the `join()` function to restore the `master_sources` information to `g_sources_mean`. | Python Code:
from astropy.table import Table
from numpy import *
import matplotlib
matplotlib.use('nbagg') # required for interactive plotting
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: <small><i>This notebook is based on the 2016 AAS Python Workshop tutorial on tables, available on GitHub, though it has been modified. Some of the pandas stuff was borrowed from a notebook put together by Jake Vanderplas and has been modified to suit the purposes of this course, including expansion/modification of explanations and additional exercises. Source and license info for the original is on GitHub</i></small>
Names: [Insert Your Names Here]
Lab 7 - Python Tables
Lab 7 Contents
Astropy Tables
Constructing Tables
Displaying Tables in Notebook
Indexing Tables
Modifying Tables
Converting Tables to Numpy
Masking Tables
High-level Table Operations
Reading and Writing Tabular Data
Pandas
End of explanation
t = Table()
t['name'] = ['larry', 'curly', 'moe', 'shemp']
t['flux'] = [1.2, 2.2, 3.1, 4.3]
Explanation: 1. Astropy Tables
The astropy Table class provides an extension of NumPy structured arrays for storing and manipulating heterogeneous tables of data. A few notable features of this package are:
Initialize a table from a wide variety of input data structures and types.
Modify a table by adding or removing columns, changing column names, or adding new rows of data.
Handle tables containing missing values.
Include table and column metadata as flexible data structures.
Specify a description, units and output formatting for columns.
Perform operations like database joins, concatenation, and grouping.
Manipulate multidimensional columns.
Methods for Reading and writing Table objects to files
Integration with Astropy Units and Quantities
For more information about the features and functionalities of Astropy tables, you can read the
astropy.table docs.
<div class=sidebar>
### Sidebar - Tables vs. Pandas DataFrames
The [Pandas](http://pandas.pydata.org/pandas-docs/stable/) package provides a powerful, high-performance table object via the [DataFrame](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html#pandas.DataFrame) class. Pandas has a few downsides, including its lack of support for multidimensional table columns, but Pandas is the generally-used Python tables package, and so we will use it here as well. Pandas DataFrame functionality is very complementary to astropy Tables so astropy 1.1 and later provides interfaces for converting between astropy Tables and DataFrames. If you wish to learn more about Pandas, there are many resources available on-line. A good starting point is the main tutorials site at http://pandas.pydata.org/pandas-docs/stable/tutorials.html.
### 1.1 Constructing Tables
There is a great deal of flexibility in the way that a table can be initially constructed:
- Read an existing table from a file or web URL
- Add columns of data one by one
- Add rows of data one by one
- From an existing data structure in memory:
- List of data columns
- Dict of data columns
- List of row dicts
- Numpy homogeneous array or structured array
- List of row records
See the documentation section on [Constructing a table](http://astropy.readthedocs.org/en/stable/table/construct_table.html) for the gory details and plenty of examples.
End of explanation
t
Explanation: 1.2 Displaying Tables in Notebook
In IPython notebook, showing a table will produce a nice HTML representation of the table:
End of explanation
print(t)
##similar, but nicer when there are lots and lots of rows/columns
t.pprint()
Explanation: If you did the same in a terminal session you get a different view that isn't as pretty but does give a bit more information about the table:
>>> t
<Table rows=4 names=('name','flux')>
array([('source 1', 1.2), ('source 2', 2.2), ('source 3', 3.1),
('source 4', 4.3)],
dtype=[('name', 'S8'), ('flux', '<f8')])
To get a plain view which is the same in notebook and terminal use print():
End of explanation
t.colnames
t.dtype
Explanation: To get the table column names and data types using the colnames and dtype properties:
End of explanation
t.show_in_notebook()
Explanation: Astropy 1.1 and later provides a show_in_notebook() method that allows more interactive exploration of tables. It can be especially handy for large tables.
End of explanation
t['flux'] # Flux column (notice meta attributes)
t['flux'][1] # Row 1 of flux column
t[1]['flux'] # equivalent!
t[1][1] # also equivalent. Which is the column index? Play with this to find out.
t[1] # one index = row number
t[1:3] # 2nd and 3rd rows in a new table (remember that the a:b indexing is not inclusive of b)
t[1:3]['flux']
t[[1, 3]] # the second and fourth rows of t in a new table
Explanation: 1.3 Indexing Tables
We can access the columns and rows in a way similar to accessing dictionary entries (with dict[key]), but here the syntax is table[column]. Table objects can also be indexed by row or column, and the column index can be swapped with column name.
End of explanation
t.add_row(('joe', 10.1)) # Add a new source at the end
t['logflux'] = log10(t['flux']) # Compute the log10 of the flux
t
Explanation: 1.4 Modifying Tables
Once the table exists with defined columns there are a number of ways to modify the table in place. These are fully documented in the section Modifying a Table.
To give a couple of simple examples, you can add rows with the add_row() method or add new columns using dict-style assignment:
End of explanation
t['flux'].format = '%.2f'
t['logflux'].format = '%.2f'
t
print('%11.2f'% 100000)
print('%8.2f'% 100000)
t['flux'].format = '%5.2e'
t['logflux'].format = '%.2E'
t
print('%5.2e'% 0.0005862341)
print('%4.2E'% 246001)
Explanation: <div class=sidebar>
### Sidebar - Formatting Output
Notice that the `logflux` column really has too many output digits given the precision of the input values. We can fix this by setting the format using normal Python formatting syntax:
The format operator in python acts on an object and reformats it according to your specifications. The syntax is always object.format = '%format_string', where format_string tells it how to format the output. For now let's just deal with two of the more useful types:
***Float Formatting***
Floats are denoted with '%A.Bf', where A is the number of total characters you want, including the decimal point, and B is the number of characters that you want after the decimal. The f tells it that you would like the output as a float. If you don't specify A, python will keep as many characters as are currently to the left of the decimal point. If you specify more characters to the left of the decimal than are there, python will usually print the extra space as blank characters. If you want it to print leading zeroes instead, use the format '%0A.Bf'. This is not the case in tables though, where white space and leading zeroes will be ignored.
***Scientific Notation Formatting***
Sometimes in tables, we will be dealing with very large numbers. Exponential formatting is similar to float formatting in that you are formatting the float that comes before the "e" (meaning 10 to some power). Numbers in scientific notation print as X.YeNN where NN is the power of the exponent. The formatting string for floating point exponentials looks like "%A.Be" or "%A.BE", where e and E print lowercase and capital Es, respectively.
Should you need it in the future, [here](http://www.python-course.eu/python3_formatted_output.php) is a more detailed reference regarding string formatting.
Also useful is printing numbers in a given format, for which you use the syntax print('%format code'% object), as demonstrated below. Play around with the cells below to make sure you understand the subtleties here before moving on.
End of explanation
array(t)
array(t['flux'])
Explanation: 1.5 Converting Tables to Numpy
Sometimes you may not want or be able to use a Table object and prefer to work with a plain numpy array (e.g., if you read in data and then want to manipulate it). This is easily done by passing the table to the np.array() constructor.
This makes a copy of the data. If you have a huge table and don't want to waste memory, supply copy=False to the constructor, but be warned that changing the output numpy array will change the original table.
End of explanation
mask = t['flux'] > 3.0 # Define boolean (True/False) mask for all flux values > 3
mask
t[mask] # Create a new table with only the "True" rows
t2 = Table([['x', 'y', 'z'],
[1.1, 2.2, 3.3]],
names=['name', 'value'],
masked=True)
t2
t2['value'].mask = [False, True, False]
print(t2)
t2['value'].fill_value = -99
print(t2.filled())
Explanation: 1.6 Masking Tables
One of the most powerful concepts in table manipulation is using boolean selection masks to select only table entries that meet certain criteria.
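As a further illustrative example (not in the original lab text), boolean masks can be combined element-wise with & (and) and | (or):
t[(t['flux'] > 2.0) & (t['flux'] < 5.0)]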
End of explanation
from astropy.table import join
Explanation: 1.7 High-Level Table Operations
So far we've just worked with one table at a time and viewed that table as a monolithic entity. Astropy also supports high-level Table operations that manipulate multiple tables or view one table as a collection of sub-tables (groups).
Documentation | Description
---------------------------------------------------------------------------------------- |-----------------------------------------
Grouped operations | Group tables and columns by keys
Stack vertically | Concatenate input tables along rows
Stack horizontally | Concatenate input tables along columns
Join | Database-style join of two tables
Here we'll just introduce the join operation but go into more detail on the others in the exercises.
End of explanation
t
Explanation: Now recall our original table t:
End of explanation
t2 = Table()
t2['name'] = ['larry', 'moe', 'groucho']
t2['flux2'] = [1.4, 3.5, 8.6]
Explanation: Now say that we also got some additional flux values from a different reference for a different, but overlapping sample of sources:
End of explanation
t3 = join(t, t2, keys=['name'], join_type='outer')
print(t3)
mean(t3['flux2'])
Explanation: Now we can get a master table of flux measurements which are joined by matching the values in the name column. This includes every row from each of the two tables, which is known as an outer join.
End of explanation
join(t, t2, keys=['name'], join_type='inner')
Explanation: Alternately we could choose to keep only rows where both tables had a valid measurement using an inner join:
End of explanation
t3.write('test.fits', overwrite=True)
t3.write('test.vot', format='votable', overwrite=True)
Explanation: 1.8 Writing and Reading Tabular Data
You can write data using the Table.write() method:
End of explanation
t4 = Table.read('test.fits')
t4
Explanation: You can read data using the Table.read() method:
End of explanation
Table.read?
t_2mass = Table.read("data/2mass.tbl", format="ascii.ipac")
t_2mass.show_in_notebook()
Explanation: Some formats, such as FITS and HDF5, are automatically identified by file extension while most others will require format to be explicitly provided. A number of common ascii formats are supported such as IPAC, sextractor, daophot, and CSV. Refer to the documentation for a full listing.
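One more illustrative round trip (the file name is just a placeholder): for plain-text formats such as CSV, the format argument has to be spelled out on both write and read:
t3.write('test.csv', format='ascii.csv', overwrite=True)
Table.read('test.csv', format='ascii.csv')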
End of explanation
import pandas as pd
Explanation: 2. Pandas
Although astropy Tables has some nice functionality that Pandas doesn't and is also a simpler, easier to use package, Pandas is the more versatile and commonly used table manipulator for Python, so I recommend you use it wherever possible.
Astropy 1.1 includes new to_pandas() and from_pandas() methods that facilitate conversion to/from pandas DataFrame objects. There are a few caveats in making these conversions:
- Tables with multi-dimensional columns cannot be converted.
- Masked values are converted to numpy.nan. Numerical columns, int or float, are thus converted to numpy.float while string columns with missing values are converted to object columns with numpy.nan values to indicate missing or masked data. Therefore, one cannot always round-trip between Table and DataFrame.
End of explanation
df = pd.DataFrame({'a': [10,20,30],
'b': [40,50,60]})
df
Explanation: Data frames are defined like dictionaries with a column header/label (similar to a key) and a list of entries.
End of explanation
df.columns
df.index
#hit shift + tab tab in the cell below to read more about dataframe objects and operations
df.
Explanation: think of DataFrames as numpy arrays plus some extra pointers to their columns and the indices of the row entries that make them amenable for tables
End of explanation
pd.r
Explanation: pandas has built-in functions for reading all kinds of types of data. In the cell below, hit tab once after the r to see all of the read functions. In our case here, read_table will work fine
End of explanation
pd_2mass = t_2mass.to_pandas()
pd_2mass
Explanation: we can also convert the table that we already made with Astropy Tables to pandas dataframe format
End of explanation
t_pd = Table.from_pandas(pd_2mass)
t_pd.show_in_notebook()
Explanation: And the opposite operation (conversion from pandas dataframe to astropy table) works as well
End of explanation
asteroids = pd.read_excel("data/asteroids5000.xlsx")
asteroids
#excel_data = Table.from_pandas(pd.read_excel("2mass.xls"))
#excel_data.show_in_notebook()
Explanation: Unlike astropy Tables, pandas can also read excel spreadsheets
End of explanation
asteroids.ra
Explanation: pandas dataframe columns can be called as python series using the syntax dataframe.columnlabel, as below, which is why it usually makes sense to define a column name/label that is short and has no spaces
End of explanation
#this one counts how many occurrences there are in the table for each unique value
asteroids.ph_qual.value_counts()
Explanation: this calling method allows you to use some useful built-in functions as well
End of explanation
asteroids.loc[4,"ra"]
asteroids.iloc[4,0] #same because column 0 is "ra"
Explanation: To pull up individual rows or entries, the fact that pandas dataframes always print the indices of rows off of their lefthand side helps. You index dataframes with .loc (if using column name) or .iloc (if using column index), as below
End of explanation
asteroids.columns[0]
Explanation: you can always check that the column you're indexing is the one you want as below
End of explanation
# make the row names more interesting than numbers starting from zero
asteroids.index = ['Asteroid %d'%(i+1) for i in asteroids.index]
#and you can index multiple columns/rows in the usual way
asteroids.iloc[:10,:2]
Explanation: Although indices are nice for reference, sometimes you might want the row labels to be more descriptive. What is the line below doing?
End of explanation
asteroids.columns
ast_new = asteroids[asteroids.dist < 500]
ast_new
Explanation: You can do lots more with this as well, including logical operations to parse the table
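A further illustrative example (the thresholds are arbitrary): several conditions can be chained with & or |, each wrapped in parentheses:
asteroids[(asteroids.dist < 500) & (asteroids.ra > 180)]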
End of explanation
## code to read in source data here
## code to convert to pandas dataframes here
Explanation: 3. Exercises
In these exercises, you will be dealing with two tables of information, described below. We'll be doing lots of manipulation of pandas dataframes in Labs 9, 11 and 13, so these exercises focus mostly on special functions of Astropy tables, but you should, wherever possible, try to figure out how to do the same thing with a Pandas dataframe.
master_sources
Each distinct X-ray source identified on the sky is represented in the catalog by a single "master source" entry and one or more "source observation" entries, one for each observation in which the source has been detected. The master source entry records the best estimates of the properties of a source, based on the data extracted from the set of observations in which the source has been detected. The subset of fields in our exercise table file are:
Name | Description
------ | ------------
msid | Master source ID
name | Source name in the Chandra catalog
ra | Source RA (deg)
dec | Source Dec (deg)
obs_sources
The individual source entries record all of the properties about a detection extracted from a single observation, as well as associated file-based data products, which are observation-specific. The subset of fields in our exercise table file are:
Name | Description
------ | ------------
obsid | Observation ID
obi | Observation interval
targname | Target name
gti_obs | Observation date
flux_aper_b | Broad band (0.5 - 7 keV) flux (erg/cm2/sec)
src_cnts_aper_b | Broad band source counts
ra_b | Source RA (deg)
dec_b | Source Dec (deg)
livetime | Observation duration (sec)
posid | Position ID
theta | Off-axis angle (arcmin)
msid | Master source ID
<div class=hw>
### Exercise 1 - Read the data
----------------------------
To start with, read in the two data files representing the master source list and observations source list. The fields for the two tables are respectively documented in:
- [master_sources](http://cxc.harvard.edu/csc/columns/master.html)
- [obs_sources](http://cxc.harvard.edu/csc/columns/persrc.html)
Read them in as astropy tables and then convert them to pandas. In the end you should have four table objects, two astropy tables and two pandas dataframes
End of explanation
## code to print list of column names
Explanation: <div class=hw>
Get a list of the column names for each table.
*Hint: use `<TAB>` completion to easily discover all the attributes and methods, e.g. type `master_sources.` and then hit the `<TAB>` key. This will reveal some built-in methods to do things like print column names, as well as some of the other things below*
End of explanation
## code to print the length of each table here
Explanation: <div class=hw>
Find and print the length of each table.
End of explanation
## code to print the column datatypes
Explanation: <div class=hw>
Find the column datatypes for each table (also a built-in method).
End of explanation
## code to pprint the rows of master_sources
Explanation: <div class=hw>
Display all the rows of the `master_sources` table using its `pprint()` method (astropy tables only).
End of explanation
#code to remove column here
Explanation: <div class=hw>
### Exercise 2 - Modifying tables
-----------------------
Remove the `obi` column from the `obs_sources` table.
End of explanation
# code to rename column here
Explanation: <div class=hw>
The `gti_obs` column name is a bit obscure (GTI is "good time interval", but it really just means "date"). Rename the `gti_obs` column to `obs_date`.
End of explanation
# code to create and add new column here
Explanation: <div class=hw>
The source count column tells you how many photons were collected by the detector, but it would also be nice to have a count rate (number of photons per second). Add a new column `src_rate_aper_b` which is the source counts divided by observation duration in sec.
End of explanation
# code to create histogram
Explanation: <div class=hw>
### Exercise 3 - Visualizing Data
Use the matplotlib [`hist()`]( http://matplotlib.org/api/pyplot_api.html?highlight=pyplot.hist#matplotlib.pyplot.hist) function to make a histogram of the source flux column. Since the fluxes vary by orders of magnitude, use `numpy.log10` to put the fluxes in log space.
End of explanation
# code to mask table and create new histogram
Explanation: <div class=hw>
Let's now remove any sources that we think might not be associated with the source we pointed at ("background/foreground objects" - things that appear near the location of our source in the sky, but that aren't physically associated with it and are actually either much closer or much farther away). To remove these potentially unassociated objects, make the same plot but using only observations where the source was within 4 arcminutes of the place where the telescope was pointed. *HINT*: use a boolean mask to select values of `theta` that are less than 4.0.
End of explanation
# code to join tables
Explanation: <div class=hw>
### Exercise 4 - Join the master_sources and obs_sources tables
The `master_sources` and `obs_sources` tables share a common `msid` column. What we now want is to join the master list of sky positions (RA and Dec columns - essentially celestial longitude and latitude) and source names with the individual observations table.
Use the [table.join()](http://astropy.readthedocs.org/en/stable/table/operations.html#join) function to make a single table called `sources` that has the master RA, Dec, and name included for each observation source.
*HINT*: the defaults for `keys` and `join_type='inner'` are correct in this case, so the simplest possible call to `join()` will work!
End of explanation
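A sketch of the join, relying on the defaults mentioned in the hint (the shared `msid` key and an inner join), assuming `master_sources` and `obs_sources` are the astropy tables read earlier:
from astropy.table import join

sources = join(master_sources, obs_sources)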
# code to investigate lengths
Explanation: <div class=hw>
Is the length of the new `sources` the same as `obs_sources`? What happened? Use specific examples in your explanations.
End of explanation
## code to group sources by the msid key and write into new table g_sources
Explanation: insert explanation here
<div class=hw>
### Exercise 5 - Grouped properties of `sources`
---------------------------
When using tables, we may occasionally wish to group entries based on various properties, which is done using the [`group_by()`](http://astropy.readthedocs.org/en/stable/table/operations.html#id2) functionality.
This method makes a new table in which all the sources with the same entry for some property (that property is specified in the function call) are next to one another.
Make a new table `g_sources` which is the `sources` table grouped by the `msid` key using the `group_by()` method.
End of explanation
## code to find the number of observations for each source
Explanation: <div class=hw>
The new `g_sources` table is just a regular table with all the `sources` in a particular order. The attribute `g_sources.groups` has also been created and is an object that provides access to the `msid` sub-groups. You can access the $i^{th}$ group with `g_sources.groups[i]`.
In addition the `g_sources.groups.indices` attribute is an array with the indices of the group boundaries.
Using `np.diff()` find the number of repeat observations of each master source. *HINT*: use the indices, Luke.
End of explanation
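A sketch of this counting step, assuming `sources` is the joined table from Exercise 4:
import numpy as np

g_sources = sources.group_by('msid')
n_obs = np.diff(g_sources.groups.indices)   # number of observations per master source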
## code to print info for the 50th group
Explanation: <div class=hw>
Print the 50th group and note which columns are the same for all group members and which are different. Does this make sense? In these few observations how many different target names were provided by observers?
End of explanation
## code to create a new table with the group means
Explanation: <div class=hw>
### Exercise 6 - Aggregation
----------------------
The real power of grouping comes in the ability to create aggregate values for each of the groups, for instance the mean flux for each unique source. This is done with the [`aggregate()`](http://astropy.readthedocs.org/en/stable/table/operations.html#aggregation) method, which takes a function reference as its input. This function must take as input an array of values and return a single value.
Aggregate returns a new table that has a length equal to the number of groups.
Compute the mean of all columns for each unique source (i.e. each group) using `aggregate` and the `np.mean` function. Call this table `g_sources_mean`.
End of explanation
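A sketch of the aggregation, assuming `g_sources` is the grouped table built above:
import numpy as np

g_sources_mean = g_sources.groups.aggregate(np.mean)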
## code to add back in the columns from master_sources
from IPython.core.display import HTML
def css_styling():
styles = open("../custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: <div class=hw>
Notice that aggregation cannot form a mean for certain columns and these are dropped from the output. Use the `join()` function to restore the `master_sources` information to `g_sources_mean`.
End of explanation |
15,317 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Extract facility generation and fuel use data
This notebook creates dataframes with monthly facility generation and fuel use data, merges them, and exports the results. The code assumes that you have already downloaded an ELEC.txt file from EIA's bulk download website.
Step1: Date string for filenames
This will be inserted into all filenames (reading and writing)
Step2: Read ELEC.txt file
Download the most current file from EIA's bulk download site. Save it to \Data storage\Raw data. I've tried to do this via the requests library, but the data file often gets corrupted.
Step3: Filter lines to only include facility generation
Include ELEC.PLANT in the series_id
Include "All" as the only allowable prime mover
Some facilities have incorrect data at the individual prime mover level
Do not include "All" as a fuel code
Only monthly frequency
Step4: Combine generation into one large dataframe
Step5: Combine total fuel use into one large dataframe
Step6: Combine total fuel use for electricity into one large dataframe
Step7: Merge dataframes
Need to be careful here because there are fuel/prime mover combinations that have generation but no fuel use (e.g. the steam cycle of a combined cycle system - but only in some cases).
Step8: Fill in missing values from the first merge
Step9: FIll in missing values from second merge and drop units/series_id columns
Step10: Add datetime and quarter columns
Step11: Load emission factors
These are mostly EIA emission factors
Step12: Apply factors to facility generation
Step13: Apply emission factors
Fuel emission factor is kg/mmbtu
Step14: Set nan and negative emissions to 0
When no fuel was used for electricity production, or when negative fuel is somehow reported by EIA, set the emissions to 0. This is implemented by filtering out all values that are greater than or equal to 0.
Step15: Export | Python Code:
import json
import pandas as pd
import os
from os.path import join
import numpy as np
from joblib import Parallel, delayed
import sys
cwd = os.getcwd()
data_path = join(cwd, '..', 'Data storage')
Explanation: Extract facility generation and fuel use data
This notebook creates dataframes with monthly facility generation and fuel use data, merges them, and exports the results. The code assumes that you have already downloaded an ELEC.txt file from EIA's bulk download website.
End of explanation
file_date = '2018-03-06'
%load_ext watermark
%watermark -iv -v
# Load the "autoreload" extension
%load_ext autoreload
# always reload modules marked with "%aimport"
%autoreload 1
# add the 'src' directory as one where we can import modules
src_dir = join(os.getcwd(), os.pardir, 'src')
sys.path.append(src_dir)
%aimport Data.data_extraction
from Data.data_extraction import facility_line_to_df
%aimport Analysis.index
from Analysis.index import add_datetime, add_quarter
Explanation: Date string for filenames
This will be inserted into all filenames (reading and writing)
End of explanation
path = join(data_path, 'Raw EIA bulk', '{} ELEC.txt'.format(file_date))
with open(path, 'r') as f:
raw_txt = f.readlines()
Explanation: Read ELEC.txt file
Download the most current file from EIA's bulk download site. Save it to \Data storage\Raw data. I've tried to do this via the requests library, but the data file often gets corrupted.
End of explanation
gen_rows = [row for row in raw_txt if 'ELEC.PLANT.GEN' in row
and 'series_id' in row
and 'ALL.M' in row
and 'ALL-' not in row]
total_fuel_rows = [row for row in raw_txt if 'ELEC.PLANT.CONS_TOT_BTU' in row
and 'series_id' in row
and 'ALL.M' in row
and 'ALL-' not in row]
eg_fuel_rows = [row for row in raw_txt if 'ELEC.PLANT.CONS_EG_BTU' in row
and 'series_id' in row
and 'ALL.M' in row
and 'ALL-' not in row]
Explanation: Filter lines to only include facility generation
Include ELEC.PLANT in the series_id
Include "All" as the only allowable prime mover
Some facilities have incorrect data at the individual prime mover level
Do not include "All" as a fuel code
Only monthly frequency
End of explanation
if __name__ == '__main__':
exception_list = []
facility_gen = pd.concat(Parallel(n_jobs=-1)(delayed(facility_line_to_df)(json.loads(row)) for row in gen_rows))
facility_gen.reset_index(drop=True, inplace=True)
facility_gen.rename({'value':'generation (MWh)'}, axis=1, inplace=True)
facility_gen.loc[:,'lat'] = facility_gen.loc[:,'lat'].astype(float)
facility_gen.loc[:,'lon'] = facility_gen.loc[:,'lon'].astype(float)
facility_gen.loc[:, 'plant id'] = facility_gen.loc[:, 'plant id'].astype(int)
#drop
facility_gen.tail()
Explanation: Combine generation into one large dataframe
End of explanation
if __name__ == '__main__':
exception_list = []
facility_all_fuel = pd.concat(Parallel(n_jobs=-1)(delayed(facility_line_to_df)(json.loads(row)) for row in total_fuel_rows))
facility_all_fuel.reset_index(drop=True, inplace=True)
facility_all_fuel.rename({'value':'total fuel (mmbtu)'}, axis=1, inplace=True)
facility_all_fuel.loc[:,'lat'] = facility_all_fuel.loc[:,'lat'].astype(float)
facility_all_fuel.loc[:,'lon'] = facility_all_fuel.loc[:,'lon'].astype(float)
facility_all_fuel.loc[:,'plant id'] = facility_all_fuel.loc[:,'plant id'].astype(int)
Explanation: Combine total fuel use into one large dataframe
End of explanation
if __name__ == '__main__':
exception_list = []
facility_eg_fuel = pd.concat(Parallel(n_jobs=-1)(delayed(facility_line_to_df)(json.loads(row)) for row in eg_fuel_rows))
facility_eg_fuel.reset_index(drop=True, inplace=True)
facility_eg_fuel.rename({'value':'elec fuel (mmbtu)'}, axis=1, inplace=True)
facility_eg_fuel.loc[:,'lat'] = facility_eg_fuel.loc[:,'lat'].astype(float)
facility_eg_fuel.loc[:,'lon'] = facility_eg_fuel.loc[:,'lon'].astype(float)
facility_eg_fuel.loc[:,'plant id'] = facility_eg_fuel.loc[:,'plant id'].astype(int)
Explanation: Combine total fuel use for electricity into one large dataframe
End of explanation
keep_cols = ['fuel', 'generation (MWh)', 'month', 'plant id', 'prime mover', 'year',
'geography', 'lat', 'lon', 'last_updated']
merge_cols = ['fuel', 'month', 'plant id', 'year']
gen_total_fuel = facility_all_fuel.merge(facility_gen.loc[:,keep_cols],
how='outer', on=merge_cols)
Explanation: Merge dataframes
Need to be careful here because there are fuel/prime mover combinations that have generation but no fuel use (e.g. the steam cycle of a combined cycle system - but only in some cases).
End of explanation
def fill_missing(df):
cols = [col[:-2] for col in df.columns if '_x' in col]
# Create new column from the _x version, fill missing values from the _y version
for col in cols:
df[col] = df.loc[:, col + '_x']
df.loc[df[col].isnull(), col] = df.loc[df[col].isnull(), col + '_y']
df.drop([col+'_x', col+'_y'], axis=1, inplace=True)
fill_missing(gen_total_fuel)
keep_cols = ['fuel', 'elec fuel (mmbtu)', 'month', 'plant id', 'prime mover', 'year',
'geography', 'lat', 'lon', 'last_updated']
all_facility_data = gen_total_fuel.merge(facility_eg_fuel.loc[:,keep_cols],
how='outer', on=merge_cols)
Explanation: Fill in missing values from the first merge
End of explanation
fill_missing(all_facility_data)
all_facility_data.drop(['units', 'series_id'], axis=1, inplace=True)
all_facility_data.head()
Explanation: Fill in missing values from second merge and drop units/series_id columns
End of explanation
add_quarter(all_facility_data)
Explanation: Add datetime and quarter columns
End of explanation
path = join(data_path, 'Final emission factors.csv')
ef = pd.read_csv(path, index_col=0)
Explanation: Load emission factors
These are mostly EIA emission factors
End of explanation
fossil_factors = dict(zip(ef.index, ef['Fossil Factor']))
total_factors = dict(zip(ef.index, ef['Total Factor']))
fossil_factors, total_factors
Explanation: Apply factors to facility generation
End of explanation
# Start with 0 emissions in all rows
# For fuels where we have an emission factor, replace the 0 with the calculated value
all_facility_data['all fuel fossil CO2 (kg)'] = 0
all_facility_data['elec fuel fossil CO2 (kg)'] = 0
all_facility_data['all fuel total CO2 (kg)'] = 0
all_facility_data['elec fuel total CO2 (kg)'] = 0
for fuel in total_factors.keys():
# All fuel CO2 emissions
all_facility_data.loc[all_facility_data['fuel']==fuel,'all fuel fossil CO2 (kg)'] = \
all_facility_data.loc[all_facility_data['fuel']==fuel,'total fuel (mmbtu)'] * fossil_factors[fuel]
all_facility_data.loc[all_facility_data['fuel']==fuel,'all fuel total CO2 (kg)'] = \
all_facility_data.loc[all_facility_data['fuel']==fuel,'total fuel (mmbtu)'] * total_factors[fuel]
# Electric fuel CO2 emissions
all_facility_data.loc[all_facility_data['fuel']==fuel,'elec fuel fossil CO2 (kg)'] = \
all_facility_data.loc[all_facility_data['fuel']==fuel,'elec fuel (mmbtu)'] * fossil_factors[fuel]
all_facility_data.loc[all_facility_data['fuel']==fuel,'elec fuel total CO2 (kg)'] = \
all_facility_data.loc[all_facility_data['fuel']==fuel,'elec fuel (mmbtu)'] * total_factors[fuel]
Explanation: Apply emission factors
Fuel emission factor is kg/mmbtu
End of explanation
# Fossil CO2
all_facility_data.loc[~(all_facility_data['all fuel fossil CO2 (kg)']>=0),
'all fuel fossil CO2 (kg)'] = 0
all_facility_data.loc[~(all_facility_data['elec fuel fossil CO2 (kg)']>=0),
'elec fuel fossil CO2 (kg)'] = 0
# Total CO2
all_facility_data.loc[~(all_facility_data['all fuel total CO2 (kg)']>=0),
'all fuel total CO2 (kg)'] = 0
all_facility_data.loc[~(all_facility_data['elec fuel total CO2 (kg)']>=0),
'elec fuel total CO2 (kg)'] = 0
Explanation: Set nan and negative emissions to 0
When no fuel was used for electricity production, or when negative fuel is somehow reported by EIA, set the emissions to 0. This is implemented by filtering out all values that are greater than or equal to 0.
End of explanation
path = join(data_path, 'Derived data',
'Facility gen fuels and CO2 {}.csv'.format(file_date))
all_facility_data.to_csv(path, index=False)
Explanation: Export
End of explanation |
15,318 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Resultant
If $p$ and $q$ are two polynomials over a commutative ring with identity which can be factored into linear factors,
$$p(x)= a_0 (x - r_1) (x- r_2) \dots (x - r_m) $$
$$q(x)=b_0 (x - s_1)(x - s_2) \dots (x - s_n)$$
then the resultant $R(p,q)$ of $p$ and $q$ is defined as
Step1: A common root exists.
Step2: A common root does not exist.
Step3: Example
Step4: Three roots for x $\in {-3, 0, 1}$.
For $x=-3$ then $y=1$.
Step5: For $x=0$ the $y=1$.
Step6: For $x=1$ then $y=-1$ is the common root,
Step7: Example | Python Code:
import sympy as sym
from sympy.polys import subresultants_qq_zz  # provides the sylvester() matrix used below
x = sym.symbols('x')
Explanation: Resultant
If $p$ and $q$ are two polynomials over a commutative ring with identity which can be factored into linear factors,
$$p(x)= a_0 (x - r_1) (x- r_2) \dots (x - r_m) $$
$$q(x)=b_0 (x - s_1)(x - s_2) \dots (x - s_n)$$
then the resultant $R(p,q)$ of $p$ and $q$ is defined as:
$$R(p,q)=a^n_{0}b^m_{0}\prod_{i=1}^{m}\prod_{j=1}^{n}(r_i - s_j)$$
Since the resultant is a symmetric function of the roots of the polynomials $p$ and $q$, it can be expressed as a polynomial in the coefficients of $p$ and $q$.
From the definition, it is clear that the resultant will equal zero if and only if $p$ and $q$ have at least one common root. Thus, the resultant becomes very useful in identifying whether common roots exist.
Sylvester's Resultant
It was proven that the determinant of the Sylvester's matrix is equal to the resultant. Assume the two polynomials:
$$p(x) = a_0 x^m + a_1 x^{m-1}+\dots+a_{m-1}x+a_m$$
$$q(x)=b_0 x^n + b_1 x^{n-1}+\dots+b_{n-1}x+b_n$$
Then the Sylvester matrix is the $(m+n)\times(m+n)$ matrix:
$$
\left|
\begin{array}{cccccccc}
a_{0} & a_{1} & a_{2} & \ldots & a_{m} & 0 & \ldots & 0 \\
0 & a_{0} & a_{1} & \ldots & a_{m-1} & a_{m} & \ldots & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\
0 & 0 & \ddots & \ddots & \ddots & \ddots & \ddots & a_{m} \\
b_{0} & b_{1} & b_{2} & \ldots & b_{n} & 0 & \ldots & 0 \\
0 & b_{0} & b_{1} & \ldots & b_{n-1} & b_{n} & \ldots & 0 \\
\ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots \\
0 & 0 & \ldots & \ldots & \ldots & \ldots & \ldots & b_{n}
\end{array}
\right| = \Delta $$
Thus $\Delta$ is equal to $R(p, q)$.
Example: Existence of common roots
Two examples are considered here. Note that if the system has a common root we expect the resultant/determinant to equal zero.
End of explanation
f = x ** 2 - 5 * x + 6
g = x ** 2 - 3 * x + 2
f, g
subresultants_qq_zz.sylvester(f, g, x)
subresultants_qq_zz.sylvester(f, g, x).det()
Explanation: A common root exists.
End of explanation
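As an optional cross-check of the definition above (using the f, g and x defined in the previous cell): the determinant agrees with SymPy's built-in resultant and with the product over pairs of roots, all of which vanish because f and g share the root x=2.
print(sym.resultant(f, g, x))                                  # 0
roots_f = sym.solve(f, x)                                      # [2, 3]
roots_g = sym.solve(g, x)                                      # [1, 2]
print(sym.prod([(r - s) for r in roots_f for s in roots_g]))   # (2-1)(2-2)(3-1)(3-2) = 0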
z = x ** 2 - 7 * x + 12
h = x ** 2 - x
z, h
matrix = subresultants_qq_zz.sylvester(z, h, x)
matrix
matrix.det()
Explanation: A common root does not exist.
End of explanation
y = sym.symbols('y')
f = x ** 2 + x * y + 2 * x + y -1
g = x ** 2 + 3 * x - y ** 2 + 2 * y - 1
f, g
matrix = subresultants_qq_zz.sylvester(f, g, y)
matrix
matrix.det().factor()
Explanation: Example: Two variables, eliminator
When we have a system of two variables we solve for one and keep the second as a coefficient. Thus we can find the roots of the equations, which is why the resultant is often referred to as the eliminator.
End of explanation
f.subs({x:-3}).factor(), g.subs({x:-3}).factor()
f.subs({x:-3, y:1}), g.subs({x:-3, y:1})
Explanation: Three roots for $x \in \{-3, 0, 1\}$.
For $x=-3$ then $y=1$.
End of explanation
f.subs({x:0}).factor(), g.subs({x:0}).factor()
f.subs({x:0, y:1}), g.subs({x:0, y:1})
Explanation: For $x=0$ then $y=1$.
End of explanation
f.subs({x:1}).factor(), g.subs({x:1}).factor()
f.subs({x:1, y:-1}), g.subs({x:1, y:-1})
f.subs({x:1, y:3}), g.subs({x:1, y:3})
Explanation: For $x=1$ then $y=-1$ is the common root, while $y=3$ is a root of $g$ only.
End of explanation
a = sym.IndexedBase("a")
b = sym.IndexedBase("b")
f = a[1] * x + a[0]
g = b[2] * x ** 2 + b[1] * x + b[0]
matrix = subresultants_qq_zz.sylvester(f, g, x)
matrix.det()
Explanation: Example: Generic Case
End of explanation |
15,319 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Measures of Central Tendency
By Evgenia "Jenny" Nitishinskaya, Maxwell Margenot, and Delaney Mackenzie.
Part of the Quantopian Lecture Series
Step1: We can also define a <i>weighted</i> arithmetic mean, which is useful for explicitly specifying the number of times each observation should be counted. For instance, in computing the average value of a portfolio, it is more convenient to say that 70% of your stocks are of type X rather than making a list of every share you hold.
The weighted arithmetic mean is defined as
$$\sum_{i=1}^n w_i X_i $$
where $\sum_{i=1}^n w_i = 1$. In the usual arithmetic mean, we have $w_i = 1/n$ for all $i$.
Median
The median of a set of data is the number which appears in the middle of the list when it is sorted in increasing or decreasing order. When we have an odd number $n$ of data points, this is simply the value in position $(n+1)/2$. When we have an even number of data points, the list splits in half and there is no item in the middle; so we define the median as the average of the values in positions $n/2$ and $(n+2)/2$.
The median is less affected by extreme values in the data than the arithmetic mean. It tells us the value that splits the data set in half, but not how much smaller or larger the other values are.
Step2: Mode
The mode is the most frequently occurring value in a data set. It can be applied to non-numerical data, unlike the mean and the median. One situation in which it is useful is for data whose possible values are independent. For example, in the outcomes of a weighted die, coming up 6 often does not mean it is likely to come up 5; so knowing that the data set has a mode of 6 is more useful than knowing it has a mean of 4.5.
Step3: For data that can take on many different values, such as returns data, there may not be any values that appear more than once. In this case we can bin values, like we do when constructing a histogram, and then find the mode of the data set where each value is replaced with the name of its bin. That is, we find which bin elements fall into most often.
Step4: Geometric mean
While the arithmetic mean averages using addition, the geometric mean uses multiplication
Step5: What if we want to compute the geometric mean when we have negative observations? This problem is easy to solve in the case of asset returns, where our values are always at least $-1$. We can add 1 to a return $R_t$ to get $1 + R_t$, which is the ratio of the price of the asset for two consecutive periods (as opposed to the percent change between the prices, $R_t$). This quantity will always be nonnegative. So we can compute the geometric mean return,
$$ R_G = \sqrt[T]{(1 + R_1)\ldots (1 + R_T)} - 1$$
Step6: The geometric mean is defined so that if the rate of return over the whole time period were constant and equal to $R_G$, the final price of the security would be the same as in the case of returns $R_1, \ldots, R_T$.
Step7: Harmonic mean
The harmonic mean is less commonly used than the other types of means. It is defined as
$$ H = \frac{n}{\sum_{i=1}^n \frac{1}{X_i}} $$
As with the geometric mean, we can rewrite the harmonic mean to look like an arithmetic mean. The reciprocal of the harmonic mean is the arithmetic mean of the reciprocals of the observations | Python Code:
# Two useful statistical libraries
import scipy.stats as stats
import numpy as np
# We'll use these two data sets as examples
x1 = [1, 2, 2, 3, 4, 5, 5, 7]
x2 = x1 + [100]
print 'Mean of x1:', sum(x1), '/', len(x1), '=', np.mean(x1)
print 'Mean of x2:', sum(x2), '/', len(x2), '=', np.mean(x2)
Explanation: Measures of Central Tendency
By Evgenia "Jenny" Nitishinskaya, Maxwell Margenot, and Delaney Mackenzie.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
In this notebook we will discuss ways to summarize a set of data using a single number. The goal is to capture information about the distribution of data.
Arithmetic mean
The arithmetic mean is used very frequently to summarize numerical data, and is usually the one assumed to be meant by the word "average." It is defined as the sum of the observations divided by the number of observations:
$$\mu = \frac{\sum_{i=1}^N X_i}{N}$$
where $X_1, X_2, \ldots , X_N$ are our observations.
End of explanation
print 'Median of x1:', np.median(x1)
print 'Median of x2:', np.median(x2)
Explanation: We can also define a <i>weighted</i> arithmetic mean, which is useful for explicitly specifying the number of times each observation should be counted. For instance, in computing the average value of a portfolio, it is more convenient to say that 70% of your stocks are of type X rather than making a list of every share you hold.
The weighted arithmetic mean is defined as
$$\sum_{i=1}^n w_i X_i $$
where $\sum_{i=1}^n w_i = 1$. In the usual arithmetic mean, we have $w_i = 1/n$ for all $i$.
Median
The median of a set of data is the number which appears in the middle of the list when it is sorted in increasing or decreasing order. When we have an odd number $n$ of data points, this is simply the value in position $(n+1)/2$. When we have an even number of data points, the list splits in half and there is no item in the middle; so we define the median as the average of the values in positions $n/2$ and $(n+2)/2$.
The median is less affected by extreme values in the data than the arithmetic mean. It tells us the value that splits the data set in half, but not how much smaller or larger the other values are.
End of explanation
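As a quick illustration of the weighted mean defined above (the weights below are arbitrary and simply chosen to sum to 1; they are not part of the original lecture):
# One weight per element of x1; the weights must sum to 1
weights = [0.05, 0.05, 0.1, 0.1, 0.1, 0.1, 0.2, 0.3]
np.average(x1, weights=weights)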
# Scipy has a built-in mode function, but it will return exactly one value
# even if two values occur the same number of times, or if no value appears more than once
print 'One mode of x1:', stats.mode(x1)[0][0]
# So we will write our own
def mode(l):
# Count the number of times each element appears in the list
counts = {}
for e in l:
if e in counts:
counts[e] += 1
else:
counts[e] = 1
# Return the elements that appear the most times
maxcount = 0
modes = {}
for (key, value) in counts.items():
if value > maxcount:
maxcount = value
modes = {key}
elif value == maxcount:
modes.add(key)
if maxcount > 1 or len(l) == 1:
return list(modes)
return 'No mode'
print 'All of the modes of x1:', mode(x1)
Explanation: Mode
The mode is the most frequently occurring value in a data set. It can be applied to non-numerical data, unlike the mean and the median. One situation in which it is useful is for data whose possible values are independent. For example, in the outcomes of a weighted die, coming up 6 often does not mean it is likely to come up 5; so knowing that the data set has a mode of 6 is more useful than knowing it has a mean of 4.5.
End of explanation
# Get return data for an asset and compute the mode of the data set
start = '2014-01-01'
end = '2015-01-01'
pricing = get_pricing('SPY', fields='price', start_date=start, end_date=end)
returns = pricing.pct_change()[1:]
print 'Mode of returns:', mode(returns)
# Since all of the returns are distinct, we use a frequency distribution to get an alternative mode.
# np.histogram returns the frequency distribution over the bins as well as the endpoints of the bins
hist, bins = np.histogram(returns, 20) # Break data up into 20 bins
maxfreq = max(hist)
# Find all of the bins that are hit with frequency maxfreq, then print the intervals corresponding to them
print 'Mode of bins:', [(bins[i], bins[i+1]) for i, j in enumerate(hist) if j == maxfreq]
Explanation: For data that can take on many different values, such as returns data, there may not be any values that appear more than once. In this case we can bin values, like we do when constructing a histogram, and then find the mode of the data set where each value is replaced with the name of its bin. That is, we find which bin elements fall into most often.
End of explanation
# Use scipy's gmean function to compute the geometric mean
print 'Geometric mean of x1:', stats.gmean(x1)
print 'Geometric mean of x2:', stats.gmean(x2)
Explanation: Geometric mean
While the arithmetic mean averages using addition, the geometric mean uses multiplication:
$$ G = \sqrt[n]{X_1X_2\ldots X_n} $$
for observations $X_i \geq 0$. We can also rewrite it as an arithmetic mean using logarithms:
$$ \ln G = \frac{\sum_{i=1}^n \ln X_i}{n} $$
The geometric mean is always less than or equal to the arithmetic mean (when working with nonnegative observations), with equality only when all of the observations are the same.
End of explanation
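A quick sanity check of the logarithm identity above (all values in x1 are positive):
# exp of the mean log should reproduce the geometric mean computed by stats.gmean
np.exp(np.mean(np.log(x1)))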
# Add 1 to every value in the returns array and then compute R_G
ratios = returns + np.ones(len(returns))
R_G = stats.gmean(ratios) - 1
print 'Geometric mean of returns:', R_G
Explanation: What if we want to compute the geometric mean when we have negative observations? This problem is easy to solve in the case of asset returns, where our values are always at least $-1$. We can add 1 to a return $R_t$ to get $1 + R_t$, which is the ratio of the price of the asset for two consecutive periods (as opposed to the percent change between the prices, $R_t$). This quantity will always be nonnegative. So we can compute the geometric mean return,
$$ R_G = \sqrt[T]{(1 + R_1)\ldots (1 + R_T)} - 1$$
End of explanation
T = len(returns)
init_price = pricing[0]
final_price = pricing[T]
print 'Initial price:', init_price
print 'Final price:', final_price
print 'Final price as computed with R_G:', init_price*(1 + R_G)**T
Explanation: The geometric mean is defined so that if the rate of return over the whole time period were constant and equal to $R_G$, the final price of the security would be the same as in the case of returns $R_1, \ldots, R_T$.
End of explanation
print 'Harmonic mean of x1:', stats.hmean(x1)
print 'Harmonic mean of x2:', stats.hmean(x2)
Explanation: Harmonic mean
The harmonic mean is less commonly used than the other types of means. It is defined as
$$ H = \frac{n}{\sum_{i=1}^n \frac{1}{X_i}} $$
As with the geometric mean, we can rewrite the harmonic mean to look like an arithmetic mean. The reciprocal of the harmonic mean is the arithmetic mean of the reciprocals of the observations:
$$ \frac{1}{H} = \frac{\sum_{i=1}^n \frac{1}{X_i}}{n} $$
The harmonic mean for nonnegative numbers $X_i$ is always at most the geometric mean (which is at most the arithmetic mean), and they are equal only when all of the observations are equal.
End of explanation |
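And a quick check of the reciprocal identity above against scipy's hmean:
# The reciprocal of the mean of the reciprocals should match stats.hmean(x1)
1.0 / np.mean(1.0 / np.asarray(x1, dtype=float))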
15,320 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting the data
Please look at the information in the get_data.ipynb notebook. You have to end up with the swift.dkrz.de folder located somewhere in your system. All data used in these examples are located in this folder.
Step1: First, as usual, load the mesh
Step2: Load data for one year
Step3: Select one month
Step4: Decide start and end points of the transect and plot the map of your future transect. Use %matplotlib notebook if you want to be able to zoom in to the map. NOTE! You have to have cartopy installed to make the plotting work.
Step5: We just use the closest points for selection, so on the map blue dots show the generated transect, while red points are model points that will be used for "interpolation" (we will just use the nearest neighbor approach).
Let's now plot the transect from the monthly data that we have extracted previously. There are plenty of parameters that you can control. The function plot_transect returns an instance of an image, so you can further modify it (font sizes and so on).
Step6: If instead of kilometers you would like to have lons or lats, below is the code that will help you to do so
Step7: Let's use it
Step8: Several transects at once
Let's get data for some other months, say April, July and October
Step9: Put all of them into a list
Step10: Now you can have several transects in a loop. Make sure you have changed number of rows (nrows), number of columns (ncols), figsize and other parameters at the top of the script | Python Code:
import sys
sys.path.append("../")
import pyfesom as pf
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from matplotlib.colors import LinearSegmentedColormap
import numpy as np
# %matplotlib notebook
%matplotlib inline
from matplotlib import cm
from netCDF4 import Dataset, MFDataset
basedir = '/mnt/lustre01/work/ab0995/a270088/DATA/swift.dkrz.de/'
Explanation: Getting the data
Please look at the information in the get_data.ipynb notebook. You have to end up with the swift.dkrz.de folder located somewhere in your system. All data used in these examples are located in this folder.
End of explanation
meshpath = basedir+'/COREII/'
mesh = pf.load_mesh(meshpath, usepickle=True)
Explanation: First, as usual, load the mesh:
End of explanation
fl = Dataset(basedir+'/COREII_data/fesom.1951.oce.mean.nc')
fl.variables['temp'].shape
Explanation: Load data for one year:
End of explanation
data = fl.variables['temp'][0,:]
Explanation: Select one month
End of explanation
lon_start = -15
lat_start = -70
lon_end = -15
lat_end = -20
pf.plot_transect_map(lon_start, lat_start, lon_end, lat_end,
mesh, npoints=30, view = 'w', stock_img=False)
Explanation: Decide start and end points of the transect and plot the map of your future transect. Use %matplotlib notebook if you want to be able to zoom in to the map. NOTE! You have to have cartopy installed to make the plotting work.
End of explanation
npoints = 80
fig, ax = plt.subplots(1,1, figsize=(15,7))
image = pf.plot_transect(data, mesh,
lon_start,
lat_start,
lon_end,
lat_end,
npoints=npoints,
levels = np.round(np.linspace(-2, 13, 42),2),
cmap=cm.Spectral_r,
maxdepth =6000,
title = 'Southern Ocean Transect',
ncols=1,
figsize=(5,10),
ax = ax
)
cb = fig.colorbar(image, orientation='horizontal', ax=ax, pad=0.13)
cb.set_label('deg C')
Explanation: We just use the closest points for selection, so on the map blue dots show the generated transect, while red points are model points that will be used for "interpolation" (we will just use the nearest neighbor approach).
Let's now plot the transect from the monthly data that we have extracted previously. There are plenty of parameters that you can control. The function plot_transect returns an instance of an image, so you can further modify it (font sizes and so on).
End of explanation
npoints = 80
lonlat = pf.transect_get_lonlat(lon_start, lat_start, lon_end, lat_end, npoints=npoints)
labeles = [str(abs(int(x)))+"$^{\circ}$S" for x in lonlat[7::8][:,1]]
labeles
Explanation: If instead of kilometers you would like to have lons or lats, below is the code that will help you to do so :) Pay attention to the combination of npoints and lonlat[7::8] (where in this case 7 is the starting point and 8 is the step). The 1 in lonlat[7::8][:,1] selects latitudes; to switch to longitudes change it to 0. This crazy thing $^{\circ}$S adds $^{\circ}$S; change it to N, E or W, depending on what you would like to show. I know it looks ugly now; in the future we will try to make it more automatic. On the other hand you have control over what exactly you are plotting :)
End of explanation
npoints = 80
lonlat = pf.transect_get_lonlat(lon_start, lat_start, lon_end, lat_end, npoints=npoints)
labeles = [str(abs(int(x)))+"$^{\circ}$S" for x in lonlat[7::8][:,1]]
dist = pf.transect_get_distance(lonlat) # gets distances between the starting point and the present point
fig, ax = plt.subplots(1,1, figsize=(15,7))
image = pf.plot_transect(data, mesh,
lon_start,
lat_start,
lon_end,
lat_end,
npoints=npoints,
levels = np.round(np.linspace(-2, 13, 42),2),
cmap=cm.Spectral_r,
maxdepth =6000,
title = 'Southern Ocean Transect',
ncols=1,
figsize=(5,10),
ax = ax
)
cb = fig.colorbar(image, orientation='horizontal', ax=ax, pad=0.13)
cb.set_label('deg C')
ax.xaxis.set_ticks(dist[7::8])
ax.set_xticklabels(labeles, size=20);
Explanation: Let's use it:
End of explanation
data1 = fl.variables['temp'][3,:]
data2 = fl.variables['temp'][6,:]
data3 = fl.variables['temp'][9,:]
Explanation: Several transects at once
Let's get data for some other months, say April, July and October:
End of explanation
data_all = [data, data1, data2, data3]
Explanation: Put all of them into a list:
End of explanation
nrows = 2
ncols = 2
figsize = (15,10)
label = '$^{\circ}$C'
vmin = -2
vmax = 15
cmap = cm.Spectral_r
npoints = 100
cmap.set_bad(color = 'k', alpha = 1.)
lonlat = pf.transect_get_lonlat(lon_start, lat_start, lon_end, lat_end, npoints=npoints)
dist = pf.transect_get_distance(lonlat)
labeles = [str(abs(int(x)))+"$^{\circ}$S" for x in lonlat[9::10][:,1]]
months = ['JAN', 'APR', 'JUL', 'NOV']
fig, ax = plt.subplots(nrows,ncols, figsize=figsize)
ax = ax.flatten()
# for i, sim in enumerate(data):
for i, sim in enumerate(data_all):
image = pf.plot_transect(sim, mesh, lon_start,
lat_start,
lon_end,
lat_end,
npoints=npoints,
levels = np.round(np.linspace(-3, 7, 41),2),
cmap=cmap,
maxdepth =6000,
label = '$^{\circ}$C',
title = months.pop(0),
ncols=3,
ax=ax[i])
cb = fig.colorbar(image, orientation='horizontal', ax=ax[i], pad=0.16)
cb.set_label(label)
ax[i].xaxis.set_ticks(dist[9::10])
# ax.xaxis.set_ticks(list(range(lonlat.shape[0]))[9::10])
ax[i].set_xticklabels(labeles, size=10)
ax[i].set_xlabel(' ')
ax[i].xaxis.grid(True)
Explanation: Now you can have several transects in a loop. Make sure you have changed number of rows (nrows), number of columns (ncols), figsize and other parameters at the top of the script :)
End of explanation |
15,321 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SS82
Talking to Bruno about his project on stacking Swift observations and my project on Stripe82 SED we started to think about a collaboration to create a set of deep observations with Swift data. Deep Swift data being a great addition to the High-Energy end of Stripe82 data collection (see, for instance, LaMassa,2016).
Bruno then searched for all Swift-XRT observations inside the Stripe-82
Step1: The base catalog
Right below, in Table 1, we can see a sample of the catalog (the first/last five lines).
In Table 2, we see a brief description of the catalog.
Step2: Target_Name is the name of the (central) object at each observation; from that we see we have 681 unique sources out of the 3035 observations. GroupSize is the number of overlapping observations; the average number is ~54. Let's see how sparse the observations are in time and how they are distributed for each source.
Step3: Number of observations
To get a clue about the number of observations done over each object we can look at the counts shown by Table 3 and the histogram below (Figure 2).
Step4: Filtering the data
First, a closer look at an example
To have a better idea of what we should find regarding the observation time of these sources, I'll take a particular one -- V1647ORI -- and see what we have for this source.
Step5: If we consider each group of observations of our interest -- let me call them "chunks" -- observations that are no more than "X" days apart (for example, X=20 days), we see from this example that there happens to be more than one "chunk" of observations per object. Here, for instance, rows 347,344,343,346 and 338,339,336,335,341 form the clusters of observations of our interest, "chunk-1" and "chunk-2", respectively.
To select the candidates we need to run a window function over the 'start_time' sorted list, where the function takes two elements (i.e., observations) and asks their distance in time. If the pair of observations is separated by less than, say, 20 days, they are selected for future processing.
Applying the filter to all objects
Now, defining a 20-day window as the selection criterion for all objects in our catalog, we end up with 2254 observations, done over 320 objects.
Table 5 adds this information through the column "obs_chunk", where a "Not-Available" value marks the observations that have not succeeded in the filtering applied.
Note
Step6: Filtered catalog
And here is the final catalog, where rows (i.e., observations) out of our interest (i.e., "obs_chunk == NaN") were removed. This catalog is written to 'Swift_Master_Stripe82_groups_filtered.csv'. | Python Code:
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
from IPython.display import HTML
HTML('''
<figure>
<img src="Stripe82_gal_projection.png" alt="Swift observations over Stripe82">
<figcaption>Figure 1: Swift observations over Stripe82</figcaption>
</figure>
''')
Explanation: SS82
Talking to Bruno about his project on stacking Swift observations and my project on Stripe82 SED we started to think about a collaboration to create a set of deep observations with Swift data. Deep Swift data being a great addition to the High-Energy end of Stripe82 data collection (see, for instance, LaMassa,2016).
Bruno then searched for all Swift-XRT observations inside the Stripe-82:
* RA: 310 : 60
* Dec: -1.25 : 1.25
Over the Stripe, Bruno has found ~3000 observations. See Figure 1 and tables Table 1 and Table 2.
Here, I'll do the filtering of the observations to keep only those useful for Paolo's stacking.
The selection looks for observations done within a time-range of a few days; for instance, 20 days is the window size I'll use here.
If all you want is to have a look at the final/filtered catalog, go straight to final section.
Otherwise, if the code used in this filtering does matter to you, you can show it by clicking the button below.
End of explanation
import pandas
cat = pandas.read_csv('Swift_Master_Stripe82_groups.ascii',
delim_whitespace=True)
print "Table 1: Sample of the catalog"
pandas.concat([cat.head(5),cat.tail(5)])
print "Table 2: Summary of the catalog columns"
cat.describe(include='all')
Explanation: The base catalog
Right below, in Table 1, we can see a sample of the catalog (the first/last five lines).
In Table 2, we see a brief description of the catalog.
End of explanation
cat['start_time'] = pandas.to_datetime(cat['start_time'])
cat_grouped_by_target = cat[['Target_Name','start_time']].groupby(['Target_Name'])
cat_descr = cat_grouped_by_target.describe().unstack()
cat_time = cat_descr.sort_values([('start_time','count')],ascending=False)
del cat_descr
Explanation: Target_Name is the name of the (central) object at each observation; from that we see we have 681 unique sources out of the 3035 observations. GroupSize is the number of overlapping observations; the average number is ~54. Let's see how sparse the observations are in time and how they are distributed for each source.
End of explanation
title = "Figure 2: Number of sources(Y axis) observed number of times(X axis)"
%matplotlib inline
from matplotlib import pyplot as plt
width = 16
height = 4
plt.figure(figsize=(width, height))
yticks = [2,10,50,100,200,300]
xticks = range(51)
ax = cat_time[('start_time','count')].plot.hist(bins=xticks,xlim=(0,50),title=title,grid=True,xticks=xticks,yticks=yticks,align='left')
ax.set_xlabel('Number of observations (per source)')
print "Table 3: Number counts and dates (first/last) of the observations (per object)"
cat_time
Explanation: Number of observations
To get a clue about the number of observations done over each object we can look at the counts shown by Table 3 and the histogram below (Figure 2).
End of explanation
print "Table 4: Observation carried out for source 'V1647ORI' sorted in time"
g = cat_grouped_by_target.get_group('V1647ORI')
g_sorted = g.sort_values('start_time')
g_sorted
Explanation: Filtering the data
First, a closer look at an example
To have a better idea of what we should find regarding the observation time of these sources, I'll take a particular one -- V1647ORI -- and see what we have for this source.
End of explanation
def find_clustered_observations(sorted_target_observations,time_range=10):
# Let's select a 'time_range' days window to select valid observations
window_size = time_range
g_sorted = sorted_target_observations
# an ordered dictionary works as a 'set' structure
from collections import OrderedDict
selected_allObs = OrderedDict()
# define en identificator for each cluster of observations, to ease future filtering
group_obs = 1
_last_time = None
_last_id = None
for _row in g_sorted.iterrows():
ind,row = _row
if _last_time is None:
_last_time = row.start_time
_last_id = ind
continue
_delta = row.start_time - _last_time
if _delta.days <= window_size:
selected_allObs[_last_id] = group_obs
selected_allObs[ind] = group_obs
else:
if len(selected_allObs):
group_obs = selected_allObs.values()[-1] + 1
_last_time = row.start_time
_last_id = ind
return selected_allObs
from collections import OrderedDict
obs_indx = OrderedDict()
for name,group in cat_grouped_by_target:
g_sorted = group.sort_values('start_time')
filtered_indxs = find_clustered_observations(g_sorted,time_range=20)
obs_indx.update(filtered_indxs)
import pandas
obsChunks_forFilteringCat = pandas.DataFrame(obs_indx.values(),columns=['obs_chunk'],index=obs_indx.keys())
# obsChunks_forFilteringCat.sort_index()
print "Table 5: original catalog with column 'obs_chunk' to flag which rows succeed the filtering (non-NA values)."
cat_with_obsChunksFlag = cat.join(obsChunks_forFilteringCat)
cols = list(cat_with_obsChunksFlag.columns)
cols.insert(2,cols.pop(-1))
cat_with_obsChunksFlag = cat_with_obsChunksFlag.ix[:,cols]
cat_with_obsChunksFlag
Explanation: If we consider each group of observations of our interest -- let me call them "chunks" -- observations that are no more than "X" days apart (for example, X=20 days), we see from this example that there happens to be more than one "chunk" of observations per object. Here, for instance, rows 347,344,343,346 and 338,339,336,335,341 form the clusters of observations of our interest, "chunk-1" and "chunk-2", respectively.
To select the candidates we need to run a window function over the 'start_time' sorted list, where the function takes two elements (i.e., observations) and asks their distance in time. If the pair of observations is separated by less than, say, 20 days, they are selected for future processing.
Applying the filter to all objects
Now, defining a 20-day window as the selection criterion for all objects in our catalog, we end up with 2254 observations, done over 320 objects.
Table 5 adds this information through the column "obs_chunk", where a "Not-Available" value marks the observations that have not succeeded in the filtering applied.
Note: obs_chunk values mean the groupings -- "chunks" -- formed within each object's set of observations. They are unique among each object's observations, but not across the entire catalog.
End of explanation
cat_filtered = cat_with_obsChunksFlag.dropna(subset=['obs_chunk'])
cat_filtered
cat_filtered.describe(include='all')
cat_filtered.to_csv('Swift_Master_Stripe82_groups_filtered.csv')
Explanation: Filtered catalog
And here is the final catalog, where rows (i.e., observations) out of our interest (i.e., "obs_chunk == NaN") were removed. This catalog is written to 'Swift_Master_Stripe82_groups_filtered.csv'.
End of explanation |
15,322 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Matrix
Step2: Find Maximum Element
Step3: Find Minimum Element
Step4: Find Maximum Element By Column
Step5: Find Maximum Element By Row | Python Code:
# Load library
import numpy as np
Explanation: Title: Find The Maximum And Minimum
Slug: find_maximum_and_minimum
Summary: How to find the maximum, minimum, and average of the elements in an array.
Date: 2017-09-03 12:00
Category: Machine Learning
Tags: Vectors Matrices Arrays
Authors: Chris Albon
Preliminaries
End of explanation
# Create matrix
matrix = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
Explanation: Create Matrix
End of explanation
# Return maximum element
np.max(matrix)
Explanation: Find Maximum Element
End of explanation
# Return minimum element
np.min(matrix)
Explanation: Find Minimum Element
End of explanation
# Find the maximum element in each column
np.max(matrix, axis=0)
Explanation: Find Maximum Element By Column
End of explanation
# Find the maximum element in each row
np.max(matrix, axis=1)
Explanation: Find Maximum Element By Row
End of explanation |
15,323 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
1. Logistic regression
The simple network we created is similar to a logistic regression model. Verify that the accuracy is close to that of sklearn.linear_model.LogisticRegression.
Step1: 2. Hidden layer
Try adding one or more "hidden" DenseLayers between the input and output. Experiment with different numbers of hidden units.
Step2: 3. Optimizer
Try one of the other algorithms available in lasagne.updates. You may also want to adjust the learning rate.
Visualize and compare the trained weights. Different optimization trajectories may lead to very different results, even if the performance is similar. This can be important when training more complicated networks. | Python Code:
# Uncomment and execute this cell for an example solution
# %load spoilers/logreg.py
Explanation: Exercises
1. Logistic regression
The simple network we created is similar to a logistic regression model. Verify that the accuracy is close to that of sklearn.linear_model.LogisticRegression.
End of explanation
# Uncomment and execute this cell for an example solution
# %load spoilers/hiddenlayer.py
Explanation: 2. Hidden layer
Try adding one or more "hidden" DenseLayers between the input and output. Experiment with different numbers of hidden units.
End of explanation
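For reference, a minimal sketch (not the spoiler solution) of inserting one hidden layer; the input shape and layer sizes below are assumptions made for illustration only:
import lasagne

l_in = lasagne.layers.InputLayer(shape=(None, 784))        # assumed input shape
l_hidden = lasagne.layers.DenseLayer(
    l_in, num_units=50, nonlinearity=lasagne.nonlinearities.rectify)
l_out = lasagne.layers.DenseLayer(
    l_hidden, num_units=10, nonlinearity=lasagne.nonlinearities.softmax)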
# Uncomment and execute this cell for an example solution
# %load spoilers/optimizer.py
Explanation: 3. Optimizer
Try one of the other algorithms available in lasagne.updates. You may also want to adjust the learning rate.
Visualize and compare the trained weights. Different optimization trajectories may lead to very different results, even if the performance is similar. This can be important when training more complicated networks.
End of explanation |
15,324 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to generate synthetic data for regression analysis
The datasets subpackage of scikit-learn provides the make_regression() command for generating synthetic data for testing regression analysis.
http
Step1: The linear model above is the following.
$$
y = 100 + 79.1725 x
$$
Increasing the noise argument increases $\text{Var}[e]$, and increasing the bias argument increases the y-intercept.
Step2: This time we generate sample data with n_features set so that there are two independent variables, and draw a scatter plot as follows. The value of the dependent variable is shown by the shading of the points.
Step3: If only one independent variable actually affects the y value, we generate the data as follows.
Step4: If the two independent variables are correlated, we generate the data as follows, and the correlation can also be seen in the scatter plot. | Python Code:
import matplotlib.pyplot as plt  # needed for the scatter plots below (preloaded in the original notebook environment)
from sklearn.datasets import make_regression
X, y, c = make_regression(n_samples=10, n_features=1, bias=0, noise=0, coef=True, random_state=0)
print("X\n", X)
print("y\n", y)
print("c\n", c)
plt.scatter(X, y, s=100)
plt.show()
Explanation: How to generate synthetic data for regression analysis
The datasets subpackage of scikit-learn provides the make_regression() command for generating synthetic data for testing regression analysis.
http://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_regression.html
Input and output elements
make_regression() has the following input and output elements.
Inputs
n_samples : integer (optional, default 100)
number of samples
n_features : integer (optional, default 100)
number (dimension) of independent variables (features)
n_targets : integer (optional, default 1)
number (dimension) of dependent variables (targets)
n_informative : integer (optional, default 10)
number (dimension) of independent variables (features) that are actually correlated with the dependent variable
effective_rank: integer or None (optional, default None)
number of mutually independent variables among the features; if None, all of them are independent
tail_strength : real number between 0 and 1 (optional, default 0.5)
when effective_rank is not None, determines the form of the correlation among the independent variables
bias : real number (optional, default 0.0)
intercept
noise : real number (optional, default 0.0)
standard deviation of the Gaussian noise added to the output, i.e. the dependent variable
coef : boolean (optional, default False)
if True, the coefficients of the linear model are also returned
random_state : integer (optional, default None)
seed for random number generation
Outputs
X : 2-dimensional array of shape [n_samples, n_features]
sample data of the independent variables
y : 1-dimensional array of shape [n_samples] or 2-dimensional array of shape [n_samples, n_targets]
sample data of the dependent variable
coef : 1-dimensional array of shape [n_features] or 2-dimensional array of shape [n_features, n_targets] (optional)
coefficients of the linear model, returned only when the input argument coef is True
For example, when there is one independent variable and one dependent variable, i.e. the linear model is given by the following equation,
$$ y = C_0 + C_1 x + e $$
sample data satisfying this relationship is generated as follows.
End of explanation
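A quick check of the input/output description above, using the X, y, c returned by the first generation cell (which used bias=0 and noise=0, so y should equal c times x exactly):
import numpy as np
np.allclose(y, X[:, 0] * c)  # True: with bias=0 and noise=0, y = c * x with no error term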
X, y, c = make_regression(n_samples=50, n_features=1, bias=100, noise=10, coef=True, random_state=0)
plt.scatter(X, y, s=100)
plt.show()
Explanation: The linear model above is the following.
$$
y = 100 + 79.1725 x
$$
Increasing the noise argument increases $\text{Var}[e]$, and increasing the bias argument increases the y-intercept.
End of explanation
X, y, c = make_regression(n_samples=300, n_features=2, noise=10, coef=True, random_state=0)
plt.scatter(X[:,0], X[:,1], c=y, s=100)
plt.xlabel("x1")
plt.ylabel("x2")
plt.axis("equal")
plt.show()
Explanation: This time we generate sample data with n_features set so that there are two independent variables, and draw a scatter plot as follows. The value of the dependent variable is shown by the shading of the points.
End of explanation
X, y, c = make_regression(n_samples=300, n_features=2, n_informative=1, noise=0, coef=True, random_state=0)
plt.scatter(X[:,0], X[:,1], c=y, s=100)
plt.xlabel("x1")
plt.ylabel("x2")
plt.axis("equal")
plt.show()
Explanation: If only one independent variable actually affects the y value, we generate the data as follows.
End of explanation
X, y, c = make_regression(n_samples=300, n_features=2, effective_rank=1, noise=0, tail_strength=0, coef=True, random_state=0)
plt.scatter(X[:,0], X[:,1], c=y, s=100)
plt.xlabel("x1")
plt.ylabel("x2")
plt.axis("equal")
plt.show()
X, y, c = make_regression(n_samples=300, n_features=2, effective_rank=1, noise=0, tail_strength=1, coef=True, random_state=0)
plt.scatter(X[:,0], X[:,1], c=y, s=100)
plt.xlabel("x1")
plt.ylabel("x2")
plt.axis("equal")
plt.show()
Explanation: If the two independent variables are correlated, we generate the data as follows, and the correlation can also be seen in the scatter plot.
End of explanation |
15,325 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyBroMo - 2. Generate smFRET data, including mixtures
<small><i>
This notebook is part of <a href="http
Step1: Create smFRET data-files
Create a file for a single FRET efficiency
In this section we show how to save a single smFRET data file. In the next section we will perform the same steps in a loop to generate a sequence of smFRET data files.
Here we load a diffusion simulation opening a file to save
timestamps in write mode. Use 'a' (i.e. append) to keep
previously simulated timestamps for the given diffusion.
Step2: Simulate timestamps of smFRET
Example1
Step3: Create the object that will run the simulation and print a summary
Step4: Run the simulation
Step5: Save simulation to a smFRET Photon-HDF5 file
Step6: Example 2
Step7: Burst analysis
The generated Photon-HDF5 files can be analyzed by any smFRET burst
analysis program. Here we show an example using the opensource
FRETBursts program
Step8: NOTE | Python Code:
%matplotlib inline
from pathlib import Path
import numpy as np
import tables
import matplotlib.pyplot as plt
import seaborn as sns
import pybromo as pbm
print('Numpy version:', np.__version__)
print('PyTables version:', tables.__version__)
print('PyBroMo version:', pbm.__version__)
Explanation: PyBroMo - 2. Generate smFRET data, including mixtures
<small><i>
This notebook is part of <a href="http://tritemio.github.io/PyBroMo" target="_blank">PyBroMo</a> a
python-based single-molecule Brownian motion diffusion simulator
that simulates confocal smFRET
experiments.
</i></small>
Overview
In this notebook we show how to generate smFRET data files from the diffusion trajectories.
Loading the software
Import all the relevant libraries:
End of explanation
S = pbm.ParticlesSimulation.from_datafile('0168', mode='w')
S.particles.diffusion_coeff_counts
#S = pbm.ParticlesSimulation.from_datafile('0168')
Explanation: Create smFRET data-files
Create a file for a single FRET efficiency
In this section we show how to save a single smFRET data file. In the next section we will perform the same steps in a loop to generate a sequence of smFRET data files.
Here we load a diffusion simulation opening a file to save
timestamps in write mode. Use 'a' (i.e. append) to keep
previously simulated timestamps for the given diffusion.
End of explanation
params = dict(
em_rates = (200e3,), # Peak emission rates (cps) for each population (D+A)
E_values = (0.75,), # FRET efficiency for each population
num_particles = (20,), # Number of particles in each population
bg_rate_d = 1500, # Poisson background rate (cps) Donor channel
bg_rate_a = 800, # Poisson background rate (cps) Acceptor channel
)
Explanation: Simulate timestamps of smFRET
Example1: single FRET population
Define the simulation parameters with the following syntax:
End of explanation
mix_sim = pbm.TimestapSimulation(S, **params)
mix_sim.summarize()
Explanation: Create the object that will run the simulation and print a summary:
End of explanation
rs = np.random.RandomState(1234)
mix_sim.run(rs=rs, overwrite=False, skip_existing=True)
Explanation: Run the simulation:
End of explanation
mix_sim.save_photon_hdf5(identity=dict(author='John Doe',
author_affiliation='Planet Mars'))
Explanation: Save simulation to a smFRET Photon-HDF5 file:
End of explanation
params = dict(
em_rates = (200e3, 180e3), # Peak emission rates (cps) for each population (D+A)
E_values = (0.75, 0.35), # FRET efficiency for each population
num_particles = (20, 15), # Number of particles in each population
bg_rate_d = 1500, # Poisson background rate (cps) Donor channel
bg_rate_a = 800, # Poisson background rate (cps) Acceptor channel
)
mix_sim = pbm.TimestapSimulation(S, **params)
mix_sim.summarize()
rs = np.random.RandomState(1234)
mix_sim.run(rs=rs, overwrite=False, skip_existing=True)
mix_sim.save_photon_hdf5()
Explanation: Example 2: 2 FRET populations
To simulate 2 populations we just define the parameters with
one value per population, except for the Poisson background
rate that is a single value for each channel.
End of explanation
import fretbursts as fb
filepath = list(Path('./').glob('smFRET_*'))
filepath
d = fb.loader.photon_hdf5(str(filepath[1]))
d
d.A_em
fb.dplot(d, fb.timetrace);
d.calc_bg(fun=fb.bg.exp_fit, tail_min_us='auto', F_bg=1.7)
d.bg_dd, d.bg_ad
d.burst_search(F=7)
d.num_bursts
ds = d.select_bursts(fb.select_bursts.size, th1=20)
ds.num_bursts
fb.dplot(d, fb.timetrace, bursts=True);
fb.dplot(ds, fb.hist_fret, pdf=False)
plt.axvline(0.75);
Explanation: Burst analysis
The generated Photon-HDF5 files can be analyzed by any smFRET burst
analysis program. Here we show an example using the opensource
FRETBursts program:
End of explanation
fb.bext.burst_data(ds)
Explanation: NOTE: Unless you simulated a diffusion of 30s or more the previous histogram will be very poor.
End of explanation |
15,326 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create some text
Step2: Apply regex | Python Code:
# Load regex package
import re
Explanation: Title: Match Any Character
Slug: match_any_character
Summary: Match Any Character
Date: 2016-05-01 12:00
Category: Regex
Tags: Basics
Authors: Chris Albon
Based on: Regular Expressions Cookbook
Preliminaries
End of explanation
# Create a variable containing a text string
text = 'The quick brown fox jumped over the lazy brown bear.'
Explanation: Create some text
End of explanation
# Find anything with a 'T' and then the next two characters
re.findall(r'T..', text)
Explanation: Apply regex
End of explanation |
15,327 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Calling the original FORTRAN code
Step2: Visualization of the Green's function
Step3: Convolution
Let $S(t)$ be a general source time function, then the displacement seismogram is given in terms of the Green's function $G$ via
\begin{equation}
u(\mathbf{x},t) = G(\mathbf{x},t; \mathbf{x}',t') \ast S(t)
\end{equation}
Exercise
Compute the convolution of the source time function 'ricker' with the Green's function of a Vertical displacement due to vertical loads. Plot the resulting displacement. | Python Code:
# Import all necessary libraries, this is a configuration step for the exercise.
# Please run it before the simulation code!
import numpy as np
import matplotlib.pyplot as plt
import os
from ricker import ricker
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
# Compile the source code (needs gfortran!)
!rm -rf lamb.exe output.txt input.txt
!gfortran canhfs.for -o lamb.exe
Explanation: <div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
<div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
<div style="position: relative ; top: 50% ; transform: translatey(-50%)">
<div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div>
<div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)"> Lamb's problem </div>
</div>
</div>
</div>
Seismo-Live: http://seismo-live.org
Authors:
David Vargas (@dvargas)
Heiner Igel (@heinerigel)
Basic Equations
The fundamental analytical solution to the three-dimensional Lamb’s problem, the problem of determining the elastic disturbance resulting from a point force in a homogeneous half-space, is implemented in this IPython notebook. This solution provides fundamental information used as a benchmark for comparison with entirely numerical solutions. A setup of the fundamental problem is illustrated below. The figure on the right-hand side is published in [1] (Figure 1. System of coordinates).
<p style="width:65%;float:right;padding-left:50px">
<img src=lambs_setup.png>
<span style="font-size:smaller">
</span>
</p>
Simulations of 3D elastic wave propagation need to be validated by the use of analytical solutions. In order to evaluate how healthy a numerical solution is, one may recreate conditions for which analytical solutions exist, with the aim of reproducing and comparing the different results.
We wish to find the displacement wavefield $\mathbf{u}(\mathbf{x},t)$ at some distance $\mathbf{x}$ from a seismic source with $ \mathbf{F} = f_1\mathbf{\hat{x}_1} + f_2\mathbf{\hat{x}_2} + f_3\mathbf{\hat{x}_3}$.
For a uniform elastic material and a Cartesian co-ordinate system the equation for the conservation of linear momentum can be written
\begin{align}
\rho(x) \frac{\partial^2}{\partial t^2} \mathbf{u}(\mathbf{x},t) = (\lambda + \mu)\nabla(\nabla\cdot\mathbf{u}(\mathbf{x},t)) + \mu\nabla^2 \mathbf{u}(\mathbf{x},t) + \mathbf{f}(\mathbf{x},t)
\end{align}
We will consider the case where the source function is localized in both time and space
\begin{align}
\mathbf{f}(\mathbf{x},t) = (f_1\mathbf{\hat{x}}_1 + f_2\mathbf{\hat{x}}_2 + f_3\mathbf{\hat{x}}_3)\,\delta(x_1 - x^{'}_{1})\,\delta(x_2 - x^{'}_{2})\,\delta(x_3 - x^{'}_{3})\,\delta(t - t^{'})
\end{align}
For such a source we will refer to the displacement solution as a Green’s function, and use the standard notation
\begin{align}
\mathbf{u(\mathbf{x},t)} = g_1(\mathbf{x},t;\mathbf{x^{'}},t^{'})\mathbf{\hat{x}_1} + g_2(\mathbf{x},t;\mathbf{x^{'}},t^{'})\mathbf{\hat{x}_2} + g_3(\mathbf{x},t;\mathbf{x^{'}},t^{'})\mathbf{\hat{x}_3}
\end{align}
The complete solution is found after applying the Laplace transform to the elastic wave equation, implementing the stress-free boundary condition, defining some transformations, and performing some algebraic manoeuvres. Then, the Green's function at the free surface is given:
\begin{align}
\begin{split}
\mathbf{G}(x_1,x_2,0,t;0,0,x^{'}_{3},0) & = \dfrac{1}{\pi^2\mu r} \dfrac{\partial}{\partial t}\int_{0}^{((t/r)^2 - \alpha^{-2})^{1/2}}\mathbf{H}(t-r/\alpha)\,\mathbb{R}[\eta_\alpha\sigma^{-1}((t/r)^2 - \alpha^{-2} - p^2)^{-1/2}\mathbf{M}(q,p,0,t,x^{'}_{3})\mathbf{F}]\, dp \\
& + \dfrac{1}{\pi^2\mu r} \dfrac{\partial}{\partial t}\int_{0}^{p_2}\mathbf{H}(t-t_2)\,\mathbb{R}[\eta_\beta\sigma^{-1}((t/r)^2 - \beta^{-2} - p^2)^{-1/2}\mathbf{N}(q,p,0,t,x^{'}_{3})\mathbf{F}]\, dp
\end{split}
\end{align}
Details on the involved terms are found in the original paper [2]. The Green's function $\mathbf{G}$ consists of three components of displacement evolving from the application of three components of force $\mathbf{F}$. If we assume that each component of $\mathbf{F}$ provokes three components of displacement, then $\mathbf{G}$ is composed of nine independent components that correspond one to one to the matrices $\mathbf{M}$ and $\mathbf{N}$. Without losing generality it is shown that four of them are equal to zero, and we end up with only five possible components.
<p style="text-align: justify;">
[1] Eduardo Kausel - Lamb's problem at its simplest, 2012</p>
<p style="text-align: justify;">
[2] Lane R. Johnson - Green’s Function for Lamb’s Problem, 1974</p>
End of explanation
# Initialization of setup:
# Figure 4 in Lane R. Johnson - Green’s Function for Lamb’s Problem, 1974
# is reproduced when the following parameters are given
# -----------------------------------------------------------------------------
r = 10.0 # km
vp = 8.0 # P-wave velocity km/s
vs = 4.62 # s-wave velocity km/s
rho = 3.3 # Density kg/m^3
nt = 512 # Number of time steps
dt = 0.01 # Time step s
h = 0.2 # Source position km (0.01 to reproduce Fig 2.16 of the book)
ti = 0.0 # Initial time s
var = [vp, vs, rho, nt, dt, h, r, ti]
# -----------------------------------------------------------------------------
# Execute fortran code
# -----------------------------------------------------------------------------
with open('input.txt', 'w') as f:
for i in var:
print(i, file=f, end=' ') # Write input for fortran code
f.close()
os.system("./lamb.exe") # Code execution
# -----------------------------------------------------------------------------
# Load the solution
# -----------------------------------------------------------------------------
G = np.genfromtxt('output.txt')
u_rx = G[:,0] # Radial displacement owing to horizontal load
u_tx = G[:,1] # Tangential displacement due to horizontal load
u_zx = G[:,2] # Vertical displacement owing to horizontal load
u_rz = G[:,3] # Radial displacement owing to a vertical load
u_zz = G[:,4] # Vertical displacement owing to vertical load
t = np.linspace(dt, nt*dt, nt) # Time axis
Explanation: Calling the original FORTRAN code
End of explanation
# Plotting
# -----------------------------------------------------------------------------
seis = [u_rx, u_tx, u_zx, u_rz, u_zz] # Collection of seismograms
labels = ['$u_{rx}(t) [cm]$','$u_{tx}(t)[cm]$','$u_{zx}(t)[cm]$','$u_{rz}(t)[cm]$','$u_{zz}(t)[cm]$']
cols = ['b','r','k','g','c']
# Initialize animated plot
fig = plt.figure(figsize=(12,8), dpi=80)
fig.suptitle("Green's Function for Lamb's problem", fontsize=16)
plt.ion() # set interactive mode
plt.show()
for i in range(5):
st = seis[i]
ax = fig.add_subplot(2, 3, i+1)
ax.plot(t, st, lw = 1.5, color=cols[i])
ax.set_xlabel('Time(s)')
ax.text(0.8*nt*dt, 0.8*max(st), labels[i], fontsize=16)
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
ax.spines['left'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['bottom'].set_position('zero')
ax.spines['top'].set_color('none')
ax.spines['left'].set_smart_bounds(True)
ax.spines['bottom'].set_smart_bounds(True)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
plt.show()
Explanation: Visualization of the Green's function
End of explanation
# call the source time function
T = 1/5 # Period
src = ricker(dt,T)
# Normalize source time function
src = src/max(src)
# Initialize source time function
f = np.zeros(nt)
f[0:int(2 * T/dt)] = src
# Compute convolution
u = np.convolve(u_zz, f)
u = u[0:nt]
# ---------------------------------------------------------------
# Plot Seismogram
# ---------------------------------------------------------------
fig = plt.figure(figsize=(12,4), dpi=80)
plt.subplot(1,3,1)
plt.plot(t, u_zz, color='r', lw=2)
plt.title('Green\'s function')
plt.xlabel('time [s]', size=16)
plt.ylabel('Displacement [cm]', size=14)
plt.xlim([0,nt*dt])
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.subplot(1,3,2)
plt.plot(t, f, color='k', lw=2)
plt.title('Source time function')
plt.xlabel('time [s]', size=16)
plt.xlim([0,nt*dt])
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.subplot(1,3,3)
plt.plot(t, u, color='b', lw=2)
plt.title('Displacement')
plt.xlabel('time [s]', size=16)
plt.xlim([0,nt*dt])
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.grid(True)
plt.show()
Explanation: Convolution
Let $S(t)$ be a general source time function, then the displacement seismogram is given in terms of the Green's function $G$ via
\begin{equation}
u(\mathbf{x},t) = G(\mathbf{x},t; \mathbf{x}',t') \ast S(t)
\end{equation}
Exercise
Compute the convolution of the source time function 'ricker' with the Green's function of a Vertical displacement due to vertical loads. Plot the resulting displacement.
End of explanation |
15,328 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Write a function
Step1: n = 10
Step2: n=100
Step3: converting binary to decimal
Step4: testing more binary to decimal conversions | Python Code:
n = 1
print n.bit_length()
a = n.bit_length()
print bin(n)
print '%0*d' % (a, int(bin(n)[2:]))
print '{0:08b}'.format(n)
Explanation: Write a function:
def solution(N)
that, given a positive integer N, returns the length of its longest binary gap. The function should return 0 if N doesn't contain a binary gap.
For example, given N = 1041 the function should return 5, because N has binary representation 10000010001 and so its longest binary gap is of length 5.
Assume that:
N is an integer within the range [1..2,147,483,647].
Complexity:
expected worst-case time complexity is O(log(N));
expected worst-case space complexity is O(1).
Analyzing number of bits
End of explanation
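For reference, a compact sketch of one possible solution (assuming Python 3; the exploratory cells below build up an equivalent loop-based version):
def solution_compact(N):
    # trailing zeros do not form a closed gap, so strip them before measuring runs of zeros
    return max(len(gap) for gap in bin(N)[2:].strip('0').split('1'))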
n = 10
print n.bit_length()
a = n.bit_length()
print bin(n)
print '%0*d' % (a, int(bin(n)[2:]))
print '{0:08b}'.format(n)
Explanation: n = 10
End of explanation
n = 100
print n.bit_length()
a = n.bit_length()
print bin(n)
print '%0*d' % (a, int(bin(n)[2:]))
print '{0:08b}'.format(n)
Explanation: n=100
End of explanation
n = 157
count_one = 0
count_gap = 0
binarygap = 0
for count in xrange(n.bit_length()):
print count
if n % 2:
count_one +=1
n = n /2
elif count_one == 1:
count_gap += 1
print "count gap: ", count_gap
n = n / 2
if count_one > 1:
if binarygap < count_gap:
binarygap = count_gap
print "binary gap", binarygap
count_one = 1
count_gap = 0
print binarygap
157 /2
78/2
Explanation: converting binary to decimal
End of explanation
n = 37
count_one = 0
count_gap = 0
binarygap = 0
for count in xrange(n.bit_length()):
print count
print "n", n
if n % 2:
count_one +=1
n = n /2
elif count_one == 1:
count_gap += 1
print "count gap: ", count_gap
n = n / 2
if count_one > 1:
if binarygap < count_gap:
binarygap = count_gap
print "binary gap", binarygap
count_one = 1
count_gap = 0
print binarygap
n = 142
count_one = 0
count_gap = 0
binarygap = 0
for count in xrange(n.bit_length()):
print count
print "n", n
if n % 2:
count_one +=1
n = n /2
elif count_one == 1:
count_gap += 1
print "count gap: ", count_gap
n = n / 2
else:
n = n/2
if count_one > 1:
if binarygap < count_gap:
binarygap = count_gap
print "binary gap", binarygap
count_one = 1
count_gap = 0
print binarygap
142 / 2
71/2
35/2
n = 488
count_one = 0
count_gap = 0
binarygap = 0
for count in xrange(n.bit_length()):
print count
print "n", n
if n % 2:
count_one +=1
n = n /2
elif count_one == 1:
count_gap += 1
print "count gap: ", count_gap
n = n / 2
else:
n = n/2
if count_one > 1:
if binarygap < count_gap:
binarygap = count_gap
print "binary gap", binarygap
count_one = 1
count_gap = 0
print binarygap
n = 181
count_one = 0
count_gap = 0
binarygap = 0
for count in xrange(n.bit_length()):
print count
print "n", n
if n % 2:
count_one +=1
n = n /2
elif count_one == 1:
count_gap += 1
print "count gap: ", count_gap
n = n / 2
else:
n = n/2
if count_one > 1:
if binarygap < count_gap:
binarygap = count_gap
print "binary gap", binarygap
count_one = 1
count_gap = 0
print binarygap
def solution(N):
count_one = 0
count_gap = 0
binarygap = 0
for count in xrange(N.bit_length()):
if N % 2:
count_one +=1
N = N /2
elif count_one == 1:
count_gap += 1
N = N / 2
else:
N = N/2
if count_one > 1:
if binarygap < count_gap:
binarygap = count_gap
count_one = 1
count_gap = 0
return binarygap
print solution(181)
Explanation: testing more binary to decimal conversions
End of explanation |
15,329 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Bour Equation </h1>
The Bour equation (a.k.a. the sine-Gordon equation) takes the canonical form
\begin{equation}
u_{xt}-\frac{1}{\rho^2}\sin(u)=0,
\end{equation}
where $-1/\rho^2$ equals the Gaussian curvature $\kappa$.
The Crank-Nicolson scheme for this equation takes the form
\begin{equation}
u^{n+1}_{i+1} - u^{n+1}_{i-1}=u^{n}_{i+1} - u^{n}_{i-1} + \frac{2hk}{\rho^2}\sin(u^n_i),
\end{equation}
with $i=1,\ldots,m$ and $n=1,\ldots,N$, given $m$ grid points and $N$ time steps.
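A brief justification (a standard centred-difference argument added here; it is not spelled out in the original text): approximating the mixed derivative at the grid point $(x_i, t_n)$ by
\begin{equation}
u_{xt}\big|_{i}^{\,n} \approx \frac{\left(u^{n+1}_{i+1}-u^{n+1}_{i-1}\right)-\left(u^{n}_{i+1}-u^{n}_{i-1}\right)}{2hk},
\end{equation}
and equating it to $\frac{1}{\rho^2}\sin(u^n_i)$ recovers the update formula above.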
<h4> Essential Libraries </h4>
Step1: <h4> Basic Definitions</h4>
Step5: <h4> Initial Value </h4>
Step6: <h4> Algorithm </h4>
Step7: <h4> Plots </h4> | Python Code:
# ----------------------------------------/
%matplotlib inline
# ----------------------------------------/
import math
import numpy as np
import matplotlib.pyplot as plt
import scipy.sparse.linalg as la
from pylab import *
from scipy import *
from ipywidgets import *
from scipy.sparse import spdiags
from numpy import asmatrix as MX
Explanation: <h1> Bour Equation </h1>
The Bour equation (a.k.a. the sine-Gordon equation) takes the canonical form
\begin{equation}
u_{xt}-\frac{1}{\rho^2}\sin(u)=0,
\end{equation}
where $-1/\rho^2$ equals the Gaussian curvature $\kappa$.
The Crank-Nicolson scheme for this equation takes the form
\begin{equation}
u^{n+1}_{i+1} - u^{n+1}_{i-1}=u^{n}_{i+1} - u^{n}_{i-1} + \frac{2hk}{\rho^2}\sin(u^n_i),
\end{equation}
with $i=1,\ldots,m$ and $n=1,\ldots,N$, given $m$ grid points and $N$ time steps.
<h4> Essential Libraries </h4>
End of explanation
# space, time domain
a, b, t = -50, 50, 20
# grid points
m = 256
# curvature
r = 1.5
# spatial domain
x = np.linspace(a, b, m)
# mesh width
h = (b - a)/(1.0 + m)
# time steps
k = 0.001 * h
n = int(t / k)
# vectors solution with ghost boundaries
u = np.zeros(m + 2)
v = np.zeros((m + 2, n))
# triangular structure
o = np.ones(m)
# coefficients matrix
A = spdiags( [-1*o, 0*o, 1*o], [-1, 0, 1], m, m).toarray()
Explanation: <h4> Basic Definitions</h4>
End of explanation
def f(x, alpha, beta, a, b):
    """Initial guess or initial value condition.

    kink:     4*np.arctan(np.exp(beta/r * x + alpha))
    breather: 4*np.arctan( (alpha*np.cos(b*x)) / (beta*np.cosh(a*x)) )
    """
    return 4*np.arctan( (alpha*np.cos(b*x)) / (beta*np.cosh(a*x)) )
# --------------------------------------------------/
def g(u):
    """RHS vector"""
    return u[2:] - u[:-2] - (2*h*k/r**2)*np.sin(u[1:-1])
# vectors solution with ghost boundaries
u = np.zeros(m + 4)
v = np.zeros((m + 4, n))
# f(x, alpha, beta, a, b)
# NUMERICAL TESTS/EXPERIMENTS
#   alpha  beta   a     b
#   -----------------------
#   10.0   1.05   0.85  0.125
#   10.0   1.05   0.75  0.010
#    8.0   0.85   0.75  0.010
#    1.0   0.50   0.85  0.200
u[2:-2] = f(x - 5, 1, 0.5, 0.85, 0.2)
Explanation: <h4> Initial Value </h4>
End of explanation
for j in range(n):
v[:,j] = u
u[1:-1] = dot(MX(A).I, g(u))
Explanation: <h4> Algorithm </h4>
End of explanation
def evolution(step):
l = 2e-1
plt.figure(figsize=(10,5))
plt.plot(x, v[1:-1,step], lw=2, alpha=0.75, color='deeppink', linestyle=':')
plt.grid(color='lightslategray', alpha=0.90)
plt.xlim(x.min() - l, x.max() + l)
plt.ylim(v.min() - l, v.max() + l)
# --------------------------------------------------/
# interactive plot
step = widgets.IntSlider(min=0, max=n-1, description='step')
interact(evolution, step=step)
Explanation: <h4> Plots </h4>
End of explanation |
15,330 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Huntsman Telephoto Array specifications
Introduction
The Huntsman Telephoto Array is an astronomical imaging system consisting of 1-10 imaging units attached to a telescope mount. The concept is closely based on the Dragonfly Telephoto Array, pictured below.
Imaging units
Each imaging unit comprises a Canon EF 400mm f/2.8L IS II USM camera lens, an SBIG STF-8300M ccd camera and an adaptor from Birger Engineering. The prototype imaging unit ('the Huntsman Eye') is shown below. The tripod adaptor bracket is bolted to the main lens body, this bracket could be removed and the bolt holes used for direct attachment to a support structure.
Telescope mount
The Huntsman Telephoto Array will use a Software Bisque Paramount ME II telescope mount. A Solidworks eDrawing is available as part of the mount documentation set.
Support structure
The Huntsman Telephoto Array support structure must enable the imaging units to be assembled into an array and attached to the telescope mount. The Dragonfly Telephoto Array has adopted a modular solution based on tubular structures around each lens, as can be seen in the photo below.
Enclosure
The Huntsman Telephoto Array is expected to be housed in an Astro Dome 4500, a 4.5 metre diameter telescope dome with a 1.2 metre wide aperture. A 21 year old example at Mt Kent Observatory is shown below.
Science requirements
Spatial sampling
Derived from the specifications of the chosen hardware.
Step1: Each imaging unit shall deliver an on-sky spatial sampling of $2.8\pm 0.1'' /$ pixel
Field of view
Derived from the specifications of the chosen hardware.
Step2: Each imaging unit shall deliver an instantaneous field of view of $2.6 \pm 0.1 \times 1.9 \pm 0.1$ degrees
Exposure time
Individual exposure times of 5-30 minutes are anticipated (5-10 minutes for broadband observations, 30 minutes for narrowband).
Step3: The system shall meet all requirements with exposure times of up to 30 minutes
Number of imaging units
The maximum number of imaging units per telescope mount is really determined by the mount payload mass limit and the aperture size of the enclosure. The Dragonfly Telephoto Array are currently operating with 10 imaging units on a single mount, the Huntsman Telephoto Array should be capable of at least matching this.
Step4: The system shall support up to at least 10 imaging units per telescope mount
Imaging unit alignment
Given the large field of view tight coalignment of individual imaging units is not required, or even particularly desirable.
Step5: All imaging units should point in the same direction to within a tolerance of 5 arcminutes radius on sky (TBC)
All data will be resampled prior to combination so some relative rotation between imaging units is acceptable.
Step6: All imaging units shall have the camera y axis aligned with the North-South axis to within a tolerance of $\pm$2.5 degrees (TBC)
Image quality
Abraham & van Dokkum (2014) report that imaging units of the design proposed for the Huntsman Telephoto Array are capable of producing a point spread function (PSF) with full width at half maximum (FWHM) of $\sim1.5''$, as measured by (undersampled) 3rd order polynomial fitting by SExtractor. When image sensor tilts (PSF degradation $<0.4''$) and imperfect telescope tracking are taken into account average FWHM of $< 2''$ were still achieved across the entire field of view. The Huntsman Telephoto Array should at least match this.
Step7: The system shall deliver a PSF with average FWHM $< 2''$ over the full field of view, as measured using a 3rd order polynomial fit performed wth the SExtrator software
Filters
For the primary science project we anticipate using SDSS-type g & r bandpass filters, typically with half of the imaging units equipped with one and half with the other though there may be targets for which we would want to use a different mix of filters. During bright of Moon it will not be possible to make useful observations for the primary science project and so during these times we may use narrowband filters, e.g. H-$\alpha$. To do this it must be possible to change filters between nights but it is not necessary that this be a motorised/automated process.
Each imaging unit shall be equipped with an optical bandpass filter
It must be possible to change filters between nights
The set of filters shall contain at least one SDSS-type filter of either g or r band for each imaging unit
Sky coverage
The system should allow the observation of targets at any position on the sky that corresponds to a reasonable airmass, i.e. $<2$.
Step8: The system shall satisfy all functional requirements (e.g. image quality, alignment) while observing any sky position with a zenith distance less than 60 degrees. The system is not required to meet functional requirements if observing a sky position with a zenith distance of greater than 60 degrees
Mechanical requirements
Support structure(s)
The mechanical support structure(s) shall allow the number of imaging units specified in the science requirements to be attached to the telescope mount
Step9: Imaging unit interface
The support structure(s) shall attach to the imaging units via the Canon EF 400mm f/2.8L IS II USM camera lens tripod mount bolt hole pattern and/or clamping of the camera lens body
Telescope mount interface
The support structure(s) shall attach to the telescope mount via the standard interface plate, the Paramount ME II Versa-Plate (drawing here)
Alignment
The support structure(s) shall ensure that the imaging units are aligned to within the tolerances specified in the science requirements
Step10: Flexure
The support structure(s) must be rigid enough so that flexure will not prevent the system from achieving the image quality specification from the science requirements. This requires the pointing of all imaging units to remain constant relative to either the telescope mount axes (if not autoguiding) or the autoguider pointing (if using autoguiding) to within a set tolerance for the duration of any individual exposure.
The tolerance can be calculated from the delivered image quality specification and expected imaging unit image quality.
Step11: A given exposure time corresponds to an angle of rotation about the telescope mount hour angle axis.
Step12: The support structure(s) shall ensure that the pointing of all imaging units shall remain fixed relative to the telescope mount axes to within 0.27 arcseconds rms while the hour angle axis rotates through any 7.5 degree angle, for any position of the declination axis, within the sky coverage requirement's zenith distance range
Step13: Mass
The telescope mount is rated for a maximum payload (not including counterweights) of 109 kg, therefore the total mass of imaging units plus support structure(s) should not exceed this value. The mass of the lens is 4.1 kg (source here), the mass of the CCD camera is 0.8 kg (source here) and the mass of the adaptor is estimated to be no more than 0.2 kg. | Python Code:
import math
from astropy import units as u
pixel_pitch = 5.4 * u.micron / u.pixel # STF-8300M pixel pitch
focal_length = 400 * u.millimeter # Canon EF 400 mm f/2.8L IS II USM focal length
resolution = (3326, 2504) * u.pixel # STF-8300M resolution in pixels, (x, y)
sampling = (pixel_pitch / focal_length).to(u.radian/u.pixel, equivalencies = u.equivalencies.dimensionless_angles())
sampling.to(u.arcsec/u.pixel)
Explanation: Huntsman Telephoto Array specifications
Introduction
The Huntsman Telephoto Array is an astronomical imaging system consisting of 1-10 imaging units attached to a telescope mount. The concept is closely based on the Dragonfly Telephoto Array, pictured below.
Imaging units
Each imaging unit comprises a Canon EF 400mm f/2.8L IS II USM camera lens, an SBIG STF-8300M ccd camera and an adaptor from Birger Engineering. The prototype imaging unit ('the Huntsman Eye') is shown below. The tripod adaptor bracket is bolted to the main lens body, this bracket could be removed and the bolt holes used for direct attachment to a support structure.
Telescope mount
The Huntsman Telephoto Array will use a Software Bisque Paramount ME II telescope mount. A Solidworks eDrawing is available as part of the mount documentation set.
Support structure
The Huntsman Telephoto Array support structure must enable the imaging units to be assembled into an array and attached to the telescope mount. The Dragonfly Telephoto Array has adopted a modular solution based on tubular structures around each lens, as can be seen in the photo below.
Enclosure
The Huntsman Telephoto Array is expected to be housed in an Astro Dome 4500, a 4.5 metre diameter telescope dome with a 1.2 metre wide aperture. A 21 year old example at Mt Kent Observatory is shown below.
Science requirements
Spatial sampling
Derived from the specifications of the chosen hardware.
End of explanation
fov = resolution * sampling
fov.to(u.degree)
Explanation: Each imaging unit shall deliver an on-sky spatial sampling of $2.8\pm 0.1'' /$ pixel
Field of view
Derived from the specifications of the chosen hardware.
End of explanation
exposure_times = ((5, 10, 30) * u.minute)
exposure_times
Explanation: Each imaging unit shall deliver an instantaneous field of view of $2.6 \pm 0.1 \times 1.9 \pm 0.1$ degrees
Exposure time
Individual exposure times of 5-30 minutes are anticipated (5-10 minutes for broadband observations, 30 minutes for narrowband).
End of explanation
n_units = (1, 4, 10)
n_units
Explanation: The system shall meet all requirements with exposure times of up to 30 minutes
Number of imaging units
The maximum number of imaging units per telescope mount is really determined by the mount payload mass limit and the aperture size of the enclosure. The Dragonfly Telephoto Array are currently operating with 10 imaging units on a single mount, the Huntsman Telephoto Array should be capable of at least matching this.
End of explanation
coalignment_tolerance = 5 * u.arcminute
coalignment_tolerance
Explanation: The system shall support up to at least 10 imaging units per telescope mount
Imaging unit alignment
Given the large field of view tight coalignment of individual imaging units is not required, or even particularly desirable.
End of explanation
north_alignment_tolerance = 2.5 * u.degree
north_alignment_tolerance
Explanation: All imaging units should point in the same direction to within a tolerance of 5 arcminutes radius on sky (TBC)
All data will be resampled prior to combination so some relative rotation between imaging units is acceptable.
End of explanation
central_fwhm = 1.5 * u.arcsecond
tilt_fwhm_degradation = 0.4 * u.arcsecond
max_fwhm = 2 * u.arcsecond
max_fwhm
Explanation: All imaging units shall have the camera y axis aligned with the North-South axis to within a tolerance of $\pm$2.5 degrees (TBC)
Image quality
Abraham & van Dokkum (2014) report that imaging units of the design proposed for the Huntsman Telephoto Array are capable of producing a point spread function (PSF) with full width at half maximum (FWHM) of $\sim1.5''$, as measured by (undersampled) 3rd order polynomial fitting by SExtractor. When image sensor tilts (PSF degradation $<0.4''$) and imperfect telescope tracking are taken into account average FWHM of $< 2''$ were still achieved across the entire field of view. The Huntsman Telephoto Array should at least match this.
End of explanation
max_zenith_distance = 60 * u.degree
max_zenith_distance
Explanation: The system shall deliver a PSF with average FWHM $< 2''$ over the full field of view, as measured using a 3rd order polynomial fit performed with the SExtractor software
Filters
For the primary science project we anticipate using SDSS-type g & r bandpass filters, typically with half of the imaging units equipped with one and half with the other though there may be targets for which we would want to use a different mix of filters. During bright of Moon it will not be possible to make useful observations for the primary science project and so during these times we may use narrowband filters, e.g. H-$\alpha$. To do this it must be possible to change filters between nights but it is not necessary that this be a motorised/automated process.
Each imaging unit shall be equipped with an optical bandpass filter
It must be possible to change filters between nights
The set of filters shall contain at least one SDSS-type filter of either g or r band for each imaging unit
Sky coverage
The system should allow the observation of targets at any position on the sky that corresponds to a reasonable airmass, i.e. $<2$.
End of explanation
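As a rough cross-check (a sketch added here, not in the original): for a plane-parallel atmosphere the airmass is approximately the secant of the zenith distance, so a 60 degree limit corresponds to the airmass < 2 goal.
import math
# airmass ~ sec(zenith distance); at 60 degrees this evaluates to 2
1 / math.cos(math.radians(60))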
n_units
Explanation: The system shall satisfy all functional requirements (e.g. image quality, alignment) while observing any sky position with a zenith distance less than 60 degrees. The system is not required to meet functional requirements if observing a sky position with a zenith distance of greater than 60 degrees
Mechanical requirements
Support structure(s)
The mechanical support structure(s) shall allow the number of imaging units specified in the science requirements to be attached to the telescope mount
End of explanation
coalignment_tolerance
north_alignment_tolerance
Explanation: Imaging unit interface
The support structure(s) shall attach to the imaging units via the Canon EF 400mm f/2.8L IS II USM camera lens tripod mount bolt hole pattern and/or clamping of the camera lens body
Telescope mount interface
The support structure(s) shall attach to the telescope mount via the standard interface plate, the Paramount ME II Versa-Plate (drawing here)
Alignment
The support structure(s) shall ensure that the imaging units are aligned to within the tolerances specified in the science requirements
End of explanation
fwhm_to_rms = (2 * (2 * math.log(2))**0.5)**-1
max_flexure_rms = fwhm_to_rms * (max_fwhm**2 - (central_fwhm + tilt_fwhm_degradation)**2)**0.5
max_flexure_rms
Explanation: Flexure
The support structure(s) must be rigid enough so that flexure will not prevent the system from achieving the image quality specification from the science requirements. This requires the pointing of all imaging units to remain constant relative to either the telescope mount axes (if not autoguiding) or the autoguider pointing (if using autoguiding) to within a set tolerance for the duration of any individual exposure.
The tolerance can be calculated from the delivered image quality specification and expected imaging unit image quality.
End of explanation
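As an illustrative conversion (a sketch assuming the plate scale defined earlier in this notebook): the ~0.27 arcsecond rms pointing tolerance can be expressed as a linear displacement at the focal plane.
# Convert the rms angular tolerance into a focal-plane displacement using the 400 mm focal length
flexure_displacement = (max_flexure_rms.to(u.radian).value * focal_length).to(u.micron)
flexure_displacement  # roughly 0.5 micron, i.e. about a tenth of a 5.4 micron pixel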
ha_angles = (exposure_times.to(u.hour) * (u.hourangle / u.hour)).to(u.degree)
ha_angles
Explanation: A given exposure time corresponds to an angle of rotation about the telescope mount hour angle axis.
End of explanation
max_zenith_distance
Explanation: The support structure(s) shall ensure that the pointing of all imaging units shall remain fixed relative to the telescope mount axes to within 0.27 arcseconds rms while the hour angle axis rotates through any 7.5 degree angle, for any position of the declination axis, within the sky coverage requirement's zenith distance range
End of explanation
lens_mass = 4.1 * u.kilogram
camera_mass = 0.8 * u.kilogram
adaptor_mass = 0.2 * u.kilogram
imaging_unit_mass = lens_mass + camera_mass + adaptor_mass
max_payload_mass = 109 * u.kilogram
max_struture_mass = max_payload_mass - max(n_units) * imaging_unit_mass
max_struture_mass
Explanation: Mass
The telescope mount is rated for a maximum payload (not including counterweights) of 109 kg, therefore the total mass of imaging units plus support structure(s) should not exceed this value. The mass of the lens is 4.1 kg (source here), the mass of the CCD camera is 0.8 kg (source here) and the mass of the adaptor is estimated to be no more than 0.2 kg.
End of explanation |
15,331 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Weighted Generalized Linear Models
Step1: Weighted GLM
Step2: Load the data into a pandas dataframe.
Step3: The dependent (endogenous) variable is affairs
Step4: In the following we will work mostly with Poisson. While using decimal affairs works, we convert them to integers to have a count distribution.
Step5: Condensing and Aggregating observations
We have 6366 observations in our original dataset. When we consider only some selected variables, then we have fewer unique observations. In the following we combine observations in two ways, first we combine observations that have values for all variables identical, and secondly we combine observations that have the same explanatory variables.
Dataset with unique observations
We use pandas's groupby to combine identical observations and create a new variable freq that count how many observation have the values in the corresponding row.
Step6: Dataset with unique explanatory variables (exog)
For the next dataset we combine observations that have the same values of the explanatory variables. However, because the response variable can differ among combined observations, we compute the mean and the sum of the response variable for all combined observations.
We use again pandas groupby to combine observations and to create the new variables. We also flatten the MultiIndex into a simple index.
Step7: After combining observations with have a dataframe dc with 467 unique observations, and a dataframe df_a with 130 observations with unique values of the explanatory variables.
Step8: Analysis
In the following, we compare the GLM-Poisson results of the original data with models of the combined observations where the multiplicity or aggregation is given by weights or exposure.
original data
Step9: condensed data (unique observations with frequencies)
Combining identical observations and using frequency weights to take into account the multiplicity of observations produces exactly the same results. Some results attribute will differ when we want to have information about the observation and not about the aggregate of all identical observations. For example, residuals do not take freq_weights into account.
Step10: condensed using var_weights instead of freq_weights
Next, we compare var_weights to freq_weights. It is a common practice to incorporate var_weights when the endogenous variable reflects averages and not identical observations.
I don't see a theoretical reason why it produces the same results (in general).
This produces the same results but df_resid differs the freq_weights example because var_weights do not change the number of effective observations.
Step11: Dispersion computed from the results is incorrect because of wrong df_resid.
It is correct if we use the original df_resid.
Step12: aggregated or averaged data (unique values of explanatory variables)
For these cases we combine observations that have the same values of the explanatory variables. The corresponding response variable is either a sum or an average.
using exposure
If our dependent variable is the sum of the responses of all combined observations, then under the Poisson assumption the distribution remains the same but we have varying exposure given by the number of individuals that are represented by one aggregated observation.
The parameter estimates and covariance of parameters are the same with the original data, but log-likelihood, deviance and Pearson chi-squared differ
Step13: using var_weights
We can also use the mean of all combined values of the dependent variable. In this case the variance will be related to the inverse of the total exposure reflected by one combined observation.
Step14: Comparison
We saw in the summary prints above that params and cov_params with associated Wald inference agree across versions. We summarize this in the following comparing individual results attributes across versions.
Parameter estimates params, standard errors of the parameters bse and pvalues of the parameters for the tests that the parameters are zeros all agree. However, the likelihood and goodness-of-fit statistics, llf, deviance and pearson_chi2 only partially agree. Specifically, the aggregated version do not agree with the results using the original data.
Warning
Step15: Likelihood Ratio type tests
We saw above that likelihood and related statistics do not agree between the aggregated and original, individual data. We illustrate in the following that likelihood ratio test and difference in deviance aggree across versions, however Pearson chi-squared does not.
As before
Step16: aggregated data
Step17: Investigating Pearson chi-square statistic
First, we do some sanity checks that there are no basic bugs in the computation of pearson_chi2 and resid_pearson.
Step18: One possible reason for the incorrect sign is that we are subtracting quadratic terms that are divided by different denominators. In some related cases, the recommendation in the literature is to use a common denominator. We can compare pearson chi-squared statistic using the same variance assumption in the full and reduced model.
In this case we obtain the same pearson chi2 scaled difference between reduced and full model across all versions. (Issue #3616 is intended to track this further.)
Step19: Remainder
The remainder of the notebook just contains some additional checks and can be ignored. | Python Code:
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import statsmodels.api as sm
Explanation: Weighted Generalized Linear Models
End of explanation
print(sm.datasets.fair.NOTE)
Explanation: Weighted GLM: Poisson response data
Load data
In this example, we'll use the affair dataset using a handful of exogenous variables to predict the extra-marital affair rate.
Weights will be generated to show that freq_weights are equivalent to repeating records of data. On the other hand, var_weights is equivalent to aggregating data.
End of explanation
data = sm.datasets.fair.load_pandas().data
Explanation: Load the data into a pandas dataframe.
End of explanation
data.describe()
data[:3]
Explanation: The dependent (endogenous) variable is affairs
End of explanation
data["affairs"] = np.ceil(data["affairs"])
data[:3]
(data["affairs"] == 0).mean()
np.bincount(data["affairs"].astype(int))
Explanation: In the following we will work mostly with Poisson. While using decimal affairs works, we convert them to integers to have a count distribution.
End of explanation
data2 = data.copy()
data2['const'] = 1
dc = data2['affairs rate_marriage age yrs_married const'.split()].groupby('affairs rate_marriage age yrs_married'.split()).count()
dc.reset_index(inplace=True)
dc.rename(columns={'const': 'freq'}, inplace=True)
print(dc.shape)
dc.head()
Explanation: Condensing and Aggregating observations
We have 6366 observations in our original dataset. When we consider only some selected variables, then we have fewer unique observations. In the following we combine observations in two ways, first we combine observations that have values for all variables identical, and secondly we combine observations that have the same explanatory variables.
Dataset with unique observations
We use pandas's groupby to combine identical observations and create a new variable freq that counts how many observations have the values in the corresponding row.
End of explanation
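As a quick sanity check (a sketch added here, not part of the original notebook), the frequency weights should add back up to the original 6366 rows.
# The sum of the frequency counts recovers the original number of observations
dc['freq'].sum(), len(data)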
gr = data['affairs rate_marriage age yrs_married'.split()].groupby('rate_marriage age yrs_married'.split())
df_a = gr.agg(['mean', 'sum','count'])
def merge_tuple(tpl):
if isinstance(tpl, tuple) and len(tpl) > 1:
return "_".join(map(str, tpl))
else:
return tpl
df_a.columns = df_a.columns.map(merge_tuple)
df_a.reset_index(inplace=True)
print(df_a.shape)
df_a.head()
Explanation: Dataset with unique explanatory variables (exog)
For the next dataset we combine observations that have the same values of the explanatory variables. However, because the response variable can differ among combined observations, we compute the mean and the sum of the response variable for all combined observations.
We use again pandas groupby to combine observations and to create the new variables. We also flatten the MultiIndex into a simple index.
End of explanation
print('number of rows: \noriginal, with unique observations, with unique exog')
data.shape[0], dc.shape[0], df_a.shape[0]
Explanation: After combining observations with have a dataframe dc with 467 unique observations, and a dataframe df_a with 130 observations with unique values of the explanatory variables.
End of explanation
glm = smf.glm('affairs ~ rate_marriage + age + yrs_married',
data=data, family=sm.families.Poisson())
res_o = glm.fit()
print(res_o.summary())
res_o.pearson_chi2 / res_o.df_resid
Explanation: Analysis
In the following, we compare the GLM-Poisson results of the original data with models of the combined observations where the multiplicity or aggregation is given by weights or exposure.
original data
End of explanation
glm = smf.glm('affairs ~ rate_marriage + age + yrs_married',
data=dc, family=sm.families.Poisson(), freq_weights=np.asarray(dc['freq']))
res_f = glm.fit()
print(res_f.summary())
res_f.pearson_chi2 / res_f.df_resid
Explanation: condensed data (unique observations with frequencies)
Combining identical observations and using frequency weights to take into account the multiplicity of observations produces exactly the same results. Some results attributes will differ when we want to have information about the observation and not about the aggregate of all identical observations. For example, residuals do not take freq_weights into account.
End of explanation
glm = smf.glm('affairs ~ rate_marriage + age + yrs_married',
data=dc, family=sm.families.Poisson(), var_weights=np.asarray(dc['freq']))
res_fv = glm.fit()
print(res_fv.summary())
Explanation: condensed using var_weights instead of freq_weights
Next, we compare var_weights to freq_weights. It is a common practice to incorporate var_weights when the endogenous variable reflects averages and not identical observations.
I don't see a theoretical reason why it produces the same results (in general).
This produces the same results, but df_resid differs from the freq_weights example because var_weights do not change the number of effective observations.
End of explanation
res_fv.pearson_chi2 / res_fv.df_resid, res_f.pearson_chi2 / res_f.df_resid
Explanation: Dispersion computed from the results is incorrect because of wrong df_resid.
It is correct if we use the original df_resid.
End of explanation
glm = smf.glm('affairs_sum ~ rate_marriage + age + yrs_married',
data=df_a, family=sm.families.Poisson(), exposure=np.asarray(df_a['affairs_count']))
res_e = glm.fit()
print(res_e.summary())
res_e.pearson_chi2 / res_e.df_resid
Explanation: aggregated or averaged data (unique values of explanatory variables)
For these cases we combine observations that have the same values of the explanatory variables. The corresponding response variable is either a sum or an average.
using exposure
If our dependent variable is the sum of the responses of all combined observations, then under the Poisson assumption the distribution remains the same but we have varying exposure given by the number of individuals that are represented by one aggregated observation.
The parameter estimates and covariance of parameters are the same with the original data, but log-likelihood, deviance and Pearson chi-squared differ
End of explanation
glm = smf.glm('affairs_mean ~ rate_marriage + age + yrs_married',
data=df_a, family=sm.families.Poisson(), var_weights=np.asarray(df_a['affairs_count']))
res_a = glm.fit()
print(res_a.summary())
Explanation: using var_weights
We can also use the mean of all combined values of the dependent variable. In this case the variance will be related to the inverse of the total exposure reflected by one combined observation.
End of explanation
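As a quick consistency check (a sketch added here): in the aggregated dataframe the summed response should equal the mean response times the group count.
# affairs_sum, affairs_mean and affairs_count were created by the groupby aggregation above
np.allclose(df_a['affairs_sum'], df_a['affairs_mean'] * df_a['affairs_count'])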
results_all = [res_o, res_f, res_e, res_a]
names = 'res_o res_f res_e res_a'.split()
pd.concat([r.params for r in results_all], axis=1, keys=names)
pd.concat([r.bse for r in results_all], axis=1, keys=names)
pd.concat([r.pvalues for r in results_all], axis=1, keys=names)
pd.DataFrame(np.column_stack([[r.llf, r.deviance, r.pearson_chi2] for r in results_all]),
columns=names, index=['llf', 'deviance', 'pearson chi2'])
Explanation: Comparison
We saw in the summary prints above that params and cov_params with associated Wald inference agree across versions. We summarize this in the following comparing individual results attributes across versions.
Parameter estimates params, standard errors of the parameters bse and pvalues of the parameters for the tests that the parameters are zero all agree. However, the likelihood and goodness-of-fit statistics, llf, deviance and pearson_chi2, only partially agree. Specifically, the aggregated versions do not agree with the results using the original data.
Warning: The behavior of llf, deviance and pearson_chi2 might still change in future versions.
Both the sum and average of the response variable for unique values of the explanatory variables have a proper likelihood interpretation. However, this interpretation is not reflected in these three statistics. Computationally this might be due to missing adjustments when aggregated data is used. However, theoretically we can think in these cases, especially for var_weights of the misspecified case when likelihood analysis is inappropriate and the results should be interpreted as quasi-likelihood estimates. There is an ambiguity in the definition of var_weights because they can be used for averages with correctly specified likelihood as well as for variance adjustments in the quasi-likelihood case. We are currently not trying to match the likelihood specification. However, in the next section we show that likelihood ratio type tests still produce the same result for all aggregation versions when we assume that the underlying model is correctly specified.
End of explanation
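As a quick numerical cross-check of the agreement claimed above (a sketch, not part of the original notebook):
import numpy as np
# Parameter estimates and standard errors should agree across all four versions
print(all(np.allclose(res_o.params, r.params) for r in [res_f, res_e, res_a]))
print(all(np.allclose(res_o.bse, r.bse) for r in [res_f, res_e, res_a]))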
glm = smf.glm('affairs ~ rate_marriage + yrs_married',
data=data, family=sm.families.Poisson())
res_o2 = glm.fit()
#print(res_f2.summary())
res_o2.pearson_chi2 - res_o.pearson_chi2, res_o2.deviance - res_o.deviance, res_o2.llf - res_o.llf
glm = smf.glm('affairs ~ rate_marriage + yrs_married',
data=dc, family=sm.families.Poisson(), freq_weights=np.asarray(dc['freq']))
res_f2 = glm.fit()
#print(res_f2.summary())
res_f2.pearson_chi2 - res_f.pearson_chi2, res_f2.deviance - res_f.deviance, res_f2.llf - res_f.llf
Explanation: Likelihood Ratio type tests
We saw above that likelihood and related statistics do not agree between the aggregated and original, individual data. We illustrate in the following that the likelihood ratio test and the difference in deviance agree across versions; however, Pearson chi-squared does not.
As before: This is not sufficiently clear yet and could change.
As a test case we drop the age variable and compute the likelihood ratio type statistics as the difference between the reduced (constrained) and the full (unconstrained) model.
original observations and frequency weights
End of explanation
glm = smf.glm('affairs_sum ~ rate_marriage + yrs_married',
data=df_a, family=sm.families.Poisson(), exposure=np.asarray(df_a['affairs_count']))
res_e2 = glm.fit()
res_e2.pearson_chi2 - res_e.pearson_chi2, res_e2.deviance - res_e.deviance, res_e2.llf - res_e.llf
glm = smf.glm('affairs_mean ~ rate_marriage + yrs_married',
data=df_a, family=sm.families.Poisson(), var_weights=np.asarray(df_a['affairs_count']))
res_a2 = glm.fit()
res_a2.pearson_chi2 - res_a.pearson_chi2, res_a2.deviance - res_a.deviance, res_a2.llf - res_a.llf
Explanation: aggregated data: exposure and var_weights
Note: LR test agrees with original observations, pearson_chi2 differs and has the wrong sign.
End of explanation
res_e2.pearson_chi2, res_e.pearson_chi2, (res_e2.resid_pearson**2).sum(), (res_e.resid_pearson**2).sum()
res_e._results.resid_response.mean(), res_e.model.family.variance(res_e.mu)[:5], res_e.mu[:5]
(res_e._results.resid_response**2 / res_e.model.family.variance(res_e.mu)).sum()
res_e2._results.resid_response.mean(), res_e2.model.family.variance(res_e2.mu)[:5], res_e2.mu[:5]
(res_e2._results.resid_response**2 / res_e2.model.family.variance(res_e2.mu)).sum()
(res_e2._results.resid_response**2).sum(), (res_e._results.resid_response**2).sum()
Explanation: Investigating Pearson chi-square statistic
First, we do some sanity checks that there are no basic bugs in the computation of pearson_chi2 and resid_pearson.
End of explanation
((res_e2._results.resid_response**2 - res_e._results.resid_response**2) / res_e2.model.family.variance(res_e2.mu)).sum()
((res_a2._results.resid_response**2 - res_a._results.resid_response**2) / res_a2.model.family.variance(res_a2.mu)
* res_a2.model.var_weights).sum()
((res_f2._results.resid_response**2 - res_f._results.resid_response**2) / res_f2.model.family.variance(res_f2.mu)
* res_f2.model.freq_weights).sum()
((res_o2._results.resid_response**2 - res_o._results.resid_response**2) / res_o2.model.family.variance(res_o2.mu)).sum()
Explanation: One possible reason for the incorrect sign is that we are subtracting quadratic terms that are divided by different denominators. In some related cases, the recommendation in the literature is to use a common denominator. We can compare pearson chi-squared statistic using the same variance assumption in the full and reduced model.
In this case we obtain the same pearson chi2 scaled difference between reduced and full model across all versions. (Issue #3616 is intended to track this further.)
End of explanation
np.exp(res_e2.model.exposure)[:5], np.asarray(df_a['affairs_count'])[:5]
res_e2.resid_pearson.sum() - res_e.resid_pearson.sum()
res_e2.mu[:5]
res_a2.pearson_chi2, res_a.pearson_chi2, res_a2.resid_pearson.sum(), res_a.resid_pearson.sum()
((res_a2._results.resid_response**2) / res_a2.model.family.variance(res_a2.mu) * res_a2.model.var_weights).sum()
((res_a._results.resid_response**2) / res_a.model.family.variance(res_a.mu) * res_a.model.var_weights).sum()
((res_a._results.resid_response**2) / res_a.model.family.variance(res_a2.mu) * res_a.model.var_weights).sum()
res_e.model.endog[:5], res_e2.model.endog[:5]
res_a.model.endog[:5], res_a2.model.endog[:5]
res_a2.model.endog[:5] * np.exp(res_e2.model.exposure)[:5]
res_a2.model.endog[:5] * res_a2.model.var_weights[:5]
from scipy import stats
stats.chi2.sf(27.19530754604785, 1), stats.chi2.sf(29.083798806764687, 1)
res_o.pvalues
print(res_e2.summary())
print(res_e.summary())
print(res_f2.summary())
print(res_f.summary())
Explanation: Remainder
The remainder of the notebook just contains some additional checks and can be ignored.
End of explanation |
15,332 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python: the deep and the shallow of it
Objects in Python come in two kinds: mutable and immutable. Immutable objects include int, float, long, str, tuple, etc.; mutable objects include list, set and dict. In Python, assignment (=) only ever does two things:
create an object (holding some value);
point (bind) the variable name to that object.
This is similar to pointers in C, except that it is more flexible: a Python variable can be re-pointed to any other object (of any type) at any time, and other variables can point to the same object as well. If that object is mutable, a change made through one referencing variable is visible through the others:
Step1: If this is not what you intended (and in practice you rarely want it), it can lead to surprising bugs, especially when passing arguments to functions. The simplest way around it is not to point the new variable at the existing object, but to create a new copy and assign that to the new variable. Several syntaxes achieve this:
Step2: deep vs shallow
The copy examples above are simple and involve no nesting. What happens when the mutable object itself contains other mutable objects?
Step3: The copy methods shown above are shallow copies: unlike plain assignment they do create a new object, but nested objects are still shared by reference. To go one step further you need a deep copy, provided by the standard library module copy: | Python Code:
lst = [1, 2, 3]
s = lst
s.pop()
print(lst)
d = {'a': 0}
e = d
e['b'] = 1
print(d)
Explanation: Python: the deep and the shallow of it
Objects in Python come in two kinds: mutable and immutable. Immutable objects include int, float, long, str, tuple, etc.; mutable objects include list, set and dict. In Python, assignment (=) only ever does two things:
create an object (holding some value);
point (bind) the variable name to that object.
This is similar to pointers in C, except that it is more flexible: a Python variable can be re-pointed to any other object (of any type) at any time, and other variables can point to the same object as well. If that object is mutable, a change made through one referencing variable is visible through the others:
End of explanation
lst = [1,2,3]
llst = [lst,
lst[:],
lst.copy(),
[*lst]] # invalid in 2.7
for i, v in enumerate(llst):
v.append("#{}".format(i))
print(lst)
d = {"a": 0}
dd = [d,
d.copy(),
{**d}] # invalid in 2.7
for i, v in enumerate(dd):
v['dd'] = "#{}".format(i)
print(d)
Explanation: If this is not what you intended (and in practice you rarely want it), it can lead to surprising bugs, especially when passing arguments to functions. The simplest way around it is not to point the new variable at the existing object, but to create a new copy and assign that to the new variable. Several syntaxes achieve this:
End of explanation
lst = [0, 1, [2, 3]]
llst = [lst,
lst[:],
lst.copy(),
[*lst]]
for i, v in enumerate(llst):
v[2].append("#{}".format(i))
print(lst)
d = {"a": {"b": [0]}}
dd = [d,
d.copy(),
{**d}]
for i, v in enumerate(dd):
v['a']['b'].append("#{}".format(i))
print(d)
Explanation: deep vs shallow
The copy examples above are simple and involve no nesting. What happens when the mutable object itself contains other mutable objects?
End of explanation
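A quick way to see the sharing (a small sketch added for illustration) is to compare object identities: a shallow copy creates a new outer container but keeps references to the same nested objects.
lst = [0, 1, [2, 3]]
shallow = lst[:]
print(shallow is lst)        # False: the outer list is a new object
print(shallow[2] is lst[2])  # True: the nested list is still shared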
from copy import deepcopy
lst = [0, 1, [2, 3]]
lst2 = deepcopy(lst)
lst2[2].append(4)
print(lst2)
print(lst)
d = {"a": {"b": [0]}}
d2 = deepcopy(d)
d2["a"]["b"].append(1)
print(d2)
print(d)
Explanation: The copy methods shown above are shallow copies: unlike plain assignment they do create a new object, but nested objects are still shared by reference. To go one step further you need a deep copy, provided by the standard library module copy:
End of explanation |
15,333 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TF Lattice Canned Estimator
Step2: Importing the required packages
Step3: Downloading the UCI Statlog (Heart) dataset
Step4: Setting the default values used for training in this guide
Step5: Feature columns
As with any other TF estimator, data usually needs to be passed to the estimator through an input_fn and parsed using FeatureColumns.
Step6: TFL canned estimators use the type of the feature column to decide which type of calibration layer to apply: a tfl.layers.PWLCalibration layer is used for numeric feature columns and a tfl.layers.CategoricalCalibration layer for categorical feature columns.
Note that categorical feature columns are not wrapped by an embedding feature column; they are fed directly into the estimator.
Creating the input_fn
As with any other estimator, you can use an input_fn to feed data to the model for training and evaluation. TFL estimators can automatically calculate quantiles of the features and use them as input keypoints for the PWL calibration layer. To do so, you need to pass a feature_analysis_input_fn, which is similar to the training input_fn but with a single epoch or a subsample of the data, as sketched below.
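A minimal sketch of such input functions, using the names (train_x, train_y, BATCH_SIZE, NUM_EPOCHS) defined later in this notebook's code; the exact variant used by the notebook may differ:
train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
    x=train_x, y=train_y, shuffle=False,
    batch_size=BATCH_SIZE, num_epochs=NUM_EPOCHS, num_threads=1)
# Same data but a single epoch, used only for quantile/keypoint analysis
feature_analysis_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
    x=train_x, y=train_y, shuffle=False,
    batch_size=BATCH_SIZE, num_epochs=1, num_threads=1)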
Step7: Feature configs
Feature calibration and per-feature configurations are set using tfl.configs.FeatureConfig. Feature configurations include monotonicity constraints, per-feature regularization (see tfl.configs.RegularizerConfig) and lattice sizes for lattice models. A minimal illustration follows this paragraph.
If no configuration is defined for an input feature, the default configuration from tfl.configs.FeatureConfig is used.
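For illustration only, a minimal per-feature configuration might look like the sketch below; the keyword names are assumptions to be checked against the tfl.configs documentation, and the full configuration appears later in this notebook's code.
feature_configs = [
    tfl.configs.FeatureConfig(
        name='age',
        lattice_size=3,
        monotonicity='increasing',
        pwl_calibration_num_keypoints=16,
    ),
    tfl.configs.FeatureConfig(
        name='sex',
        num_buckets=2,
    ),
]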
Step8: Calibrated linear model
To construct a TFL canned estimator, build a model configuration from tfl.configs. A calibrated linear model is constructed using tfl.configs.CalibratedLinearConfig: it applies piecewise-linear and categorical calibration to the input features, followed by a linear combination and an optional output piecewise-linear calibration. When output calibration is used, or when output bounds are specified, the linear layer applies a weighted average to the calibrated inputs.
This example creates a calibrated linear model on the first 5 features, and uses tfl.visualization to plot the model graph with the calibrator plots.
Step9: Calibrated lattice model
A calibrated lattice model is constructed using tfl.configs.CalibratedLatticeConfig: it applies piecewise-linear and categorical calibration to the input features, followed by a lattice model and an optional output piecewise-linear calibration.
This example creates a calibrated lattice model on the first 5 features.
Step10: Calibrated lattice ensemble
When the number of features is large, you can use an ensemble model, which creates several smaller lattices for subsets of the features and averages their outputs instead of creating a single huge lattice. Ensemble lattice models are constructed using tfl.configs.CalibratedLatticeEnsembleConfig: a calibrated lattice ensemble model applies piecewise-linear and categorical calibration to the input features, followed by an ensemble of lattice models and an optional output piecewise-linear calibration.
Random lattice ensemble
The following model config uses a random subset of features for each lattice.
Step11: RTL layer random lattice ensemble
The following model config uses a tfl.layers.RTL layer that uses a random subset of features for each lattice. tfl.layers.RTL only supports monotonicity constraints, requires the same lattice size for all features, and does not allow per-feature regularization. Using a tfl.layers.RTL layer lets you scale to much larger ensembles than separate tfl.layers.Lattice instances.
Step12: Crystals lattice ensemble
TFL also provides a heuristic feature-arrangement algorithm called Crystals. The Crystals algorithm first trains a prefitting model that estimates pairwise feature interactions, and then arranges the final ensemble so that features with more non-linear interactions end up in the same lattices.
For Crystals models you also need to provide a prefitting_input_fn, used to train the prefitting model as described above. The prefitting model does not need to be fully trained, so a few epochs are enough.
Step13: You can then create a Crystals model by setting lattice='crystals' in the model config.
Step14: You can plot the feature calibrators with more detail using the tfl.visualization module. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install tensorflow-lattice
Explanation: TF Lattice Canned Estimator
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/canned_estimators"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/canned_estimators.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/canned_estimators.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/lattice/tutorials/canned_estimators.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
Canned estimators are quick and easy ways to train TFL models for typical use cases. This guide outlines the steps needed to create a TFL canned estimator.
Setup
Installing the TF Lattice package
End of explanation
import tensorflow as tf
import copy
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
from tensorflow import feature_column as fc
logging.disable(sys.maxsize)
Explanation: Importing required packages
End of explanation
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv')
df = pd.read_csv(csv_file)
target = df.pop('target')
train_size = int(len(df) * 0.8)
train_x = df[:train_size]
train_y = target[:train_size]
test_x = df[train_size:]
test_y = target[train_size:]
df.head()
Explanation: Downloading the UCI Statlog (Heart) dataset
End of explanation
LEARNING_RATE = 0.01
BATCH_SIZE = 128
NUM_EPOCHS = 500
PREFITTING_NUM_EPOCHS = 10
Explanation: Setting the default values used for training in this guide
End of explanation
# Feature columns.
# - age
# - sex
# - cp chest pain type (4 values)
# - trestbps resting blood pressure
# - chol serum cholestoral in mg/dl
# - fbs fasting blood sugar > 120 mg/dl
# - restecg resting electrocardiographic results (values 0,1,2)
# - thalach maximum heart rate achieved
# - exang exercise induced angina
# - oldpeak ST depression induced by exercise relative to rest
# - slope the slope of the peak exercise ST segment
# - ca number of major vessels (0-3) colored by flourosopy
# - thal 3 = normal; 6 = fixed defect; 7 = reversable defect
feature_columns = [
fc.numeric_column('age', default_value=-1),
fc.categorical_column_with_vocabulary_list('sex', [0, 1]),
fc.numeric_column('cp'),
fc.numeric_column('trestbps', default_value=-1),
fc.numeric_column('chol'),
fc.categorical_column_with_vocabulary_list('fbs', [0, 1]),
fc.categorical_column_with_vocabulary_list('restecg', [0, 1, 2]),
fc.numeric_column('thalach'),
fc.categorical_column_with_vocabulary_list('exang', [0, 1]),
fc.numeric_column('oldpeak'),
fc.categorical_column_with_vocabulary_list('slope', [0, 1, 2]),
fc.numeric_column('ca'),
fc.categorical_column_with_vocabulary_list(
'thal', ['normal', 'fixed', 'reversible']),
]
Explanation: Feature columns
As with any other TF estimator, data needs to be passed to the estimator, typically via an input_fn, and parsed using FeatureColumns.
End of explanation
train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=train_x,
y=train_y,
shuffle=False,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
num_threads=1)
# feature_analysis_input_fn is used to collect statistics about the input.
feature_analysis_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=train_x,
y=train_y,
shuffle=False,
batch_size=BATCH_SIZE,
# Note that we only need one pass over the data.
num_epochs=1,
num_threads=1)
test_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=test_x,
y=test_y,
shuffle=False,
batch_size=BATCH_SIZE,
num_epochs=1,
num_threads=1)
# Serving input fn is used to create saved models.
serving_input_fn = (
tf.estimator.export.build_parsing_serving_input_receiver_fn(
feature_spec=fc.make_parse_example_spec(feature_columns)))
Explanation: Canned TFL estimators use the type of the feature column to decide what type of calibration layer to use: a tfl.layers.PWLCalibration layer is used for numeric feature columns and a tfl.layers.CategoricalCalibration layer is used for categorical feature columns.
Note that categorical feature columns are not wrapped by an embedding feature column; they are fed directly into the estimator.
Creating input_fn
As with any other estimator, you can use an input_fn to feed data to the model for training and evaluation. TFL estimators can automatically calculate quantiles of the features and use them as input keypoints for the PWL calibration layer. To do so, you need to pass a feature_analysis_input_fn, which is similar to the training input_fn but uses a single epoch or a subsample of the data.
End of explanation
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
tfl.configs.FeatureConfig(
name='age',
lattice_size=3,
# By default, input keypoints of pwl are quantiles of the feature.
pwl_calibration_num_keypoints=5,
monotonicity='increasing',
pwl_calibration_clip_max=100,
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_wrinkle', l2=0.1),
],
),
tfl.configs.FeatureConfig(
name='cp',
pwl_calibration_num_keypoints=4,
# Keypoints can be uniformly spaced.
pwl_calibration_input_keypoints='uniform',
monotonicity='increasing',
),
tfl.configs.FeatureConfig(
name='chol',
# Explicit input keypoint initialization.
pwl_calibration_input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
monotonicity='increasing',
# Calibration can be forced to span the full output range by clamping.
pwl_calibration_clamp_min=True,
pwl_calibration_clamp_max=True,
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),
],
),
tfl.configs.FeatureConfig(
name='fbs',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
),
tfl.configs.FeatureConfig(
name='trestbps',
pwl_calibration_num_keypoints=5,
monotonicity='decreasing',
),
tfl.configs.FeatureConfig(
name='thalach',
pwl_calibration_num_keypoints=5,
monotonicity='decreasing',
),
tfl.configs.FeatureConfig(
name='restecg',
# Partial monotonicity: output(0) <= output(1), output(0) <= output(2)
monotonicity=[(0, 1), (0, 2)],
),
tfl.configs.FeatureConfig(
name='exang',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
),
tfl.configs.FeatureConfig(
name='oldpeak',
pwl_calibration_num_keypoints=5,
monotonicity='increasing',
),
tfl.configs.FeatureConfig(
name='slope',
# Partial monotonicity: output(0) <= output(1), output(1) <= output(2)
monotonicity=[(0, 1), (1, 2)],
),
tfl.configs.FeatureConfig(
name='ca',
pwl_calibration_num_keypoints=4,
monotonicity='increasing',
),
tfl.configs.FeatureConfig(
name='thal',
# Partial monotonicity:
# output(normal) <= output(fixed)
# output(normal) <= output(reversible)
monotonicity=[('normal', 'fixed'), ('normal', 'reversible')],
),
]
Explanation: Feature configs
Feature calibration and per-feature configurations are set using tfl.configs.FeatureConfig. Feature configurations include monotonicity constraints, per-feature regularization (see tfl.configs.RegularizerConfig), and lattice sizes for lattice models.
If no configuration is defined for an input feature, the default configuration from tfl.config.FeatureConfig is used.
End of explanation
# Model config defines the model structure for the estimator.
model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=feature_configs,
use_bias=True,
output_calibration=True,
regularizer_configs=[
# Regularizer for the output calibrator.
tfl.configs.RegularizerConfig(name='output_calib_hessian', l2=1e-4),
])
# A CannedClassifier is constructed from the given model config.
estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns[:5],
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42))
estimator.train(input_fn=train_input_fn)
results = estimator.evaluate(input_fn=test_input_fn)
print('Calibrated linear test AUC: {}'.format(results['auc']))
saved_model_path = estimator.export_saved_model(estimator.model_dir,
serving_input_fn)
model_graph = tfl.estimators.get_model_graph(saved_model_path)
tfl.visualization.draw_model_graph(model_graph)
Explanation: Calibrated linear model
To construct a canned TFL estimator, build a model configuration from tfl.configs. A calibrated linear model is constructed using tfl.configs.CalibratedLinearConfig. It applies piecewise-linear and categorical calibration to the input features, followed by a linear combination and an optional output piecewise-linear calibration. When output calibration is used or output bounds are specified, the linear layer applies weighted averaging to the calibrated inputs.
This example creates a calibrated linear model on the first 5 features. We use tfl.visualization to plot the model graph with the calibrator plots.
End of explanation
# This is calibrated lattice model: Inputs are calibrated, then combined
# non-linearly using a lattice layer.
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=feature_configs,
regularizer_configs=[
# Torsion regularizer applied to the lattice to make it more linear.
tfl.configs.RegularizerConfig(name='torsion', l2=1e-4),
# Globally defined calibration regularizer is applied to all features.
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),
])
# A CannedClassifier is constructed from the given model config.
estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns[:5],
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42))
estimator.train(input_fn=train_input_fn)
results = estimator.evaluate(input_fn=test_input_fn)
print('Calibrated lattice test AUC: {}'.format(results['auc']))
saved_model_path = estimator.export_saved_model(estimator.model_dir,
serving_input_fn)
model_graph = tfl.estimators.get_model_graph(saved_model_path)
tfl.visualization.draw_model_graph(model_graph)
Explanation: Calibrated lattice model
A calibrated lattice model is constructed using tfl.configs.CalibratedLatticeConfig. It applies piecewise-linear and categorical calibration to the input features, followed by a lattice model and an optional output piecewise-linear calibration.
This example creates a calibrated lattice model on the first 5 features.
End of explanation
# This is random lattice ensemble model with separate calibration:
# model output is the average output of separately calibrated lattices.
model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
num_lattices=5,
lattice_rank=3)
# A CannedClassifier is constructed from the given model config.
estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42))
estimator.train(input_fn=train_input_fn)
results = estimator.evaluate(input_fn=test_input_fn)
print('Random ensemble test AUC: {}'.format(results['auc']))
saved_model_path = estimator.export_saved_model(estimator.model_dir,
serving_input_fn)
model_graph = tfl.estimators.get_model_graph(saved_model_path)
tfl.visualization.draw_model_graph(model_graph, calibrator_dpi=15)
Explanation: Calibrated lattice ensemble
When the number of features is large, you can use an ensemble model, which creates multiple smaller lattices for subsets of the features and averages their outputs instead of creating a single huge lattice. Ensemble lattice models are constructed using tfl.configs.CalibratedLatticeEnsembleConfig. A calibrated lattice ensemble model applies piecewise-linear and categorical calibration to the input features, followed by an ensemble of lattice models and an optional output piecewise-linear calibration.
Random lattice ensemble
The following model config uses a random subset of features for each lattice.
End of explanation
# Make sure our feature configs have the same lattice size, no per-feature
# regularization, and only monotonicity constraints.
rtl_layer_feature_configs = copy.deepcopy(feature_configs)
for feature_config in rtl_layer_feature_configs:
feature_config.lattice_size = 2
feature_config.unimodality = 'none'
feature_config.reflects_trust_in = None
feature_config.dominates = None
feature_config.regularizer_configs = None
# This is RTL layer ensemble model with separate calibration:
# model output is the average output of separately calibrated lattices.
model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
lattices='rtl_layer',
feature_configs=rtl_layer_feature_configs,
num_lattices=5,
lattice_rank=3)
# A CannedClassifier is constructed from the given model config.
estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42))
estimator.train(input_fn=train_input_fn)
results = estimator.evaluate(input_fn=test_input_fn)
print('Random ensemble test AUC: {}'.format(results['auc']))
saved_model_path = estimator.export_saved_model(estimator.model_dir,
serving_input_fn)
model_graph = tfl.estimators.get_model_graph(saved_model_path)
tfl.visualization.draw_model_graph(model_graph, calibrator_dpi=15)
Explanation: RTL layer random lattice ensemble
The following model config uses a tfl.layers.RTL layer that uses a random subset of features for each lattice. tfl.layers.RTL only supports monotonicity constraints, requires the same lattice size for all features, and does not allow per-feature regularization. Note that using a tfl.layers.RTL layer lets you scale to much larger ensembles than using separate tfl.layers.Lattice instances.
End of explanation
prefitting_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=train_x,
y=train_y,
shuffle=False,
batch_size=BATCH_SIZE,
num_epochs=PREFITTING_NUM_EPOCHS,
num_threads=1)
Explanation: Crystals lattice ensemble
TFL also provides a heuristic feature-arrangement algorithm called Crystals. The Crystals algorithm first trains a prefitting model that estimates pairwise feature interactions, then arranges the final ensemble so that features with more non-linear interactions end up in the same lattices.
For Crystals models, you also need to provide a prefitting_input_fn, which is used to train the prefitting model as described above. The prefitting model does not need to be fully trained, so a few epochs should be enough.
End of explanation
# This is Crystals ensemble model with separate calibration: model output is
# the average output of separately calibrated lattices.
model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='crystals',
num_lattices=5,
lattice_rank=3)
# A CannedClassifier is constructed from the given model config.
estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
# prefitting_input_fn is required to train the prefitting model.
prefitting_input_fn=prefitting_input_fn,
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),
prefitting_optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42))
estimator.train(input_fn=train_input_fn)
results = estimator.evaluate(input_fn=test_input_fn)
print('Crystals ensemble test AUC: {}'.format(results['auc']))
saved_model_path = estimator.export_saved_model(estimator.model_dir,
serving_input_fn)
model_graph = tfl.estimators.get_model_graph(saved_model_path)
tfl.visualization.draw_model_graph(model_graph, calibrator_dpi=15)
Explanation: You can then create the Crystals model by setting lattice='crystals' in the model config.
End of explanation
_ = tfl.visualization.plot_feature_calibrator(model_graph, "age")
_ = tfl.visualization.plot_feature_calibrator(model_graph, "restecg")
Explanation: You can plot the feature calibrators with more detail using the tfl.visualization module.
End of explanation |
15,334 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Combinatorial Explosion
"During the past century, science has developed a limited capability to design materials, but we are still too dependent on serendipity" - Eberhart and Clougherty, Looking for design in materials design (2004)
This practical explores how materials design can be approached by using the simplest of rules in order to narrow down the combinations to those that might be considered legitimate. It will demonstrate the scale of the problem, even after some chemical rules are applied.
TASKS
Section 1
Step1: <img src = "Images/Combinations_vs_Permutations.png">
2. Counting combinations
Step2: ii. Element combination counting
This first procedure simply counts how many binary combinations are possible for a given set of elements. This is a numerical (combinations) problem, as we are not considering element properties in any way for the time being.
TASK 1
Step3: iii. Ion combination counting
We now consider each known oxidation state of an element (so strictly speaking we are not dealing with 'ions'). The procedure incorporates a library of known oxidation states for each element and is this time already set up to search for ternary combinations. The code prints out the combination of elements including their oxidation states. There is also a timer so that you can see how long it takes to run the program.
TASK 1
Step4: All we seem to have done is make matters worse!
We are introducing many more species by further splitting each element in our search-space into separate ions, one for each allowed oxidation state. When we get to max_atomic_number > 20, we are including the transition metals and their many oxidation states.
iv. Charge neutrality
The previous step is necessary to incorporate our filter that viable compounds must be charge neutral overall. Scrolling through the output from above, it is easy to see that the vast majority of the combinations are not charge neutral overall. We can discard these combinations to start narrowing our search down to more 'sensible' (or at least not totally unreasonable) ones. In this cell, we will use the neutral_ratios function in smact to do this.
TASK 1
Step5: This drastically reduces the number of combinations we get out and we can even begin to see some compounds that we recognise and know exist.
v. Electronegativity
The last step is to incorporate the key chemical property of electronegativity, i.e. the propensity of an element to attract electron density to itself in a bond. This is a logical step as inspection of the output from above reveals that some combinations feature a species in a higher (more positive) oxidation state which is more electronegative than other species present.
With this in mind, we now incorporate another filter which checks that the species with higher oxidation states have lower electronegativities. The library of values used is of the widely accepted electronegativity scale as developed by Linus Pauling. The scale is based on the dissociation energies of heteronuclear diatomic molecules and their corresponding homonuclear diatomic molecules | Python Code:
from math import factorial as factorial
grid_points = 1000.0
atoms = 30.0
elements = 50.0
##########
# A. Show that assigning each of the 30 atoms as one of 50 elements is ~ 9e50 (permutations)
element_assignment = 0
print(f'Number of possible element assignments is: {element_assignment}')
# B. Show that the number of possible arrangements of 30 atoms on a grid of 10x10x10 is ~2e57 (combinations)
atom_arrangements = 0
print(f'Number of atom arrangements is: {atom_arrangements}')
# C. Finally, show that the total number of potential "materials" is ~ 2e108
total_materials = 0
print(f'Total number of "materials" is: {total_materials}')
Explanation: The Combinatorial Explosion
"During the past century, science has developed a limited capability to design materials, but we are still too dependent on serendipity" - Eberhart and Clougherty, Looking for design in materials design (2004)
This practical explores how materials design can be approached by using the simplest of rules in order to narrow down the combinations to those that might be considered legitimate. It will demonstrate the scale of the problem, even after some chemical rules are applied.
TASKS
Section 1: 1 task
Section 2 i: 2 tasks
Section 2 ii: 2 tasks
Section 2 iii: 2 tasks
Section 2 iv: 3 tasks
Section 3 & 4: information only
NOTES ON USING THE NOTEBOOK
This notebook is divided into "cells" which either contain Markdown (text, equations and images) or Python code
A cell can be "run" by selecting it and either
pressing the Run button in the toolbar above (triangle/arrow symbol)
Using Cell > Run in the menu above
Holding the Ctrl key and pressing Enter
Running Markdown cells just displays them nicely (like this text!) Running Python code cells runs the code and displays any output below.
When you run a cell and it appears to not be doing anything, if there is no number in the square brackets and instead you see In [*] it is still running!
If the output produces a lot of lines, you can minimise the output box by clicking on the white space to the left of it.
You can clear the output of a cell or all cells by going to Cell > Current output/All output > Clear.
1. Back to basics: Forget your chemistry
(From the blog of Anubhav Jain: www.hackingmaterials.com)
You have the first 50 elements of the periodic table
You also have a 10 x 10 x 10 grid
You are allowed to arrange 30 of the elements at a time in some combination in the grid to make a 'compound'
How many different arrangements (different compounds) could you make?
<img src = "Images/atomsinbox.png">
The answer is about $10^{108}$, over a googol of compounds!
TASK: Use the cell below to arrive at the conclusion above. Hints for the formula required are below the cell.
End of explanation
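# Added solution sketch for the Section 1 task above (one possible way to fill in the cell):
# A: each of the 30 atoms can be any of the 50 elements -> 50**30 permutations (~9e50)
# B: choosing which 30 of the 1000 grid points are occupied -> 1000!/(30!*970!) combinations (~2e57)
# C: the two choices are independent, so the totals multiply (~2e108)
worked_element_assignment = elements ** atoms
worked_atom_arrangements = factorial(int(grid_points)) // (
    factorial(int(atoms)) * factorial(int(grid_points - atoms)))
worked_total_materials = worked_element_assignment * float(worked_atom_arrangements)
print(f'Worked example -> assignments: {worked_element_assignment:.2e}, '
      f'arrangements: {float(worked_atom_arrangements):.2e}, '
      f'total "materials": {worked_total_materials:.2e}')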
# Imports the SMACT toolkit for later on #
import smact
# Gets element data from file and puts into a list #
with open('Counting/element_data.txt','r') as f:
data = f.readlines()
list_of_elements = []
# Specify the range of elements to include #
### EDIT BELOW ###
max_atomic_number = 10
##################
# Populates a list with the elements we are concerned with #
for line in data:
if not line.startswith('#'):
# Grab first three items from table row
symbol, name, Z = line.split()[:3]
if int(Z) > 0 and int(Z) < max_atomic_number + 1:
list_of_elements.append(symbol)
print(f'--- Considering the {len(list_of_elements)} elements '
f'from {list_of_elements[0]} to {list_of_elements[-1]} ---')
Explanation: <img src = "Images/Combinations_vs_Permutations.png">
2. Counting combinations: Remember your chemistry
We will use well-known elemental properties along with the criterion that compounds must not have an overall charge in order to sequentially apply different levels of screening and count the possible combinations:
i. Setting up the search space - Defining which elements we want to include
ii. Element combination counting - Considering combinations of elements and ignore oxidation states
iii. Ion combination counting - Considering combinations of elements in their allowed oxidation states
iv. Charge neutrality - Discarding any combinations that would not make a charge neutral compound
v. Electronegativity - Discarding any combinations which exhibit a cation which is more electronegative than an anion
i. Setting up and choosing the search-space
The code below imports the element data that we need in order to do our counting. The main variable in the cell below for this practical is the max_atomic_number which dictates how many elements to consider.
For example, when max_atomic_number = 10 the elements from H to Ne are considered in the search.
TASK 1: Change the variable max_atomic_number so that it includes elements from H to Ar
TASK 2: Get the code to print out the actual list of elements that will be considered
End of explanation
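# Added sketch for TASK 2 above: one way to print the actual list of elements considered.
print(f'Elements included in the search: {list_of_elements}')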
# Counts up possibilities and prints the output #
element_count = 0
for i, ele_a in enumerate(list_of_elements):
for j, ele_b in enumerate(list_of_elements[i+1:]):
element_count += 1
print(f'{ele_a} {ele_b}')
# Prints the total number of combinations found
print(f'Number of combinations = {element_count}')
Explanation: ii. Element combination counting
This first procedure simply counts how many binary combinations are possible for a given set of elements. This is a numerical (combinations) problem, as we are not considering element properties in any way for the time being.
TASK 1: Increase the number of elements to consider (max_atomic_number in the cell above) to see how this affects the number of combinations
TASK 2: If you can, add another for statement (e.g. for k, ele_c...) to make the cell count up ternary combinations. It is advisable to change the number of elements to include back to 10 first! Hint: The next exercise is set up for ternary counting so you could come back and do this after looking at that.
End of explanation
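# Added sketch for TASK 2 above: the same counting loop extended to ternary combinations.
# (Best run with max_atomic_number set back to ~10 first.)
ternary_element_count = 0
for i, ele_a in enumerate(list_of_elements):
    for j, ele_b in enumerate(list_of_elements[i+1:]):
        for k, ele_c in enumerate(list_of_elements[i+j+2:]):
            ternary_element_count += 1
print(f'Number of ternary element combinations = {ternary_element_count}')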
# Sets up the timer to see how long the program takes to run #
import time
start_time = time.time()
ion_count = 0
for i, ele_a in enumerate(list_of_elements):
for ox_a in smact.Element(ele_a).oxidation_states:
for j, ele_b in enumerate(list_of_elements[i+1:]):
for ox_b in smact.Element(ele_b).oxidation_states:
for k, ele_c in enumerate(list_of_elements[i+j+2:]):
for ox_c in smact.Element(ele_c).oxidation_states:
ion_count += 1
print(f'{ele_a} {ox_a} \t {ele_b} {ox_b} \t {ele_c} {ox_c}')
# Prints the total number of combinations found and the time taken to run.
print(f'Number of combinations = {ion_count}')
print(f'--- {time.time() - start_time} seconds to run ---')
Explanation: iii. Ion combination counting
We now consider each known oxidation state of an element (so strictly speaking we are not dealing with 'ions'). The procedure incorporates a library of known oxidation states for each element and is this time already set up to search for ternary combinations. The code prints out the combination of elements including their oxidation states. There is also a timer so that you can see how long it takes to run the program.
TASK 1: Reset the search space to ~10 elements, read through (feel free to ask if you don't understand any parts!) and run the code below.
TASK 2: change max_atomic_number again in the cell above and see how this affects the number of combinations. Hint: It is advisable to increase the search-space gradually and see how long the calculation takes. Big numbers mean you could be waiting a while for the calculation to run....
End of explanation
import time
from smact import neutral_ratios
start_time = time.time()
charge_neutral_count = 0
for i, ele_a in enumerate(list_of_elements):
for ox_a in smact.Element(ele_a).oxidation_states:
for j, ele_b in enumerate(list_of_elements[i+1:]):
for ox_b in smact.Element(ele_b).oxidation_states:
for k, ele_c in enumerate(list_of_elements[i+j+2:]):
for ox_c in smact.Element(ele_c).oxidation_states:
# Checks if the combination is charge neutral before printing it out! #
cn_e, cn_r = neutral_ratios([ox_a, ox_b, ox_c], threshold=1)
if cn_e:
charge_neutral_count += 1
print(f'{ele_a} \t {ele_b} \t {ele_c}')
print(f'Number of combinations = {charge_neutral_count}')
print(f'--- {time.time() - start_time} seconds to run ---')
Explanation: All we seem to have done is make matters worse!
We are introducing many more species by further splitting each element in our search-space into separate ions, one for each allowed oxidation state. When we get to max_atomic_number > 20, we are including the transition metals and their many oxidation states.
iv. Charge neutrality
The previous step is necessary to incorporate our filter that viable compounds must be charge neutral overall. Scrolling through the output from above, it is easy to see that the vast majority of the combinations are not charge neutral overall. We can discard these combinations to start narrowing our search down to more 'sensible' (or at least not totally unreasonable) ones. In this cell, we will use the neutral_ratios function in smact to do this.
TASK 1: Reset the search space to ~10 elements, read through (feel free to ask if you don't understand any parts!) and run the code below.
TASK 2: Edit the code so that it also prints out the oxidation state next to each element
TASK 3: Increase the number of elements to consider again (max_atomic_number in the cell above) and compare the output of i. and ii. with that of the below cell
End of explanation
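# Added sketch for TASK 2 above: to also show the oxidation states, the print statement
# inside the charge-neutrality loop below could be changed to something like:
#     print(f'{ele_a}{ox_a:+d} \t {ele_b}{ox_b:+d} \t {ele_c}{ox_c:+d}')
# which keeps the sign of each oxidation state in the printout.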
import time
from smact.screening import pauling_test
start_time = time.time()
pauling_count = 0
for i, ele_a in enumerate(list_of_elements):
paul_a = smact.Element(ele_a).pauling_eneg
for ox_a in smact.Element(ele_a).oxidation_states:
for j, ele_b in enumerate(list_of_elements[i+1:]):
paul_b = smact.Element(ele_b).pauling_eneg
for ox_b in smact.Element(ele_b).oxidation_states:
for k, ele_c in enumerate(list_of_elements[i+j+2:]):
paul_c = smact.Element(ele_c).pauling_eneg
for ox_c in smact.Element(ele_c).oxidation_states:
# Puts elements, oxidation states and electronegativities into lists for convenience #
elements = [ele_a, ele_b, ele_c]
oxidation_states = [ox_a, ox_b, ox_c]
pauling_electro = [paul_a, paul_b, paul_c]
# Checks if the electronegativity makes sense and if the combination is charge neutral #
electroneg_makes_sense = pauling_test(oxidation_states, pauling_electro, elements)
cn_e, cn_r = smact.neutral_ratios([ox_a, ox_b, ox_c], threshold=1)
if cn_e:
if electroneg_makes_sense:
pauling_count += 1
print(f'{ele_a}{ox_a} \t {ele_b}{ox_b} \t {ele_c}{ox_c}')
print(f'Number of combinations = {pauling_count}')
print(f'--- {time.time() - start_time} seconds to run ---')
Explanation: This drastically reduces the number of combinations we get out and we can even begin to see some compounds that we recognise and know exist.
v. Electronegativity
The last step is to incorporate the key chemical property of electronegativity, i.e. the propensity of an element to attract electron density to itself in a bond. This is a logical step as inspection of the output from above reveals that some combinations feature a species in a higher (more positive) oxidation state which is more electronegative than other species present.
With this in mind, we now incorporate another filter which checks that the species with higher oxidation states have lower electronegativities. The library of values used is of the widely accepted electronegativity scale as developed by Linus Pauling. The scale is based on the dissociation energies of heteronuclear diatomic molecules and their corresponding homonuclear diatomic molecules:
<img src = 'Images/pauling-equation.png'>
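For reference, the Pauling relation shown in the image above is commonly written as
$$\chi_A - \chi_B = (\mathrm{eV})^{-1/2}\sqrt{E_d(AB) - \tfrac{1}{2}\big[E_d(AA) + E_d(BB)\big]}$$
where $E_d$ denotes the bond dissociation energy of each diatomic molecule.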
End of explanation |
15,335 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Poppy Web Service
Starting a Poppy web service with HTTPRobotServer
Step5: http
Step6: Starting the server
Step7: The start_servers.py script creates a Poppy instance, then starts a WEB/REST server (HTTPRobotServer) and an RPC server (RobotServer) in background threads. | Python Code:
# imports and initialize virtual poppy using vrep
from pypot.vrep import from_vrep
from poppy.creatures import PoppyHumanoid
robot = PoppyHumanoid(simulator='vrep')
#import and initialize physical poppy
from poppy.creatures import PoppyHumanoid
robot = PoppyHumanoid()
from pypot.server import HTTPRobotServer
server = HTTPRobotServer(robot,'127.0.0.1',8081)
Explanation: Poppy Web Service
Starting a Poppy web service with HTTPRobotServer
End of explanation
import json
import numpy
import bottle
import logging
from bottle import response
from pypot.server.server import AbstractServer
logger = logging.getLogger(__name__)
class MyJSONEncoder(json.JSONEncoder):
JSONEncoder which tries to call a json property before using the enconding default function.
def default(self, obj):
if isinstance(obj, numpy.ndarray):
return list(obj)
return json.JSONEncoder.default(self, obj)
class EnableCors(object):
Enable CORS (Cross-Origin Resource Sharing) headers
name = 'enable_cors'
api = 2
def __init__(self,origin="*"):
self.origin = origin
def apply(self, fn, context):
def _enable_cors(*args, **kwargs):
# set CORS headers
response.headers['Access-Control-Allow-Origin'] = self.origin
response.headers['Access-Control-Allow-Methods'] = 'GET, POST, PUT, OPTIONS'
response.headers['Access-Control-Allow-Headers'] = 'Origin, Accept, Content-Type, X-Requested-With, X-CSRF-Token'
if bottle.request.method != 'OPTIONS':
# actual request; reply with the actual response
return fn(*args, **kwargs)
return _enable_cors
class HTTPRobotServer(AbstractServer):
Bottle based HTTPServer used to remote access a robot.
Please refer to the REST API for an exhaustive list of the possible routes.
def __init__(self, robot, host, port, cross_domain_origin=None):
AbstractServer.__init__(self, robot, host, port)
self.app = bottle.Bottle()
jd = lambda s: json.dumps(s, cls=MyJSONEncoder)
self.app.install(bottle.JSONPlugin(json_dumps=jd))
if(cross_domain_origin):
self.app.install(EnableCors(cross_domain_origin))
rr = self.restfull_robot
# Motors route
@self.app.get('/motor/list.json')
@self.app.get('/motor/<alias>/list.json')
def get_motor_list(alias='motors'):
return {
alias: rr.get_motors_list(alias)
}
@self.app.get('/sensor/list.json')
def get_sensor_list():
return {
'sensors': rr.get_sensors_list()
}
@self.app.get('/motor/alias/list.json')
def get_motor_alias():
return {
'alias': rr.get_motors_alias()
}
@self.app.get('/motor/<motor_name>/register/list.json')
@self.app.get('/sensor/<motor_name>/register/list.json')
def get_motor_registers(motor_name):
return {
'registers': rr.get_motor_registers_list(motor_name)
}
@self.app.get('/motor/<motor_name>/register/<register_name>')
@self.app.get('/sensor/<motor_name>/register/<register_name>')
def get_register_value(motor_name, register_name):
return {
register_name: rr.get_motor_register_value(motor_name, register_name)
}
@self.app.post('/motor/<motor_name>/register/<register_name>/value.json')
@self.app.post('/sensor/<motor_name>/register/<register_name>/value.json')
def set_register_value(motor_name, register_name):
rr.set_motor_register_value(motor_name, register_name,
bottle.request.json)
return {}
# Sensors route
# Primitives route
@self.app.get('/primitive/list.json')
def get_primitives_list(self):
return {
'primitives': rr.get_primitives_list()
}
@self.app.get('/primitive/running/list.json')
def get_running_primitives_list(self):
return {
'running_primitives': rr.get_running_primitives_list()
}
@self.app.get('/primitive/<prim>/start.json')
def start_primitive(self, prim):
rr.start_primitive(prim)
@self.app.get('/primitive/<prim>/stop.json')
def stop_primitive(self, prim):
rr.stop_primitive(prim)
@self.app.get('/primitive/<prim>/pause.json')
def pause_primitive(self, prim):
rr.pause_primitive(prim)
@self.app.get('/primitive/<prim>/resume.json')
def resume_primitive(self, prim):
rr.resume_primitive(prim)
@self.app.get('/primitive/<prim>/property/list.json')
def get_primitive_properties_list(self, prim):
return {
'property': rr.get_primitive_properties_list(prim)
}
@self.app.get('/primitive/<prim>/property/<prop>')
def get_primitive_property(self, prim, prop):
res = rr.get_primitive_property(prim, prop)
return {
'{}.{}'.format(prim, prop): res
}
@self.app.post('/primitive/<prim>/property/<prop>/value.json')
def set_primitive_property(self, prim, prop):
rr.set_primitive_property(prim, prop,
bottle.request.json)
@self.app.get('/primitive/<prim>/method/list.json')
def get_primitive_methods_list(self, prim):
return {
'methods': rr.get_primitive_methods_list(self, prim)
}
@self.app.post('/primitive/<prim>/method/<meth>/args.json')
def call_primitive_method(self, prim, meth):
res = rr.call_primitive_method(prim, meth,
bottle.request.json)
return {
'{}:{}'.format(prim, meth): res
}
def run(self, quiet=False, server='tornado'):
Start the bottle server, run forever.
bottle.run(self.app,
host=self.host, port=self.port,
quiet=quiet,
server=server)
server = HTTPRobotServer(robot,'127.0.0.1',8082,cross_domain_origin='*')
Explanation: http://127.0.0.1:8081/motor/list.json
A modified version of HTTPRobotServer that enables CORS headers, so the web service can be called via AJAX from wherever you want:
End of explanation
try:
server.run()
except RuntimeError as e:
print(e)
Explanation: Starting the server:
In an IPython notebook, starting the server raises an "IOLoop is already running" error, probably because Tornado is already running to serve the notebook itself. The server is still started; to stop it, simply restart the kernel. This error does not occur when running from a script or from IPython in a console.
End of explanation
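# Added sketch (not in the original notebook): one way to keep the notebook responsive is
# to run the Bottle server in a background daemon thread and then query the REST routes
# defined above. This assumes the `requests` package is installed and that port 8082
# (configured above) is free.
import time
import threading
import requests
server_thread = threading.Thread(target=server.run, daemon=True)
server_thread.start()
time.sleep(2)  # give the server a moment to come up
print(requests.get('http://127.0.0.1:8082/motor/list.json').json())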
robot.stand_position.start()
robot.compliant = True
Explanation: The start_servers.py script creates a Poppy instance, then starts a WEB/REST server (HTTPRobotServer) and an RPC server (RobotServer) in background threads.
End of explanation |
15,336 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Benchmark NumPyro on a large dataset
This notebook uses numpyro and replicates the experiments in reference [1], which evaluates the performance of NUTS on various frameworks. The benchmark is run with CUDA 10.1 on an NVIDIA RTX 2070.
Step1: We do the preprocessing steps as in the source code of reference [1]
Step2: Now, we construct the model
Step3: Benchmark HMC
Step4: In CPU, we get avg. time for each step | Python Code:
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import time
import numpy as np
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.examples.datasets import COVTYPE, load_dataset
from numpyro.infer import HMC, MCMC, NUTS
assert numpyro.__version__.startswith("0.9.2")
# NB: replace gpu by cpu to run this notebook in cpu
numpyro.set_platform("gpu")
Explanation: Benchmark NumPyro on a large dataset
This notebook uses numpyro and replicates the experiments in reference [1], which evaluates the performance of NUTS on various frameworks. The benchmark is run with CUDA 10.1 on an NVIDIA RTX 2070.
End of explanation
_, fetch = load_dataset(COVTYPE, shuffle=False)
features, labels = fetch()
# normalize features and add intercept
features = (features - features.mean(0)) / features.std(0)
features = jnp.hstack([features, jnp.ones((features.shape[0], 1))])
# make binary feature
_, counts = np.unique(labels, return_counts=True)
specific_category = jnp.argmax(counts)
labels = labels == specific_category
N, dim = features.shape
print("Data shape:", features.shape)
print(
"Label distribution: {} has label 1, {} has label 0".format(
labels.sum(), N - labels.sum()
)
)
Explanation: We do the preprocessing steps as in the source code of reference [1]:
End of explanation
def model(data, labels):
coefs = numpyro.sample("coefs", dist.Normal(jnp.zeros(dim), jnp.ones(dim)))
logits = jnp.dot(data, coefs)
return numpyro.sample("obs", dist.Bernoulli(logits=logits), obs=labels)
Explanation: Now, we construct the model:
End of explanation
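# Added sanity-check sketch (not part of the original benchmark): draw a few prior
# predictive samples to confirm the model runs end to end before timing it.
from numpyro.infer import Predictive
prior_predictive = Predictive(model, num_samples=5)
prior_samples = prior_predictive(random.PRNGKey(0), features[:100], None)
print(prior_samples["coefs"].shape, prior_samples["obs"].shape)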
step_size = jnp.sqrt(0.5 / N)
kernel = HMC(
model,
step_size=step_size,
trajectory_length=(10 * step_size),
adapt_step_size=False,
)
mcmc = MCMC(kernel, num_warmup=500, num_samples=500, progress_bar=False)
mcmc.warmup(random.PRNGKey(2019), features, labels, extra_fields=("num_steps",))
mcmc.get_extra_fields()["num_steps"].sum().copy()
tic = time.time()
mcmc.run(random.PRNGKey(2020), features, labels, extra_fields=["num_steps"])
num_leapfrogs = mcmc.get_extra_fields()["num_steps"].sum().copy()
toc = time.time()
print("number of leapfrog steps:", num_leapfrogs)
print("avg. time for each step :", (toc - tic) / num_leapfrogs)
mcmc.print_summary()
Explanation: Benchmark HMC
End of explanation
mcmc = MCMC(NUTS(model), num_warmup=50, num_samples=50, progress_bar=False)
mcmc.warmup(random.PRNGKey(2019), features, labels, extra_fields=("num_steps",))
mcmc.get_extra_fields()["num_steps"].sum().copy()
tic = time.time()
mcmc.run(random.PRNGKey(2020), features, labels, extra_fields=["num_steps"])
num_leapfrogs = mcmc.get_extra_fields()["num_steps"].sum().copy()
toc = time.time()
print("number of leapfrog steps:", num_leapfrogs)
print("avg. time for each step :", (toc - tic) / num_leapfrogs)
mcmc.print_summary()
Explanation: On CPU, we get an avg. time for each step of 0.02782863507270813 seconds.
Benchmark NUTS
End of explanation |
15,337 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Epochs data structure
Step1:
Step2: As we saw in the tut-events-vs-annotations tutorial, we can extract an
events array from
Step3: <div class="alert alert-info"><h4>Note</h4><p>We could also have loaded the events from file, using
Step4: You'll see from the output that
Step5: Notice that the Event IDs are in quotes; since we didn't provide an event
dictionary, the
Step6: This time let's pass preload=True and provide an event dictionary; our
provided dictionary will get stored as the event_id attribute and will
make referencing events and pooling across event types easier
Step7: Notice that the output now mentions "1 bad epoch dropped". In the tutorial
section tut-reject-epochs-section we saw how you can specify channel
amplitude criteria for rejecting epochs, but here we haven't specified any
such criteria. In this case, it turns out that the last event was too close
the end of the (cropped) raw file to accommodate our requested tmax of
0.7 seconds, so the final epoch was dropped because it was too short. Here
are the drop_log entries for the last 4 epochs (empty lists indicate
epochs that were not dropped)
Step8: <div class="alert alert-info"><h4>Note</h4><p>If you forget to provide the event dictionary to the
Step9: Notice that the individual epochs are sequentially numbered along the bottom
axis; the event ID associated with the epoch is marked on the top axis;
epochs are separated by vertical dashed lines; and a vertical solid green
line marks time=0 for each epoch (i.e., in this case, the stimulus onset
time for each trial). Epoch plots are interactive (similar to
Step10: We can also pool across conditions easily, thanks to how MNE-Python handles
the / character in epoch labels (using what is sometimes called
"tag-based indexing")
Step11: You can also pool conditions by passing multiple tags as a list. Note that
MNE-Python will not complain if you ask for tags not present in the object,
as long as it can find some match
Step12: However, if no match is found, an error is returned
Step13: Selecting epochs by index
Step14: Selecting, dropping, and reordering channels
You can use the
Step15: Changing channel name and type
You can change the name or type of a channel using
Step16: Selection in the time domain
To change the temporal extent of the
Step17: Cropping removed part of the baseline. When printing the
cropped
Step18: However, if you wanted to expand the time domain of an
Step19: Note that although time shifting respects the sampling frequency (the spacing
between samples), it does not enforce the assumption that there is a sample
occurring at exactly time=0.
Extracting data in other forms
The
Step20: Note that if your analysis requires repeatedly extracting single epochs from
an
Step21: See the tut-epochs-dataframe tutorial for many more examples of the
Step22: The MNE-Python naming convention for epochs files is that the file basename
(the part before the .fif or .fif.gz extension) should end with
-epo or _epo, and a warning will be issued if the filename you
provide does not adhere to that convention.
As a final note, be aware that the class of the epochs object is different
when epochs are loaded from disk rather than generated from a
Step23: In almost all cases this will not require changing anything about your code.
However, if you need to do type checking on epochs objects, you can test
against the base class that these classes are derived from
Step24: Iterating over Epochs
Iterating over an
Step25: If you want to iterate over | Python Code:
import os
import mne
Explanation: The Epochs data structure: discontinuous data
This tutorial covers the basics of creating and working with :term:epoched
<epochs> data. It introduces the :class:~mne.Epochs data structure in
detail, including how to load, query, subselect, export, and plot data from an
:class:~mne.Epochs object. For more information about visualizing
:class:~mne.Epochs objects, see tut-visualize-epochs. For info on
creating an :class:~mne.Epochs object from (possibly simulated) data in a
:class:NumPy array <numpy.ndarray>, see tut_creating_data_structures.
:depth: 2
As usual we'll start by importing the modules we need:
End of explanation
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False).crop(tmax=60)
Explanation: :class:~mne.Epochs objects are a data structure for representing and
analyzing equal-duration chunks of the EEG/MEG signal. :class:~mne.Epochs
are most often used to represent data that is time-locked to repeated
experimental events (such as stimulus onsets or subject button presses), but
can also be used for storing sequential or overlapping frames of a continuous
signal (e.g., for analysis of resting-state activity; see
fixed-length-events). Inside an :class:~mne.Epochs object, the data
are stored in an :class:array <numpy.ndarray> of shape (n_epochs,
n_channels, n_times).
:class:~mne.Epochs objects have many similarities with :class:~mne.io.Raw
objects, including:
They can be loaded from and saved to disk in .fif format, and their
data can be exported to a :class:NumPy array <numpy.ndarray> through the
:meth:~mne.Epochs.get_data method or to a :class:Pandas DataFrame
<pandas.DataFrame> through the :meth:~mne.Epochs.to_data_frame method.
Both :class:~mne.Epochs and :class:~mne.io.Raw objects support channel
selection by index or name, including :meth:~mne.Epochs.pick,
:meth:~mne.Epochs.pick_channels and :meth:~mne.Epochs.pick_types
methods.
:term:SSP projector <projector> manipulation is possible through
:meth:~mne.Epochs.add_proj, :meth:~mne.Epochs.del_proj, and
:meth:~mne.Epochs.plot_projs_topomap methods.
Both :class:~mne.Epochs and :class:~mne.io.Raw objects have
:meth:~mne.Epochs.copy, :meth:~mne.Epochs.crop,
:meth:~mne.Epochs.time_as_index, :meth:~mne.Epochs.filter, and
:meth:~mne.Epochs.resample methods.
Both :class:~mne.Epochs and :class:~mne.io.Raw objects have
:attr:~mne.Epochs.times, :attr:~mne.Epochs.ch_names,
:attr:~mne.Epochs.proj, and :class:info <mne.Info> attributes.
Both :class:~mne.Epochs and :class:~mne.io.Raw objects have built-in
plotting methods :meth:~mne.Epochs.plot, :meth:~mne.Epochs.plot_psd,
and :meth:~mne.Epochs.plot_psd_topomap.
Creating Epoched data from a Raw object
The example dataset we've been using thus far doesn't include pre-epoched
data, so in this section we'll load the continuous data and create epochs
based on the events recorded in the :class:~mne.io.Raw object's STIM
channels. As we often do in these tutorials, we'll :meth:~mne.io.Raw.crop
the :class:~mne.io.Raw data to save memory:
End of explanation
events = mne.find_events(raw, stim_channel='STI 014')
Explanation: As we saw in the tut-events-vs-annotations tutorial, we can extract an
events array from :class:~mne.io.Raw objects using :func:mne.find_events:
End of explanation
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7)
Explanation: <div class="alert alert-info"><h4>Note</h4><p>We could also have loaded the events from file, using
:func:`mne.read_events`::
sample_data_events_file = os.path.join(sample_data_folder,
'MEG', 'sample',
'sample_audvis_raw-eve.fif')
events_from_file = mne.read_events(sample_data_events_file)
See `tut-section-events-io` for more details.</p></div>
The :class:~mne.io.Raw object and the events array are the bare minimum
needed to create an :class:~mne.Epochs object, which we create with the
:class:mne.Epochs class constructor. However, you will almost surely want
to change some of the other default parameters. Here we'll change tmin
and tmax (the time relative to each event at which to start and end each
epoch). Note also that the :class:~mne.Epochs constructor accepts
parameters reject and flat for rejecting individual epochs based on
signal amplitude. See the tut-reject-epochs-section section for
examples.
End of explanation
print(epochs)
Explanation: You'll see from the output that:
all 320 events were used to create epochs
baseline correction was automatically applied (by default, baseline is
defined as the time span from tmin to 0, but can be customized with
the baseline parameter)
no additional metadata was provided (see tut-epochs-metadata for
details)
the projection operators present in the :class:~mne.io.Raw file were
copied over to the :class:~mne.Epochs object
If we print the :class:~mne.Epochs object, we'll also see a note that the
epochs are not copied into memory by default, and a count of the number of
epochs created for each integer Event ID.
End of explanation
print(epochs.event_id)
Explanation: Notice that the Event IDs are in quotes; since we didn't provide an event
dictionary, the :class:mne.Epochs constructor created one automatically and
used the string representation of the integer Event IDs as the dictionary
keys. This is more clear when viewing the event_id attribute:
End of explanation
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'face': 5, 'buttonpress': 32}
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict,
preload=True)
print(epochs.event_id)
del raw # we're done with raw, free up some memory
Explanation: This time let's pass preload=True and provide an event dictionary; our
provided dictionary will get stored as the event_id attribute and will
make referencing events and pooling across event types easier:
End of explanation
print(epochs.drop_log[-4:])
Explanation: Notice that the output now mentions "1 bad epoch dropped". In the tutorial
section tut-reject-epochs-section we saw how you can specify channel
amplitude criteria for rejecting epochs, but here we haven't specified any
such criteria. In this case, it turns out that the last event was too close
the end of the (cropped) raw file to accommodate our requested tmax of
0.7 seconds, so the final epoch was dropped because it was too short. Here
are the drop_log entries for the last 4 epochs (empty lists indicate
epochs that were not dropped):
End of explanation
epochs.plot(n_epochs=10)
Explanation: <div class="alert alert-info"><h4>Note</h4><p>If you forget to provide the event dictionary to the :class:`~mne.Epochs`
constructor, you can add it later by assigning to the ``event_id``
attribute::
epochs.event_id = event_dict</p></div>
Basic visualization of Epochs objects
The :class:~mne.Epochs object can be visualized (and browsed interactively)
using its :meth:~mne.Epochs.plot method:
End of explanation
print(epochs['face'])
Explanation: Notice that the individual epochs are sequentially numbered along the bottom
axis; the event ID associated with the epoch is marked on the top axis;
epochs are separated by vertical dashed lines; and a vertical solid green
line marks time=0 for each epoch (i.e., in this case, the stimulus onset
time for each trial). Epoch plots are interactive (similar to
:meth:raw.plot() <mne.io.Raw.plot>) and have many of the same interactive
controls as :class:~mne.io.Raw plots. Horizontal and vertical scrollbars
allow browsing through epochs or channels (respectively), and pressing
:kbd:? when the plot is focused will show a help screen with all the
available controls. See tut-visualize-epochs for more details (as well
as other ways of visualizing epoched data).
Subselecting epochs
Now that we have our :class:~mne.Epochs object with our descriptive event
labels added, we can subselect epochs easily using square brackets. For
example, we can load all the "catch trials" where the stimulus was a face:
End of explanation
# pool across left + right
print(epochs['auditory'])
assert len(epochs['auditory']) == (len(epochs['auditory/left']) +
len(epochs['auditory/right']))
# pool across auditory + visual
print(epochs['left'])
assert len(epochs['left']) == (len(epochs['auditory/left']) +
len(epochs['visual/left']))
Explanation: We can also pool across conditions easily, thanks to how MNE-Python handles
the / character in epoch labels (using what is sometimes called
"tag-based indexing"):
End of explanation
print(epochs[['right', 'bottom']])
Explanation: You can also pool conditions by passing multiple tags as a list. Note that
MNE-Python will not complain if you ask for tags not present in the object,
as long as it can find some match: the below example is parsed as
(inclusive) 'right' or 'bottom', and you can see from the output
that it selects only auditory/right and visual/right.
End of explanation
try:
print(epochs[['top', 'bottom']])
except KeyError:
print('Tag-based selection with no matches raises a KeyError!')
Explanation: However, if no match is found, an error is returned:
End of explanation
print(epochs[:10]) # epochs 0-9
print(epochs[1:8:2]) # epochs 1, 3, 5, 7
print(epochs['buttonpress'][:4]) # first 4 "buttonpress" epochs
print(epochs['buttonpress'][[0, 1, 2, 3]]) # same as previous line
Explanation: Selecting epochs by index
:class:~mne.Epochs objects can also be indexed with integers, :term:slices
<slice>, or lists of integers. This method of selection ignores event
labels, so if you want the first 10 epochs of a particular type, you can
select the type first, then use integers or slices:
End of explanation
epochs_eeg = epochs.copy().pick_types(meg=False, eeg=True)
print(epochs_eeg.ch_names)
new_order = ['EEG 002', 'STI 014', 'EOG 061', 'MEG 2521']
epochs_subset = epochs.copy().reorder_channels(new_order)
print(epochs_subset.ch_names)
del epochs_eeg, epochs_subset
Explanation: Selecting, dropping, and reordering channels
You can use the :meth:~mne.Epochs.pick, :meth:~mne.Epochs.pick_channels,
:meth:~mne.Epochs.pick_types, and :meth:~mne.Epochs.drop_channels methods
to modify which channels are included in an :class:~mne.Epochs object. You
can also use :meth:~mne.Epochs.reorder_channels for this purpose; any
channel names not provided to :meth:~mne.Epochs.reorder_channels will be
dropped. Note that these channel selection methods modify the object
in-place (unlike the square-bracket indexing to select epochs seen above)
so in interactive/exploratory sessions you may want to create a
:meth:~mne.Epochs.copy first.
End of explanation
epochs.rename_channels({'EOG 061': 'BlinkChannel'})
epochs.set_channel_types({'EEG 060': 'ecg'})
print(list(zip(epochs.ch_names, epochs.get_channel_types()))[-4:])
# let's set them back to the correct values before moving on
epochs.rename_channels({'BlinkChannel': 'EOG 061'})
epochs.set_channel_types({'EEG 060': 'eeg'})
Explanation: Changing channel name and type
You can change the name or type of a channel using
:meth:~mne.Epochs.rename_channels or :meth:~mne.Epochs.set_channel_types.
Both methods take :class:dictionaries <dict> where the keys are existing
channel names, and the values are the new name (or type) for that channel.
Existing channels that are not in the dictionary will be unchanged.
End of explanation
shorter_epochs = epochs.copy().crop(tmin=-0.1, tmax=0.1, include_tmax=True)
for name, obj in dict(Original=epochs, Cropped=shorter_epochs).items():
print('{} epochs has {} time samples'
.format(name, obj.get_data().shape[-1]))
Explanation: Selection in the time domain
To change the temporal extent of the :class:~mne.Epochs, you can use the
:meth:~mne.Epochs.crop method:
End of explanation
print(shorter_epochs)
Explanation: Cropping removed part of the baseline. When printing the
cropped :class:~mne.Epochs, MNE-Python will inform you about the time
period that was originally used to perform baseline correction by displaying
the string "baseline period cropped after baseline correction":
End of explanation
# shift times so that first sample of each epoch is at time zero
later_epochs = epochs.copy().shift_time(tshift=0., relative=False)
print(later_epochs.times[:3])
# shift times by a relative amount
later_epochs.shift_time(tshift=-7, relative=True)
print(later_epochs.times[:3])
del shorter_epochs, later_epochs
Explanation: However, if you wanted to expand the time domain of an :class:~mne.Epochs
object, you would need to go back to the :class:~mne.io.Raw data and
recreate the :class:~mne.Epochs with different values for tmin and/or
tmax.
It is also possible to change the "zero point" that defines the time values
in an :class:~mne.Epochs object, with the :meth:~mne.Epochs.shift_time
method. :meth:~mne.Epochs.shift_time allows shifting times relative to the
current values, or specifying a fixed time to set as the new time value of
the first sample (deriving the new time values of subsequent samples based on
the :class:~mne.Epochs object's sampling frequency).
End of explanation
eog_data = epochs.get_data(picks='EOG 061')
meg_data = epochs.get_data(picks=['mag', 'grad'])
channel_4_6_8 = epochs.get_data(picks=slice(4, 9, 2))
for name, arr in dict(EOG=eog_data, MEG=meg_data, Slice=channel_4_6_8).items():
print('{} contains {} channels'.format(name, arr.shape[1]))
Explanation: Note that although time shifting respects the sampling frequency (the spacing
between samples), it does not enforce the assumption that there is a sample
occurring at exactly time=0.
Extracting data in other forms
The :meth:~mne.Epochs.get_data method returns the epoched data as a
:class:NumPy array <numpy.ndarray>, of shape (n_epochs, n_channels,
n_times); an optional picks parameter selects a subset of channels by
index, name, or type:
End of explanation
df = epochs.to_data_frame(index=['condition', 'epoch', 'time'])
df.sort_index(inplace=True)
print(df.loc[('auditory/left', slice(0, 10), slice(100, 107)),
'EEG 056':'EEG 058'])
del df
Explanation: Note that if your analysis requires repeatedly extracting single epochs from
an :class:~mne.Epochs object, epochs.get_data(item=2) will be much
faster than epochs[2].get_data(), because it avoids the step of
subsetting the :class:~mne.Epochs object first.
You can also export :class:~mne.Epochs data to :class:Pandas DataFrames
<pandas.DataFrame>. Here, the :class:~pandas.DataFrame index will be
constructed by converting the time of each sample into milliseconds and
rounding it to the nearest integer, and combining it with the event types and
epoch numbers to form a hierarchical :class:~pandas.MultiIndex. Each
channel will appear in a separate column. Then you can use any of Pandas'
tools for grouping and aggregating data; for example, here we select any
epochs numbered 10 or less from the auditory/left condition, and extract
times between 100 and 107 ms on channels EEG 056 through EEG 058
(note that slice indexing within Pandas' :obj:~pandas.DataFrame.loc is
inclusive of the endpoint):
End of explanation
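A quick aside illustrating the item-based access noted a few paragraphs above (an added sketch; the index 2 is arbitrary):
single_epoch = epochs.get_data(item=2)  # avoids subsetting the Epochs object first
print(single_epoch.shape)               # expected: (1, n_channels, n_times)
del single_epoch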
epochs.save('saved-audiovisual-epo.fif', overwrite=True)
epochs_from_file = mne.read_epochs('saved-audiovisual-epo.fif', preload=False)
Explanation: See the tut-epochs-dataframe tutorial for many more examples of the
:meth:~mne.Epochs.to_data_frame method.
Loading and saving Epochs objects to disk
:class:~mne.Epochs objects can be loaded and saved in the .fif format
just like :class:~mne.io.Raw objects, using the :func:mne.read_epochs
function and the :meth:~mne.Epochs.save method. Functions are also
available for loading data that was epoched outside of MNE-Python, such as
:func:mne.read_epochs_eeglab and :func:mne.read_epochs_kit.
End of explanation
print(type(epochs))
print(type(epochs_from_file))
Explanation: The MNE-Python naming convention for epochs files is that the file basename
(the part before the .fif or .fif.gz extension) should end with
-epo or _epo, and a warning will be issued if the filename you
provide does not adhere to that convention.
As a final note, be aware that the class of the epochs object is different
when epochs are loaded from disk rather than generated from a
:class:~mne.io.Raw object:
End of explanation
print(all([isinstance(epochs, mne.BaseEpochs),
isinstance(epochs_from_file, mne.BaseEpochs)]))
Explanation: In almost all cases this will not require changing anything about your code.
However, if you need to do type checking on epochs objects, you can test
against the base class that these classes are derived from:
End of explanation
for epoch in epochs[:3]:
print(type(epoch))
Explanation: Iterating over Epochs
Iterating over an :class:~mne.Epochs object will yield :class:arrays
<numpy.ndarray> rather than single-trial :class:~mne.Epochs objects:
End of explanation
for index in range(3):
print(type(epochs[index]))
Explanation: If you want to iterate over :class:~mne.Epochs objects, you can use an
integer index as the iterator:
End of explanation |
15,338 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Set up rotation matrices representing a 3-1-3 $(\psi,\theta,\phi)$ Euler angle set.
Step1: $\tilde{\omega} = {}^\mathcal{B}C^{\mathcal{I}} \left({}^\mathcal{B}{\dot{C}}^{\mathcal{I}}\right)^T$
Step2: $\left[{}^\mathcal{I}\boldsymbol{\omega}^{\mathcal{B}}\right]_\mathcal{B} = \left[ \tilde{\omega}_{32} \quad \tilde{\omega}_{13} \quad \tilde{\omega}_{21} \right]$
Step3: Find EOM (second derivatives of Euler Angles)
Step4: Find initial orientation such that $\mathbf h$ is down-pointing
Step5: Generate MATLAB Code | Python Code:
aCi = rotMat(3,psi)
cCa = rotMat(1,th)
bCc = rotMat(3,ph)
aCi,cCa,bCc
bCi = bCc*cCa*aCi; bCi #3-1-3 rotation
bCi_dot = difftotalmat(bCi,t,{th:thd,psi:psid,ph:phd});
bCi_dot
Explanation: Set up rotation matrices representing a 3-1-3 $(\psi,\theta,\phi)$ Euler angle set.
End of explanation
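The helpers rotMat, difftotalmat and skew (and the symbols psi, th, ph, their time derivatives, and t) are defined outside this excerpt; the sketches below are assumptions consistent with how they are called in this notebook, using the passive (frame-rotation) sign convention for the DCMs:
from sympy import Matrix, cos, sin, diff

def rotMat(axis, angle):
    # single-axis DCM (passive/frame rotation) about body axis 1, 2 or 3 -- assumed convention
    c, s = cos(angle), sin(angle)
    if axis == 1:
        return Matrix([[1, 0, 0], [0, c, s], [0, -s, c]])
    if axis == 2:
        return Matrix([[c, 0, -s], [0, 1, 0], [s, 0, c]])
    return Matrix([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def skew(v):
    # skew-symmetric (cross-product) matrix of a 3-vector
    return Matrix([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def difftotal(expr, t, diffmap):
    # chain-rule total time derivative; t is unused because the expressions here
    # carry no explicit time dependence, only the symbols listed in diffmap
    return sum(diff(expr, s) * sdot for s, sdot in diffmap.items())

def difftotalmat(mat, t, diffmap):
    # element-wise total time derivative of a sympy Matrix
    return mat.applyfunc(lambda e: difftotal(e, t, diffmap))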
omega_tilde = bCi*bCi_dot.T; omega_tilde
Explanation: $\tilde{\omega} = {}^\mathcal{B}C^{\mathcal{I}} \left({}^\mathcal{B}{\dot{C}}^{\mathcal{I}}\right)^T$
End of explanation
omega = simplify(Matrix([omega_tilde[2,1],omega_tilde[0,2],omega_tilde[1,0]]))
omega
w1,w2,w3 = symbols('omega_1,omega_2,omega_3')
s0 = solve(omega - Matrix([w1,w2,w3]),[psid,thd,phd]); s0
Explanation: $\left[{}^\mathcal{I}\boldsymbol{\omega}^{\mathcal{B}}\right]_\mathcal{B} = \left[ \tilde{\omega}_{32} \quad \tilde{\omega}_{13} \quad \tilde{\omega}_{21} \right]$
End of explanation
I1,I2,I3 = symbols("I_1,I_2,I_3",real=True,positive=True)
iWb_B = omega
I_G_B = diag(I1,I2,I3)
I_G_B
diffmap = {th:thd,psi:psid,ph:phd,thd:thdd,psid:psidd,phd:phdd}
diffmap
t1 = I_G_B*difftotalmat(iWb_B,t,diffmap)
t2 = skew(iWb_B)*I_G_B*iWb_B
t1,t2
dh_G_B = t1+t2
dh_G_B
t3 = expand(dh_G_B[0]*cos(ph)*I2 - dh_G_B[1]*sin(ph)*I1)
sol_thdd = simplify(solve(t3,thdd))
sol_thdd
t4= expand(dh_G_B[0]*sin(ph)*I2 + dh_G_B[1]*cos(ph)*I1)
t4
sol_psidd = simplify(solve(t4,psidd))
sol_psidd
sol_phdd = solve(dh_G_B[2],phdd)
sol_phdd
Explanation: Find EOM (second derivatives of Euler Angles)
End of explanation
h = sqrt(((I_G_B*Matrix([w1,w2,w3])).transpose()*(I_G_B*Matrix([w1,w2,w3])))[0]);h
eqs1 = simplify(bCi.transpose()*I_G_B*Matrix([w1,w2,w3]) - Matrix([0,0,-h])); eqs1 #equal 0
simplify(solve(simplify(eqs1[0]*cos(psi) + eqs1[1]*sin(psi)),ph)) #phi solution
solve(simplify(expand(simplify(-eqs1[0]*sin(psi) + eqs1[1]*cos(psi)).subs(ph,atan(I1*w1/I2/w2)))),th) #th solution
simplify(eqs1[2].subs(ph,atan(I1*w1/I2/w2)))
Explanation: Find initial orientation such that $\mathbf h$ is down-pointing
End of explanation
out = codegen(("eom1",sol_psidd[0]), 'Octave', argument_sequence=[th,thd,psi,psid,ph,phd,I1,I2,I3]);out
codegen(("eom1",sol_thdd[0]), 'Octave', argument_sequence=[th,thd,psi,psid,ph,phd,I1,I2,I3])
codegen(("eom1",sol_phdd[0]), 'Octave', argument_sequence=[th,thd,psi,psid,ph,phd,I1,I2,I3,psidd])
codegen(("eom1",[s0[psid],s0[thd],s0[phd]]), 'Octave', argument_sequence=[w1,w2,w3,th,thd,psi,psid,ph,phd,I1,I2,I3,psidd])
codegen(("eom1",bCi), 'Octave', argument_sequence=[th,thd,psi,psid,ph,phd,I1,I2,I3,psidd])
codegen(("eom1",omega), 'Octave', argument_sequence=[w1,w2,w3,th,thd,psi,psid,ph,phd,I1,I2,I3,psidd])
Explanation: Generate MATLAB Code
End of explanation |
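A small usage note added here (a sketch, not from the original notebook): codegen returns a list of (filename, source) pairs, so the Octave routine captured in out above can be inspected or written to disk:
fname, src = out[0]          # e.g. ('eom1.m', '...generated Octave code...')
print(src)
with open(fname, 'w') as fh: # writing the file out is optional; the location is illustrative
    fh.write(src)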
15,339 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-2', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: SANDBOX-2
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
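For illustration only, the documented signature with obviously hypothetical placeholder values (keep it commented out until real author details are available):
# DOC.set_author("Jane Doe", "jane.doe@example.org")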
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation is included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation is included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation is included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
15,340 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your First CAS Connection from Python
Let's start with a gentle introduction to the Python CAS client by doing some basic operations like creating a CAS connection and running a simple action. You'll need to have Python installed as well as the SWAT Python package from SAS, and you'll need a running CAS server.
We will be using Python 3 for our example. Specifically, we will be using the IPython interactive prompt (type 'ipython' rather than 'python' at your command prompt). The first thing we need to do is import SWAT and create a CAS session. We will use the name 'mycas1' for our CAS hostname and 12345 as our CAS port number. In this case, we will use username/password authentication, but other authentication mechanisms are also possible depending on your configuration.
Step1: As you can see above, we have a session on the server. It has been assigned a unique session ID and more user-friendly name. In this case, we are using the binary CAS protocol as opposed to the REST interface. We can now run CAS actions in the session. Let's begin with a simple one
Step2: The listnodes action returns a CASResults object (which is just a subclass of Python's ordered dictionary). It contains one key ('nodelist') which holds a Pandas DataFrame. We can now grab that DataFrame to do further operations on it.
Step3: Use DataFrame selection to subset the columns.
Step4: In the code above, we are doing some standard DataFrame operations using expressions to filter the DataFrame to include only worker nodes or controller nodes. Pandas DataFrames support lots of ways of slicing and dicing your data. If you aren't familiar with them, you'll want to get acquainted on the Pandas web site.
When you are finished with a CAS session, it's always a good idea to clean up. | Python Code:
# Import the SWAT package which contains the CAS interface
import swat
# Create a CAS session on mycas1 port 12345
conn = swat.CAS('mycas1', 12345, 'username', 'password')
Explanation: Your First CAS Connection from Python
Let's start with a gentle introduction to the Python CAS client by doing some basic operations like creating a CAS connection and running a simple action. You'll need to have Python installed as well as the SWAT Python package from SAS, and you'll need a running CAS server.
We will be using Python 3 for our example. Specifically, we will be using the IPython interactive prompt (type 'ipython' rather than 'python' at your command prompt). The first thing we need to do is import SWAT and create a CAS session. We will use the name 'mycas1' for our CAS hostname and 12345 as our CAS port number. In this case, we will use username/password authentication, but other authentication mechanisms are also possible depending on your configuration.
End of explanation
# Run the builtins.listnodes action
nodes = conn.listnodes()
nodes
Explanation: As you can see above, we have a session on the server. It has been assigned a unique session ID and more user-friendly name. In this case, we are using the binary CAS protocol as opposed to the REST interface. We can now run CAS actions in the session. Let's begin with a simple one: listnodes.
End of explanation
# Grab the nodelist DataFrame
df = nodes['nodelist']
df
Explanation: The listnodes action returns a CASResults object (which is just a subclass of Python's ordered dictionary). It contains one key ('nodelist') which holds a Pandas DataFrame. We can now grab that DataFrame to do further operations on it.
End of explanation
roles = df[['name', 'role']]
roles
# Extract the worker nodes using a DataFrame mask
roles[roles.role == 'worker']
# Extract the controllers using a DataFrame mask
roles[roles.role == 'controller']
Explanation: Use DataFrame selection to subset the columns.
End of explanation
conn.close()
Explanation: In the code above, we are doing some standard DataFrame operations using expressions to filter the DataFrame to include only worker nodes or controller nodes. Pandas DataFrames support lots of ways of slicing and dicing your data. If you aren't familiar with them, you'll want to get acquainted on the Pandas web site.
When you are finished with a CAS session, it's always a good idea to clean up.
End of explanation |
15,341 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Azure Fine tuning example
In this example we'll try to go over all operations that can be done using the Azure endpoints and their differences with the OpenAI endpoints (if any).<br>
This example focuses on finetuning but touches on the majority of operations that are also available using the API. This example is meant to be a quick way of showing simple operations and is not meant as a finetune model adaptation tutorial.
Step1: Setup
In the following section the endpoint and key need to be set up for the next sections to work.<br> Please go to https
Step2: Files
In the next section we will focus on the files operations
Step3: Files
Step4: Files
Step5: Files
Step6: Files
Step7: Finetune
In this section we are going to use the two training and validation files that we imported in the previous section, to train a finetune model.
Finetune
Step8: Finetune
Step9: Finetune
Step10: Finetune
Step11: Deployments
In this section we are going to create a deployment using the fine-tuned model that we just adapted and then use the deployment to create a simple completion operation.
Deployments
Step12: Deployments
Step13: Deployments
Step14: Completions
Now let's send a sample completion to the deployment.
Step15: Deployments | Python Code:
import openai
from openai import cli
Explanation: Azure fine-tuning example
In this example we'll try to go over all operations that can be done using the Azure endpoints and their differences from the OpenAI endpoints (if any).<br>
This example focuses on fine-tuning but touches on the majority of operations that are also available using the API. This example is meant to be a quick way of showing simple operations and is not meant as a fine-tune model adaptation tutorial.
End of explanation
openai.api_key = '' # Please add your api key here
openai.api_base = '' # Please add your endpoint here
openai.api_type = 'azure'
openai.api_version = '2022-03-01-preview' # this may change in the future
Explanation: Setup
In the following section the endpoint and key need to be set up for the next sections to work.<br> Please go to https://portal.azure.com, find your resource and then under "Resource Management" -> "Keys and Endpoints" look for the "Endpoint" value and one of the Keys. They will act as api_base and api_key in the code below.
End of explanation
import shutil
import json
training_file_name = 'training.jsonl'
validation_file_name = 'validation.jsonl'
sample_data = [{"prompt": "When I go to the store, I want an", "completion": "apple"},
{"prompt": "When I go to work, I want a", "completion": "coffe"},
{"prompt": "When I go home, I want a", "completion": "soda"}]
print(f'Generating the training file: {training_file_name}')
with open(training_file_name, 'w') as training_file:
for entry in sample_data:
json.dump(entry, training_file)
training_file.write('\n')
print(f'Copying the training file to the validation file')
shutil.copy(training_file_name, validation_file_name)
Explanation: Files
In the next section we will focus on the files operations: importing, listing, retrieving, deleting. For this we need to create 2 temporary files with some sample data. For the sake of simplicity, we will use the same data for training and validation.
End of explanation
print('Checking for existing uploaded files.')
results = []
files = openai.File.list().data
print(f'Found {len(files)} total uploaded files in the subscription.')
for item in files:
if item["filename"] in [training_file_name, validation_file_name]:
results.append(item["id"])
print(f'Found {len(results)} already uploaded files that match our names.')
Explanation: Files: Listing
List all of the uploaded files and check for the ones that are named "training.jsonl" or "validation.jsonl"
End of explanation
print(f'Deleting already uploaded files.')
for id in results:
openai.File.delete(sid = id)
Explanation: Files: Deleting
Let's now delete those found files (if any) since we're going to be re-uploading them next.
End of explanation
import time
def check_status(training_id, validation_id):
train_status = openai.File.retrieve(training_id)["status"]
valid_status = openai.File.retrieve(validation_id)["status"]
print(f'Status (training_file | validation_file): {train_status} | {valid_status}')
return (train_status, valid_status)
#importing our two files
training_id = cli.FineTune._get_or_upload(training_file_name, True)
validation_id = cli.FineTune._get_or_upload(validation_file_name, True)
#checking the status of the imports
(train_status, valid_status) = check_status(training_id, validation_id)
while train_status not in ["succeeded", "failed"] or valid_status not in ["succeeded", "failed"]:
time.sleep(1)
(train_status, valid_status) = check_status(training_id, validation_id)
Explanation: Files: Importing & Retrieving
Now, let's import our two files ('training.jsonl' and 'validation.jsonl') and keep those IDs since we're going to use them later for finetuning.<br>
For this operation we are going to use the cli wrapper, which does a few more checks before uploading and also gives us progress. In addition, after uploading we're going to check the status of our import until it has succeeded (or failed if something goes wrong)
End of explanation
print(f'Downloading training file: {training_id}')
result = openai.File.download(training_id)
print(result)
Explanation: Files: Downloading
Now let's download one of the files, the training file for example, to check that everything was in order during importing and all bits are there.
End of explanation
create_args = {
"training_file": training_id,
"validation_file": validation_id,
"model": "curie",
"compute_classification_metrics": True,
"classification_n_classes": 3
}
resp = openai.FineTune.create(**create_args)
job_id = resp["id"]
status = resp["status"]
print(f'Fine-tuning model with jobID: {job_id}.')
Explanation: Finetune
In this section we are going to use the two training and validation files that we imported in the previous section, to train a finetune model.
Finetune: Adapt
First let's create the finetune adaptation job.
End of explanation
import signal
import datetime
def signal_handler(sig, frame):
status = openai.FineTune.retrieve(job_id).status
print(f"Stream interrupted. Job is still {status}.")
return
print(f'Streaming events for the fine-tuning job: {job_id}')
signal.signal(signal.SIGINT, signal_handler)
events = openai.FineTune.stream_events(job_id)
try:
for event in events:
print(f'{datetime.datetime.fromtimestamp(event["created_at"])} {event["message"]}')
except Exception:
print("Stream interrupted (client disconnected).")
Explanation: Finetune: Streaming
While the job runs, we can subscribe to the streaming events to check the progress of the operation.
End of explanation
status = openai.FineTune.retrieve(id=job_id)["status"]
if status not in ["succeeded", "failed"]:
print(f'Job not in terminal status: {status}. Waiting.')
while status not in ["succeeded", "failed"]:
time.sleep(2)
status = openai.FineTune.retrieve(id=job_id)["status"]
print(f'Status: {status}')
else:
print(f'Finetune job {job_id} finished with status: {status}')
print('Checking other finetune jobs in the subscription.')
result = openai.FineTune.list()
print(f'Found {len(result.data)} finetune jobs.')
Explanation: Finetune: Listing and Retrieving
Now let's check that our operation was successful and in addition we can look at all of the finetuning operations using a list operation.
End of explanation
# openai.FineTune.delete(sid=job_id)
Explanation: Finetune: Deleting
Finally we can delete our finetune job.<br>
WARNING: Please skip this step if you want to continue with the next section as the finetune model is needed. (The delete code is commented out by default)
End of explanation
# First, let's get the model from the previous job:
result = openai.FineTune.retrieve(id=job_id)
if result["status"] == 'succeeded':
model = result["fine_tuned_model"]
# Now let's create the deployment
print(f'Creating a new deployment with model: {model}')
result = openai.Deployment.create(model=model, scale_settings={"scale_type":"manual", "capacity": 1})
deployment_id = result["id"]
Explanation: Deployments
In this section we are going to create a deployment using the fine-tuned model that we just adapted and then use the deployment to create a simple completion operation.
Deployments: Create
Let's create a deployment using the fine-tune model.
End of explanation
print(f'Checking for deployment status.')
resp = openai.Deployment.retrieve(id=deployment_id)
status = resp["status"]
print(f'Deployment {deployment_id} is with status: {status}')
Explanation: Deployments: Retrieving
Now let's check the status of the newly created deployment
End of explanation
print('While the new deployment is still being created, selecting an already succeeded one instead.')
deployment_id = None
result = openai.Deployment.list()
for deployment in result.data:
if deployment["status"] == "succeeded":
deployment_id = deployment["id"]
break
if not deployment_id:
print('No deployment with status: succeeded found.')
else:
print(f'Found a successful deployment with id: {deployment_id}.')
Explanation: Deployments: Listing
Now because creating a new deployment takes a long time, let's look in the subscription for an already finished deployment that succeeded.
End of explanation
print('Sending a test completion job')
start_phrase = 'When I go to the store, I want a'
response = openai.Completion.create(engine=deployment_id, prompt=start_phrase, max_tokens=4)
text = response['choices'][0]['text'].replace('\n', '').replace(' .', '.').strip()
print(f'"{start_phrase} {text}"')
Explanation: Completions
Now let's send a sample completion to the deployment.
End of explanation
print(f'Deleting deployment: {deployment_id}')
openai.Deployment.delete(sid=deployment_id)
Explanation: Deployments: Delete
Finally let's delete the deployment
End of explanation |
15,342 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parse a description into components
This notebook requires at least version 0.8.8.
Step1: We have some text
Step2: To read this with striplog, we need to define a Lexicon. This is a dictionary-like object full of regular expressions, which acts as a bridge between this unstructured description and a dictionary-like Component object which striplog wants. The Lexicon also contains abbreviations for converting abbreviated text like cuttings descriptions into expanded words.
A Lexicon to read only this text might look like
Step3: Now we can parse the text with it
Step4: But this is obviously a bit of a pain to make and maintain. So instead of defining a Lexicon from scratch, we'll modify the default one
Step5: Parsing with this yields the same results as before...
Step6: ...but we can parse more things now | Python Code:
import striplog
striplog.__version__
Explanation: Parse a description into components
This notebook requires at least version 0.8.8.
End of explanation
text = "wet silty fine sand with tr clay"
Explanation: We have some text:
End of explanation
from striplog import Lexicon
lex_dict = {
'lithology': ['sand', 'clay'],
'grainsize': ['fine'],
'modifier': ['silty'],
'amount': ['trace'],
'moisture': ['wet', 'dry'],
'abbreviations': {'tr': 'trace'},
'splitters': ['with'],
'parts_of_speech': {'noun': ['lithology'],
'adjective': ['grainsize', 'modifier', 'moisture'],
'subordinate': ['amount'],
}
}
lexicon = Lexicon(lex_dict)
Explanation: To read this with striplog, we need to define a Lexicon. This is a dictionary-like object full of regular expressions, which acts as a bridge between this unstructured description and a dictionary-like Component object which striplog wants. The Lexicon also contains abbreviations for converting abbreviated text like cuttings descriptions into expanded words.
A Lexicon to read only this text might look like:
End of explanation
from striplog import Interval
Interval._parse_description(text, lexicon=lexicon, max_component=3, abbreviations=True)
Explanation: Now we can parse the text with it:
End of explanation
# Make and expand the lexicon.
lexicon = Lexicon.default()
# Add moisture words (or could add as other 'modifiers').
lexicon.moisture = ['wet(?:tish)?', 'dry(?:ish)?']
lexicon.parts_of_speech['adjective'] += ['moisture']
# Add the comma as component splitter.
lexicon.splitters += [', ']
Explanation: But this is obviously a bit of a pain to make and maintain. So instead of defining a Lexicon from scratch, we'll modify the default one:
End of explanation
Interval._parse_description(text, lexicon=lexicon, max_component=3)
Explanation: Parsing with this yields the same results as before...
End of explanation
Interval._parse_description("Coarse sandstone with minor limestone", lexicon=lexicon, max_component=3)
Explanation: ...but we can parse more things now:
End of explanation |
15,343 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Repsly trial data
Step1: Let's see what the data looks like
Step2: As you can see above, each input vector X has 1+15*16=241 values, most of which are zeros. The first one is the trial start date as offset from 2016-01-01 and the rest is different usage parameters for the following 16 days. Data provided by batch read is randomly shuffled. Output values are stored in y and they represent if the user purchased the Repsly service after the trial or not.
Training
We will use Ensamble class for training and cross validation
Step3: We will train the best candidates a little bit more | Python Code:
from repsly_data import RepslyData
repsly_data = RepslyData()
print('Reading data (this might take a minute or so)...', end='')
repsly_data.read_data('data/trial_users_analysis.csv', mode='FC')
print('done.')
Explanation: Repsly trial data
End of explanation
read_batch = repsly_data.read_batch(batch_size=20)
X, y = next(read_batch)
print('X{}: {}'.format(list(X.shape), X))
print('y:', y)
Explanation: Let's see what the data looks like:
End of explanation
from repsly_nn import RepslyFC
from ensamble import Ensamble
ens = Ensamble()
arch = {
'no_of_layers': {'lin': (4, 8)},
'hidden_size': {'lin': (128, 384)},
'use_batch_norm': 'True',
'keep_prob': {'lin': (0.3, 0.70, 2)},
'input_keep_prob': {'lin': (0.65, 0.95, 2)},
'batch_norm_decay': 0.99 # {'inv-log': (0.9, 0.99, 2)},
}
learning_dict = {
'learning_rate': 0.001,
'decay_steps': 20,
'decay_rate': 0.99 #{'inv-log': (0.99, 0.999, 3)}
}
train_dict = {
'batch_size': 512,
'epochs': 100,
'skip_steps': 20
}
key='f1_score'
no_of_nets = 5
no_of_loops = 50
for _ in range(no_of_loops):
ens.add_nets(RepslyFC, arch=arch, data=repsly_data, learning_dict=learning_dict, no_of_nets=no_of_nets)
ens.train_untrained(train_dict)
ens.print_stat_by_key('f1_score')
Explanation: As you can see above, each input vector X has 1+15*16=241 values, most of which are zeros. The first one is the trial start date as offset from 2016-01-01 and the rest is different usage parameters for the following 16 days. Data provided by batch read is randomly shuffled. Output values are stored in y and they represent if the user purchased the Repsly service after the trial or not.
Training
We will use Ensamble class for training and cross validation
End of explanation
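As a quick sanity check of the layout described above, a single example can be split back into the start-day offset and a per-day usage matrix. This is only a sketch: it assumes the 240 usage values are ordered day-major as 16 days x 15 features, which the text does not state explicitly.
import numpy as np
x = np.asarray(X[0])             # one example from the batch read earlier
start_offset = x[0]              # days since 2016-01-01, per the description above
usage = x[1:].reshape(16, 15)    # assumed ordering: 16 trial days x 15 usage features
print('trial start offset:', start_offset)
print('usage matrix shape:', usage.shape)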
no_of_top_nets = 0
no_of_loops = 0
for _ in range(no_of_loops):
ens.train_top_nets_by_key_stat(key, no_of_top_nets, train_dict)
ens.print_stat_by_key('f1_score')
ens.print_stat_by_key('f1_score')
ens.print_stat_by_key('loss', reverse=True)
arch = {
'no_of_layers': 6,
'hidden_size': 256,
'use_batch_norm': 'True',
'keep_prob': 0.68,
'input_keep_prob': 0.72,
'batch_norm_decay': 0.99
}
learning_dict = {
'learning_rate': 0.001,
'decay_steps': 20,
'decay_rate': 0.99
}
train_dict = {
'batch_size': 512,
'epochs': 100,
'skip_steps': 20
}
Explanation: We will train the best candidates a little bit more:
End of explanation |
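The cell above only defines the fixed hyperparameters. Below is an illustrative sketch of how they could be fed back into the same Ensamble API used earlier (add_nets and train_untrained); treat it as a guess at the intended next step, since the original notebook stops at the dictionaries.
# Illustrative only: reuse the Ensamble API from the earlier cells with the fixed hyperparameters.
ens.add_nets(RepslyFC, arch=arch, data=repsly_data, learning_dict=learning_dict, no_of_nets=1)
ens.train_untrained(train_dict)
ens.print_stat_by_key('f1_score')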
15,344 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 3
Imports
Step1: Damped, driven nonlinear pendulum
The equations of motion for a simple pendulum of mass $m$, length $l$ are
Step4: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven pendulum. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
Step5: Simple pendulum
Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.
Integrate the equations of motion.
Plot $E/m$ versus time.
Plot $\theta(t)$ and $\omega(t)$ versus time.
Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant.
Anytime you have a differential equation with a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.
Step7: Damped pendulum
Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$.
Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$.
Decrease your atol and rtol even futher and make sure your solutions have converged.
Make a parametric plot of $[\theta(t),\omega(t)]$ versus time.
Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\omega \in [-10,10]$
Label your axes and customize your plot to make it beautiful and effective.
Step8: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
Step9: Use interact to explore the plot_pendulum function with | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 3
Imports
End of explanation
g = 9.81 # m/s^2
l = 0.5 # length of pendulum, in meters
tmax = 50. # seconds
t = np.linspace(0, tmax, int(100*tmax))
Explanation: Damped, driven nonlinear pendulum
The equations of motion for a simple pendulum of mass $m$, length $l$ are:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta
$$
When a damping and periodic driving force are added the resulting system has much richer and interesting dynamics:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta - a \omega - b \sin(\omega_0 t)
$$
In this equation:
$a$ governs the strength of the damping.
$b$ governs the strength of the driving force.
$\omega_0$ is the angular frequency of the driving force.
When $a=0$ and $b=0$, the energy/mass is conserved:
$$E/m =g\ell(1-\cos(\theta)) + \frac{1}{2}\ell^2\omega^2$$
Basic setup
Here are the basic parameters we are going to use for this exercise:
End of explanation
def derivs(y, t, a, b, omega0):
Compute the derivatives of the damped, driven pendulum.
Parameters
----------
y : ndarray
The solution vector at the current time t[i]: [theta[i],omega[i]].
t : float
The current time t[i].
a, b, omega0: float
The parameters in the differential equation.
Returns
-------
dy : ndarray
The vector of derviatives at t[i]: [dtheta[i],domega[i]].
# YOUR CODE HERE
theta = y[0]
omega = y[1]
    dtheta = omega
    domega = -(g/l) * np.sin(theta) - a*omega - b*np.sin(omega0*t)
    return np.array([dtheta, domega])
assert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.])
def energy(y):
Compute the energy for the state array y.
The state array y can have two forms:
1. It could be an ndim=1 array of np.array([theta,omega]) at a single time.
2. It could be an ndim=2 array where each row is the [theta,omega] at single
time.
Parameters
----------
y : ndarray, list, tuple
A solution vector
Returns
-------
E/m : float (ndim=1) or ndarray (ndim=2)
The energy per mass.
    # E/m = g*l*(1 - cos(theta)) + 0.5*l^2*omega^2, for a single state or an array of states
    y = np.asarray(y)
    if y.ndim == 1:
        theta, omega = y[0], y[1]
    else:
        theta, omega = y[:, 0], y[:, 1]
    return g*l*(1 - np.cos(theta)) + 0.5*l**2*omega**2
assert np.allclose(energy(np.array([np.pi,0])),g)
assert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1])))
Explanation: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven pendulum. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
# YOUR CODE HERE
raise NotImplementedError()
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this to grade the two plots and their tuning of atol, rtol.
Explanation: Simple pendulum
Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.
Integrate the equations of motion.
Plot $E/m$ versus time.
Plot $\theta(t)$ and $\omega(t)$ versus time.
Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant.
Anytime you have a differential equation with a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.
End of explanation
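For orientation only, here is one hedged way to call odeint for the undamped, undriven case described above (a = b = omega0 = 0), starting at rest pointing vertically upwards; the tolerances are just a reasonable starting point, not the tuned values the exercise asks you to find.
# Sketch, not the graded solution: integrate the simple pendulum with a = b = omega0 = 0.
ic = [np.pi, 0.0]                                  # at rest, pointing vertically upwards
soln = odeint(derivs, ic, t, args=(0.0, 0.0, 0.0),
              atol=1e-6, rtol=1e-5)                # tighten these until E/m stays constant
theta, omega = soln[:, 0], soln[:, 1]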
def plot_pendulum(a=0.0, b=0.0, omega0=0.0):
Integrate the damped, driven pendulum and make a phase plot of the solution.
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Damped pendulum
Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$.
Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$.
Decrease your atol and rtol even futher and make sure your solutions have converged.
Make a parametric plot of $[\theta(t),\omega(t)]$ versus time.
Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\omega \in [-10,10]$
Label your axes and customize your plot to make it beautiful and effective.
End of explanation
plot_pendulum(0.5, 0.0, 0.0)
Explanation: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Use interact to explore the plot_pendulum function with:
a: a float slider over the interval $[0.0,1.0]$ with steps of $0.1$.
b: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
omega0: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
End of explanation |
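Once plot_pendulum is implemented, the slider ranges listed above map directly onto an interact call; this sketch simply mirrors the interact usage that appears elsewhere in this document.
# Sketch: hook plot_pendulum up to the sliders described above.
interact(plot_pendulum, a=(0.0, 1.0, 0.1), b=(0.0, 10.0, 0.1), omega0=(0.0, 10.0, 0.1))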
15,345 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kalman filtering
Conditional Gaussian distribution
Let $Z=\left(\begin{matrix}X \\ Y\end{matrix}\right)$ be a Gaussian random vector with values in $\mathbb R^{n+d}$, with mean $\bar Z$ and covariance $Q_Z$, with
Step1: Linear Gaussian system in discrete time
$$
X_{k+1} = A\,X_{k} + B\,W_k\,,\ 0\leq k < k_{max}
$$
$X_k\to\mathbb{R}^n$
noise
Step2: A bit of vectorization
Step3: Linear Gaussian filtering
\begin{align}
\tag{state equation}
X_{k+1} &= A\,X_{k} + B\,W_k\,,\ 0\leq k<k_{max}
\\
\tag{observation equation}
Y_{k} &= H\,X_{k} + V_k\,,\ 0< k\leq k_{max}
\end{align}
$X_k\to\mathbb{R}^n$, $Y_k\to\mathbb{R}^d$
state noise
%matplotlib inline
from ipywidgets import interact, fixed
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
barZ = np.array([[1],[3]])
QZ = np.array([[3,1],[1,1]])
a = barZ[0]
b = QZ[0,0]
xx = np.linspace(-6, 10, 100)
R = QZ[0,0]-QZ[0,1]*QZ[0,1]/QZ[1,1]
def pltbayesgauss(obs):
hatX = barZ[0]+QZ[0,1]*(obs-barZ[1])/QZ[1,1]
plt.plot([obs,obs],[0,1],':')
    plt.plot(xx, stats.norm.pdf(xx, a, b),label='prior distribution')
    plt.plot(xx, stats.norm.pdf(xx, hatX, R),label='posterior distribution')
plt.ylim([0,0.25])
plt.legend()
plt.show()
interact(pltbayesgauss, obs=(-6,10,0.1))
plt.show()
Explanation: Kalman filtering
Conditional Gaussian distribution
Let $Z=\left(\begin{matrix}X \\ Y\end{matrix}\right)$ be a Gaussian random vector with values in $\mathbb R^{n+d}$, with mean $\bar Z$ and covariance $Q_Z$, where:
\begin{align*}
\bar Z &= \begin{pmatrix}\bar X \\ \bar Y \end{pmatrix}
&
Q_Z &=\begin{pmatrix} Q_{X X} & Q_{XY} \\ Q_{XY}^* &
Q_{YY}\end{pmatrix}
\end{align*}
with $Q_{YY}>0$. Then $X|Y$ is Gaussian $N(\widehat{X},R)$ with:
\begin{align*}
\widehat{X} &= \bar X + Q_{XY}\,Q_{YY}^{-1}\,(Y-\bar Y) \\
R &= Q_{XX}-Q_{XY}\,Q_{YY}^{-1}\,Q_{XY}^* \hskip 2em\textrm{(does not depend on the observation)}
\end{align*}
This is a Gaussian Bayesian problem:
- before the observation: our knowledge of $X$ is $N(\bar X,Q_{X X})$
- after the observation: our knowledge of $X$ is $N(\widehat{X},R)$
and we hope that $Q_{X X}>R$
End of explanation
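As a quick numeric check of the conditional mean and variance formulas above, they can be evaluated directly with the barZ and QZ defined in the code; y_obs below is just a hypothetical observed value of Y.
# Hypothetical observation; barZ and QZ come from the cell above.
y_obs = 4.0
hatX_check = barZ[0] + QZ[0, 1] / QZ[1, 1] * (y_obs - barZ[1])  # conditional mean of X given Y = y_obs
R_check = QZ[0, 0] - QZ[0, 1] * QZ[0, 1] / QZ[1, 1]             # conditional variance, independent of y_obs
print(hatX_check, R_check)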
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
kmax = 300
EX0, VX0 = 10, 5
A, B, QW = 0.9, 1, 0.1
sQW = np.sqrt(QW)
sVX0 = np.sqrt(VX0)
def sys_lin(EX0, sVX0, A, B, sQW):
W = sQW*np.random.randn(kmax)
X = np.ones(kmax+1)
X[0] = EX0+sVX0*np.random.randn()
for k in range(kmax):
X[k+1] = A*X[k]+B*W[k]
return X
def sys_lin_loi(EX0, sVX0, A, B, sQW):
espX = np.ones(kmax+1)
varX = np.ones(kmax+1)
espX[0] = EX0
for k in range(kmax):
espX[k+1] = A*espX[k]
varX[k+1] = A*A*varX[k]+B*B*QW
return espX, varX
X = sys_lin(EX0, sVX0, A, B, sQW)
espX, varX = sys_lin_loi(EX0, sVX0, A, B, sQW)
plt.plot([0, kmax], [0, 0], color="g", linestyle=':')
plt.plot(espX,color='k')
plt.fill_between(range(kmax+1),espX+2*np.sqrt(varX),
espX-2*np.sqrt(varX), color = '0.75', alpha=0.4)
plt.plot(X)
plt.show()
from ipywidgets import interact, fixed
def plt_sys_lin(A, B, iseed):
np.random.seed(iseed)
X = sys_lin(10, 0, A, B, 1)
plt.plot([0, kmax], [0, 0], color="g", linestyle=':')
plt.plot(X)
plt.ylim([-4,15])
plt.show()
interact(plt_sys_lin, A=(0,1,0.01), B=(0.,6,0.1),
iseed=(1,100,1))
plt.show()
Explanation: Linear Gaussian system in discrete time
$$
X_{k+1} = A\,X_{k} + B\,W_k\,,\ 0\leq k < k_{max}
$$
$X_k\to\mathbb{R}^n$
noise: $W_k\to\mathbb{R}^m$, $W_k\sim N(0,Q_W)$
$A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times m}$
This is a Gaussian system: $X_{0:k_{max}}$ is a Gaussian random vector. (Notation: $Z_{k':k}=(Z_{k'},Z_{k'+1},\dots ,Z_{k})$, $k'\leq k$.)
The mean $\bar X_k = \mathbb{E}(X_k)$ and the covariance $Q^X_k = \mathrm{Var}(X_k)$ are given by:
\begin{align}
\bar X_{k+1} &= A\,\bar X_{k}
\\
Q^X_{k+1} &= A\,Q^X_{k}\,A^* + B\,Q_W\,B^*
\end{align}
End of explanation
kmax = 300
mcmax = 300
EX0, VX0 = 10, 5
A, B, QW = 0.9, 1, 0.1
sQW = np.sqrt(QW)
sVX0 = np.sqrt(VX0)
def sys_lin_vec(mcmax,EX0, sVX0, A, B, sQW):
W = sQW*np.random.randn(kmax,mcmax)
X = np.ones((kmax+1,mcmax))
X[0,] = EX0+sVX0*np.random.randn()
for k in range(kmax):
X[k+1,] = A*X[k,]+B*W[k,]
return X
X = sys_lin_vec(mcmax, EX0, sVX0, A, B, sQW)
plt.plot(X,alpha=.04,color='b')
plt.plot(espX,color='w')
plt.plot(espX+2*np.sqrt(varX),color='k')
plt.plot(espX-2*np.sqrt(varX),color='k')
plt.show()
mcmax = 10000
X = sys_lin_vec(mcmax, EX0, sVX0, A, B, sQW)
num_bins = 30
n, bins, patches = plt.hist(X[-1,], num_bins, normed=1,
facecolor='green', alpha=0.5)
plt.show()
Explanation: A bit of vectorization
End of explanation
import numpy as np
import matplotlib.pyplot as plt
kmax = 300
EX0, VX0 = 10, 5
A, B, QW = 0.9, 1, 0.1
H, QV = 1, 0.2
sQW = np.sqrt(QW)
sQV = np.sqrt(QV)
sVX0 = np.sqrt(VX0)
def sys_lin_esp_etat(EX0, sVX0, A, B, H, sQW, sQV):
W = sQW*np.random.randn(kmax)
V = sQV*np.random.randn(kmax)
X = np.ones(kmax+1)
Y = np.ones(kmax+1)
X[0] = EX0+sVX0*np.random.randn()
    Y[0] = 0 # we don't care about this value (it is never used)
for k in range(kmax):
X[k+1] = A*X[k]+B*W[k]
Y[k+1] = H*X[k+1]+V[k]
return X,Y
def kalman(EX0, sVX0, A, B, H, sQW, sQV, Y):
hatX = np.ones(kmax+1)
R = np.ones(kmax+1)
hatX[0] = EX0
R[0] = sVX0*sVX0
for k in range(kmax):
# prediction
predX = A*hatX[k]
predR = A*A*R[k]+B*B*sQW*sQW
# correction
gain = predR * H / (H*predR*H+sQV*sQV)
hatX[k+1] = predX + gain * (Y[k+1]-H*predX)
R[k+1] = (1-gain*H)*predR
return hatX, R
X,Y = sys_lin_esp_etat(EX0, sVX0, A, B, H, sQW, sQV)
espX, varX = sys_lin_loi(EX0, sVX0, A, B, sQW)
hatX, R = kalman(EX0, sVX0, A, B, H, sQW, sQV, Y)
plt.fill_between(range(kmax+1),espX+2*np.sqrt(varX),
espX-2*np.sqrt(varX),
color = 'g', alpha=0.12,
label=r'$\bar X_k\pm 2\,\sqrt{Q^X_k}$ (a priori)')
plt.fill_between(range(kmax+1),hatX+2*np.sqrt(R),
hatX-2*np.sqrt(R),
color = 'r', alpha=0.12,
label=r'$\hat X_k\pm 2\,\sqrt{R_k}$ (a posteriori)')
plt.plot(X,label=r'$X_k$')
plt.plot(espX,color='g',label=r'$\bar X_k$')
plt.plot(hatX,color='r',alpha=0.5,label=r'$\hat X_k$')
plt.legend()
plt.show()
Explanation: Linear Gaussian filtering
\begin{align}
\tag{state equation}
X_{k+1} &= A\,X_{k} + B\,W_k\,,\ 0\leq k<k_{max}
\\
\tag{observation equation}
Y_{k} &= H\,X_{k} + V_k\,,\ 0< k\leq k_{max}
\end{align}
$X_k\to\mathbb{R}^n$, $Y_k\to\mathbb{R}^d$
state noise: $W_k\to\mathbb{R}^m$, $W_k\sim N(0,Q_W)$
measurement noise: $V_k\to\mathbb{R}^d$, $V_k\sim N(0,Q_V)$, $Q_V>0$
$A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times m}$, $H\in\mathbb{R}^{d\times n}$
This is a Gaussian system: $(X_0,\dots,X_{k_{max}},Y_1,\dots,Y_{k_{max}})$ is a Gaussian random vector. (Notation: $Z_{k':k}=(Z_{k'},Z_{k'+1},\dots ,Z_{k})$, $k'\leq k$.)
Filtering: we want to estimate the hidden state from the observations. At time $k$, we have the observations $Y_{1:k}$ and we want to estimate $X_k$.
Kalman filter
The distribution of $X_k$ given the observations $Y_{1:k}$ is Gaussian with mean $\widehat{X}_k$ and covariance $R_k$, given by:
initialization
- $\widehat{X}_0 \leftarrow \bar{X}_0$
- $R_0 \leftarrow Q_0$
iterations $k=1,2,3\dots$
- prediction (computes the distribution of $X_k|Y_{0:k-1}$)
  * $\widehat{X}_{k^-} \leftarrow A\,\widehat{X}_{k-1}$
  * $R_{k^-} \leftarrow A\,R_{k-1}\,A^* + B\,Q^W\,B^*$
- correction (computes the distribution of $X_k|Y_{0:k}$)
  * $K_k \leftarrow R_{k^-}\,H^*\;[ H\,R_{k^-}\,H^*+Q^V ]^{-1}$ (gain)
  * $\widehat{X}_k \leftarrow \widehat{X}_{k^-} + K_k\;[ Y_k-H\,\widehat{X}_{k^-}]$
  * $R_k \leftarrow [ I-K_k\,H ]\;R_{k^-}$
End of explanation |
15,346 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer
Step24: Decoding - Training
Create a training decoding layer
Step27: Decoding - Inference
Create inference decoder
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step40: Batch and pad the source and target sequences
Step43: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step45: Save Parameters
Save the batch_size and save_path parameters for inference.
Step47: Checkpoint
Step50: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step52: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
source_id_text =list()
for line in source_text.split('\n'):
source_id_text.append([source_vocab_to_int[word] for word in line.split()])
end_of_sequence = target_vocab_to_int['<EOS>']
target_id_text =list()
for line in target_text.split('\n'):
target_id_text.append([target_vocab_to_int[word] for word in line.split()] + [end_of_sequence])
return (source_id_text, target_id_text)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
inputs = tf.placeholder(tf.int32, [None, None], name="input")
targets = tf.placeholder(tf.int32, [None, None], name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
target_sequence_length = tf.placeholder(tf.int32, shape=[None], name="target_sequence_length")
max_target_length = tf.reduce_max(target_sequence_length, name="max_target_len")
source_sequence_length = tf.placeholder(tf.int32, shape=[None], name="source_sequence_length")
return inputs, targets, learning_rate, keep_prob, target_sequence_length, max_target_length, source_sequence_length
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
removed_end = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
target_batch = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), removed_end], axis=1)
return target_batch
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch.
End of explanation
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
# Stacked LSTMs
def make_cell(rnn_size):
lstm = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
# Get output and state
enc_output, enc_state = tf.nn.dynamic_rnn(cell, enc_embed_input,
sequence_length=source_sequence_length, dtype=tf.float32)
return enc_output, enc_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
training_helper,
encoder_state,
output_layer)
training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,
impute_finished=True,
maximum_iterations=max_summary_length)
return training_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32),
[batch_size], name='start_tokens')
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,
start_tokens,
end_of_sequence_id)
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
inference_helper,
encoder_state,
output_layer)
inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)
return inference_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
def make_cell(rnn_size):
lstm = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
with tf.variable_scope("decode"):
training_decoder_output = decoding_layer_train(encoder_state, dec_cell,
dec_embed_input, target_sequence_length,
max_target_sequence_length, output_layer,
keep_prob)
with tf.variable_scope("decode", reuse=True):
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
inference_decoder_output = decoding_layer_infer(encoder_state, dec_cell,
dec_embeddings, start_of_sequence_id,
end_of_sequence_id,
max_target_sequence_length, target_vocab_size,
output_layer, batch_size, keep_prob)
return training_decoder_output, inference_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
_, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
training_decoder_output, inference_decoder_output = decoding_layer(dec_input, enc_state,
target_sequence_length,
max_target_sentence_length,
rnn_size, num_layers,
target_vocab_to_int,
target_vocab_size,
batch_size, keep_prob,
dec_embedding_size)
return training_decoder_output, inference_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
# Number of Epochs
epochs = 20
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 200
# Number of Layers
num_layers = 5
# Embedding Size
encoding_embedding_size = 64
decoding_embedding_size = 64
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.6
display_step = 100
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Explanation: Batch and pad the source and target sequences
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
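A quick illustrative check of the function (the tiny vocabulary below is made up; the real source_vocab_to_int is loaded from the preprocessed data):
toy_vocab = {'<UNK>': 2, 'he': 10, 'saw': 11, 'a': 12, 'truck': 13, '.': 14}
print(sentence_to_seq('He saw a RED truck .', toy_vocab))
# -> [10, 11, 12, 2, 13, 14]   ('red' is not in the toy vocab, so it maps to <UNK>)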
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
15,347 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spatiotemporal permutation F-test on full sensor data
Tests for differential evoked responses in at least
one condition using a permutation clustering test.
The FieldTrip neighbor templates will be used to determine
the adjacency between sensors. This serves as a spatial prior
to the clustering. Spatiotemporal clusters will then
be visualized using custom matplotlib code.
See the FieldTrip website_ for a caveat regarding
the possible interpretation of "significant" clusters.
Step1: Set parameters
Step2: Read epochs for the channel of interest
Step3: Find the FieldTrip neighbor definition to setup sensor connectivity
Step4: Compute permutation statistic
How does it work? We use clustering to bind together features which are
similar. Our features are the magnetic fields measured over our sensor
array at different times. This reduces the multiple comparison problem.
To compute the actual test-statistic, we first sum all F-values in all
clusters. We end up with one statistic for each cluster.
Then we generate a distribution from the data by shuffling our conditions
between our samples and recomputing our clusters and the test statistics.
We test for the significance of a given cluster by computing the probability
of observing a cluster of that size. For more background read
Step5: Note. The same functions work with source estimate. The only differences
are the origin of the data, the size, and the connectivity definition.
It can be used for single trials or for groups of subjects.
Visualize clusters | Python Code:
# Authors: Denis Engemann <denis.engemann@gmail.com>
# Jona Sassenhagen <jona.sassenhagen@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
from mne.viz import plot_topomap
import mne
from mne.stats import spatio_temporal_cluster_test
from mne.datasets import sample
from mne.channels import find_ch_connectivity
from mne.viz import plot_compare_evokeds
print(__doc__)
Explanation: Spatiotemporal permutation F-test on full sensor data
Tests for differential evoked responses in at least
one condition using a permutation clustering test.
The FieldTrip neighbor templates will be used to determine
the adjacency between sensors. This serves as a spatial prior
to the clustering. Spatiotemporal clusters will then
be visualized using custom matplotlib code.
See the FieldTrip website_ for a caveat regarding
the possible interpretation of "significant" clusters.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id = {'Aud/L': 1, 'Aud/R': 2, 'Vis/L': 3, 'Vis/R': 4}
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 30, fir_design='firwin')
events = mne.read_events(event_fname)
Explanation: Set parameters
End of explanation
picks = mne.pick_types(raw.info, meg='mag', eog=True)
reject = dict(mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=reject, preload=True)
epochs.drop_channels(['EOG 061'])
epochs.equalize_event_counts(event_id)
X = [epochs[k].get_data() for k in event_id] # as 3D matrix
X = [np.transpose(x, (0, 2, 1)) for x in X] # transpose for clustering
Explanation: Read epochs for the channel of interest
End of explanation
connectivity, ch_names = find_ch_connectivity(epochs.info, ch_type='mag')
print(type(connectivity)) # it's a sparse matrix!
plt.imshow(connectivity.toarray(), cmap='gray', origin='lower',
interpolation='nearest')
plt.xlabel('{} Magnetometers'.format(len(ch_names)))
plt.ylabel('{} Magnetometers'.format(len(ch_names)))
plt.title('Between-sensor adjacency')
Explanation: Find the FieldTrip neighbor definition to setup sensor connectivity
End of explanation
# set cluster threshold
threshold = 50.0 # very high, but the test is quite sensitive on this data
# set family-wise p-value
p_accept = 0.01
cluster_stats = spatio_temporal_cluster_test(X, n_permutations=1000,
threshold=threshold, tail=1,
n_jobs=1, buffer_size=None,
connectivity=connectivity)
T_obs, clusters, p_values, _ = cluster_stats
good_cluster_inds = np.where(p_values < p_accept)[0]
Explanation: Compute permutation statistic
How does it work? We use clustering to bind together features which are
similar. Our features are the magnetic fields measured over our sensor
array at different times. This reduces the multiple comparison problem.
To compute the actual test-statistic, we first sum all F-values in all
clusters. We end up with one statistic for each cluster.
Then we generate a distribution from the data by shuffling our conditions
between our samples and recomputing our clusters and the test statistics.
We test for the significance of a given cluster by computing the probability
of observing a cluster of that size. For more background read:
Maris/Oostenveld (2007), "Nonparametric statistical testing of EEG- and
MEG-data" Journal of Neuroscience Methods, Vol. 164, No. 1., pp. 177-190.
doi:10.1016/j.jneumeth.2007.03.024
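The label-shuffling idea itself is generic; a minimal sketch, independent of MNE and using made-up 1-D data, looks like this:
# Minimal permutation-test sketch (illustration only, not part of the MNE pipeline)
rng = np.random.RandomState(0)
a, b = rng.randn(20) + 0.5, rng.randn(20)            # two fake conditions
observed = a.mean() - b.mean()
pooled = np.concatenate([a, b])
null = []
for _ in range(1000):                                 # shuffle the condition labels
    rng.shuffle(pooled)
    null.append(pooled[:20].mean() - pooled[20:].mean())
p = np.mean(np.abs(null) >= np.abs(observed))         # two-sided permutation p-value
print(observed, p)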
End of explanation
# configure variables for visualization
colors = {"Aud": "crimson", "Vis": 'steelblue'}
linestyles = {"L": '-', "R": '--'}
# get sensor positions via layout
pos = mne.find_layout(epochs.info).pos
# organize data for plotting
evokeds = {cond: epochs[cond].average() for cond in event_id}
# loop over clusters
for i_clu, clu_idx in enumerate(good_cluster_inds):
# unpack cluster information, get unique indices
time_inds, space_inds = np.squeeze(clusters[clu_idx])
ch_inds = np.unique(space_inds)
time_inds = np.unique(time_inds)
# get topography for F stat
f_map = T_obs[time_inds, ...].mean(axis=0)
# get signals at the sensors contributing to the cluster
sig_times = epochs.times[time_inds]
# create spatial mask
mask = np.zeros((f_map.shape[0], 1), dtype=bool)
mask[ch_inds, :] = True
# initialize figure
fig, ax_topo = plt.subplots(1, 1, figsize=(10, 3))
# plot average test statistic and mark significant sensors
image, _ = plot_topomap(f_map, pos, mask=mask, axes=ax_topo, cmap='Reds',
vmin=np.min, vmax=np.max, show=False)
# create additional axes (for ERF and colorbar)
divider = make_axes_locatable(ax_topo)
# add axes for colorbar
ax_colorbar = divider.append_axes('right', size='5%', pad=0.05)
plt.colorbar(image, cax=ax_colorbar)
ax_topo.set_xlabel(
'Averaged F-map ({:0.3f} - {:0.3f} s)'.format(*sig_times[[0, -1]]))
# add new axis for time courses and plot time courses
ax_signals = divider.append_axes('right', size='300%', pad=1.2)
title = 'Cluster #{0}, {1} sensor'.format(i_clu + 1, len(ch_inds))
if len(ch_inds) > 1:
title += "s (mean)"
plot_compare_evokeds(evokeds, title=title, picks=ch_inds, axes=ax_signals,
colors=colors, linestyles=linestyles, show=False,
split_legend=True, truncate_yaxis='auto')
# plot temporal cluster extent
ymin, ymax = ax_signals.get_ylim()
ax_signals.fill_betweenx((ymin, ymax), sig_times[0], sig_times[-1],
color='orange', alpha=0.3)
# clean up viz
mne.viz.tight_layout(fig=fig)
fig.subplots_adjust(bottom=.05)
plt.show()
Explanation: Note. The same functions work with source estimate. The only differences
are the origin of the data, the size, and the connectivity definition.
It can be used for single trials or for groups of subjects.
Visualize clusters
End of explanation |
15,348 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature extractor setup
This line constructs a "feature extractor" that uses Wikipedia's API to solve dependencies.
Step1: Using the extractor to extract features
The following line demonstrates a simple feature extraction. Note that we wrap the call in a list() because it returns a generator.
Step2: Defining a custom feature
The next block defines a new feature and sets the dependencies to be two other features
Step3: There are easier ways that we can do this though. I've overloaded simple mathematical operators to allow you to do simple math with features and get a feature returned. This code roughly corresponds to what's going on above.
Step4: Using datasources
There's also a set of datasources that are part of the dependency injection system. See revscoring/revscoring/datasources. I'll need to rename the diff datasource when I import it because of the name clash. FWIW, you usually don't use features and datasources in the same context, so there's some name overlap.
Step5: OK. Let's define a new feature for counting the number of templates added. I'll make use of mwparserfromhell to do this. See the docs.
Step6: Debugging
There's some facilities in place to help you make sense of issues when they arise. The most important is the draw function.
Step7: In the tree structure above, you can see how our new feature depends on "diff.added_segments" which depends on "diff.operations" which depends (as you might imagine) on the current and parent revision. Other features are a bit more complicated. | Python Code:
extractor = APIExtractor(api.Session("https://en.wikipedia.org/w/api.php"))
Explanation: Feature extractor setup
This line constructs a "feature extractor" that uses Wikipedia's API to solve dependencies.
End of explanation
list(extractor.extract(123456789, [diff.chars_added]))
Explanation: Using the extractor to extract features
The following line demonstrates a simple feature extraction. Note that we wrap the call in a list() because it returns a generator.
End of explanation
chars_added_ratio = Feature("diff.chars_added_ratio",
lambda a,c: a/max(c, 1), # Prevents divide by zero
depends_on=[diff.chars_added, revision.chars],
returns=float)
list(extractor.extract(123456789, [chars_added_ratio]))
Explanation: Defining a custom feature
The next block defines a new feature and sets the dependencies to be two other features: diff.chars_added and revision.chars. This feature represents the proportion of characters in the current version of the page that the current edit is responsible for adding.
End of explanation
chars_added_ratio = diff.chars_added / modifiers.max(revision.chars, 1) # Prevents divide by zero
list(extractor.extract(123456789, [chars_added_ratio]))
Explanation: There are easier ways that we can do this though. I've overloaded simple mathematical operators to allow you to do simple math with features and get a feature returned. This code roughly corresponds to what's going on above.
End of explanation
from revscoring.datasources import diff as diff_datasource
list(extractor.extract(662953550, [diff_datasource.added_segments]))
Explanation: Using datasources
There's also a set of datasources that are part of the dependency injection system. See revscoring/revscoring/datasources. I'll need to rename the diff datasource when I import it because of the name clash. FWIW, you usually don't use features and datasources in the same context, so there's some name overlap.
End of explanation
import mwparserfromhell as mwp
templates_added = Feature("diff.templates_added",
lambda add_segments: sum(len(mwp.parse(s).filter_templates()) > 0 for s in add_segments),
depends_on=[diff_datasource.added_segments],
returns=int)
list(extractor.extract(662953550, [templates_added]))
Explanation: OK. Let's define a new feature for counting the number of templates added. I'll make use of mwparserfromhell to do this. See the docs.
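For reference, a small stand-alone illustration of what filter_templates() gives you (the wikitext snippet here is invented):
wikicode = mwp.parse("Some text {{citation needed}} and {{Infobox person|name=Ada}}.")
print([str(t.name) for t in wikicode.filter_templates()])
# -> ['citation needed', 'Infobox person']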
End of explanation
from revscoring.dependent import draw
draw(templates_added)
Explanation: Debugging
There's some facilities in place to help you make sense of issues when they arise. The most important is the draw function.
End of explanation
draw(diff.added_badwords_ratio)
Explanation: In the tree structure above, you can see how our new feature depends on "diff.added_segments" which depends on "diff.operations" which depends (as you might imagine) on the current and parent revision. Other features are a bit more complicated.
End of explanation |
15,349 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
W8 Lab Assignment
Step1: Ratio and logarithm
If you use linear scale to visualize ratios, it can be very misleading.
Let's first create some ratios.
Step2: Plot on the linear scale using the scatter() function.
Step3: Plot on the log scale.
Step4: What do you see from the two plots? Why do we need to use log scale to visualize ratios?
Let's practice this using random numbers. Generate 10 random numbers between [0,1], calculate the ratios between two consecutive numbers (the second number divides by the first, and so on), and plot the ratios on the linear and log scale.
Step5: Log-bin
Let's first see what the histogram looks like if we do not use the log scale.
Step6: As we can see, most votes fall in the first bin, and we cannot see the values from the second bin.
How about plotting on the log scale?
Step7: Change the number of bins to 1000.
Step8: Now, let's try log-bin. Recall that when plotting histograms we can specify the edges of bins through the bins parameter. For example, we can specify the edges of bins to [1, 2, 3, ... , 10] as follows.
Step9: Here, we can specify the edges of bins in a similar way. Instead of specifying on the linear scale, we do it on the log space. Some useful resources
Step10: Now we can plot a histogram with log-bins.
Step11: KDE
Import the IMDb data.
Step12: We can plot histogram and KDE using pandas
Step13: Or using seaborn
Step14: Can you plot the histogram and KDE of the log of movie votes?
Step15: We can get a random sample using pandas' sample() function. The kdeplot() function in seaborn provides many options (like kernel types) to do KDE.
Step16: Regression
Remember Anscombe's quartet? Let's plot the four datasets and do linear regression, which can be done with scipy's linregress() function.
TODO
Step17: Actually, the dataset is included in seaborn and we can load it.
Step18: All four datasets are in this single data frame and the 'dataset' indicator is one of the columns. This is a form often called tidy data, which is easy to manipulate and plot. In tidy data, each row is an observation and columns are the properties of the observation. Seaborn makes use of the tidy form.
We can show the linear regression results for each dataset. Here is the example
Step19: What do these parameters mean? The documentation for the lmplot() is here.
Step20: 2-D scatter plot and KDE
Select movies released in the 1990s
Step21: We can draw a scatter plot of movie votes and ratings using the scatter() function.
Step22: Too many data points. We can decrease symbol size, set symbols empty, and make them transparent.
Step23: Number of votes is broadly distributed. So set the x axis to log scale.
Step24: We can combine scatter plot with 1D histogram using seaborn's jointplot() function.
Step25: Hexbin
There are too many data points. We need to bin them, which can be done by using the jointplot() and setting the kind parameter.
Step26: KDE
We can also do 2D KDE using seaborn's kdeplot() function.
Step27: Or using jointplot() by setting the kind parameter. | Python Code:
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import numpy as np
import scipy.stats as ss
import warnings
warnings.filterwarnings("ignore")
sns.set_style('white')
%matplotlib inline
Explanation: W8 Lab Assignment
End of explanation
x = np.array([1, 1, 1,1, 10, 100, 1000])
y = np.array([1000, 100, 10, 1, 1, 1, 1])
ratio = x/y
print(ratio)
Explanation: Ratio and logarithm
If you use linear scale to visualize ratios, it can be very misleading.
Let's first create some ratios.
End of explanation
plt.scatter( np.arange(len(ratio)), ratio, s=100 )
plt.plot( [0,len(ratio)], [1,1], color='k', linestyle='--', linewidth=.5 ) # plot the line ratio = 1
Explanation: Plot on the linear scale using the scatter() function.
End of explanation
plt.scatter( np.arange(len(ratio)), ratio, s=100 )
plt.yscale('log')
plt.ylim( (0.0001,10000) ) # set the scope the y axis
plt.plot( [0,len(ratio)], [1,1], color='k', linestyle='--', linewidth=.5 )
Explanation: Plot on the log scale.
End of explanation
# TODO: generate random numbers and calculate ratios between two consecutive numbers
x = np.random.rand(10)
print(x)
ratio = [ i/j for i,j in zip(x[1:],x[:-1]) ]
print(ratio)
# TODO: plot the ratios on the linear scale
plt.scatter( np.arange(len(ratio)), ratio, s=100 )
plt.plot( [0,len(ratio)], [1,1], color='k', linestyle='--', linewidth=.5 )
# TODO: plot the ratios on the log scale
plt.scatter( np.arange(len(ratio)), ratio, s=100 )
plt.yscale('log')
plt.plot( [0,len(ratio)], [1,1], color='k', linestyle='--', linewidth=.5 )
Explanation: What do you see from the two plots? Why do we need to use log scale to visualize ratios?
Let's practice this using random numbers. Generate 10 random numbers between [0,1], calculate the ratios between two consecutive numbers (the second number divides by the first, and so on), and plot the ratios on the linear and log scale.
End of explanation
# TODO: plot the histogram of movie votes
movie_df = pd.read_csv('imdb.csv', delimiter='\t')
plt.hist(movie_df['Votes'])
Explanation: Log-bin
Let's first see what the histogram looks like if we do not use the log scale.
End of explanation
# TODO: change the y scale to log
plt.hist(movie_df['Votes'])
plt.yscale('log')
Explanation: As we can see, most votes fall in the first bin, and we cannot see the values from the second bin.
How about plotting on the log scale?
End of explanation
# TODO: set the bin number to 1000
plt.hist(movie_df['Votes'], bins=1000)
plt.yscale('log')
Explanation: Change the number of bins to 1000.
End of explanation
plt.hist( movie_df['Rating'], bins=range(0,11) )
Explanation: Now, let's try log-bin. Recall that when plotting histograms we can specify the edges of bins through the bins parameter. For example, we can specify the edges of bins to [1, 2, 3, ... , 10] as follows.
End of explanation
# TODO: specify the edges of bins using np.logspace
bins = np.logspace( np.log10(min(movie_df['Votes'])), np.log10(max(movie_df['Votes'])), 20)
Explanation: Here, we can specify the edges of bins in a similar way. Instead of specifying on the linear scale, we do it on the log space. Some useful resources:
Google query: python log-bin
numpy.logspace
numpy.linspace vs numpy.logspace
Hint: since $10^{\text{start}} = \text{min_votes}$, $\text{start} = \log_{10}(\text{min_votes})$
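A tiny illustration of the difference between the two (values are exact for these arguments):
print(np.linspace(1, 1000, 4))                        # evenly spaced on a linear scale: [1, 334, 667, 1000]
print(np.logspace(0, 3, 4))                           # evenly spaced exponents: [1, 10, 100, 1000]
print(np.logspace(np.log10(1), np.log10(1000), 4))    # same thing, written like the bin edges above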
End of explanation
plt.hist(movie_df['Votes'], bins=bins)
plt.xscale('log')
# TODO: correct the plot
plt.hist(movie_df['Votes'], bins=bins, normed=True)
plt.xscale('log')
plt.yscale('log')
Explanation: Now we can plot a histogram with log-bins.
End of explanation
movie_df = pd.read_csv('imdb.csv', delimiter='\t')
movie_df.head()
Explanation: KDE
Import the IMDb data.
End of explanation
movie_df['Rating'].hist(bins=10, normed=True)
movie_df['Rating'].plot(kind='kde')
Explanation: We can plot histogram and KDE using pandas:
End of explanation
sns.distplot(movie_df['Rating'], bins=10)
Explanation: Or using seaborn:
End of explanation
# TODO: implement this using pandas
logs = np.log(movie_df['Votes'])
logs.hist(bins=10, normed=True)
logs.plot(kind='kde')
plt.xlim(0, 25)
# TODO: implement this using seaborn
sns.distplot(logs, bins=10)
Explanation: Can you plot the histogram and KDE of the log of movie votes?
End of explanation
f = plt.figure(figsize=(15,8))
plt.xlim(0, 10)
sample_sizes = [10, 50, 100, 500, 1000, 10000]
for i, N in enumerate(sample_sizes, 1):
plt.subplot(2,3,i)
plt.title("Sample size: {}".format(N))
for j in range(5):
s = movie_df['Rating'].sample(N)
sns.kdeplot(s, kernel='gau', legend=False)
Explanation: We can get a random sample using pandas' sample() function. The kdeplot() function in seaborn provides many options (like kernel types) to do KDE.
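For example, a sketch of trying a different kernel and bandwidth (assuming the seaborn version used here still accepts the kernel and bw keyword arguments, as in the cell above):
s = movie_df['Rating'].sample(500)
sns.kdeplot(s, kernel='epa', bw=0.3, label='Epanechnikov, bw=0.3')
sns.kdeplot(s, kernel='gau', bw=1.0, label='Gaussian, bw=1.0')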
End of explanation
X1 = [10.0, 8.0, 13.0, 9.0, 11.0, 14.0, 6.0, 4.0, 12.0, 7.0, 5.0]
Y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
X2 = [10.0, 8.0, 13.0, 9.0, 11.0, 14.0, 6.0, 4.0, 12.0, 7.0, 5.0]
Y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]
X3 = [10.0, 8.0, 13.0, 9.0, 11.0, 14.0, 6.0, 4.0, 12.0, 7.0, 5.0]
Y3 = [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]
X4 = [8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 19.0, 8.0, 8.0, 8.0]
Y4 = [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]
data = [ (X1,Y1),(X2,Y2),(X3,Y3),(X4,Y4) ]
plt.figure(figsize=(10,8))
for i,p in enumerate(data, 1):
X, Y = p[0], p[1]
plt.subplot(2, 2, i)
plt.scatter(X, Y, s=30, facecolor='#FF4500', edgecolor='#FF4500')
slope, intercept, r_value, p_value, std_err = ss.linregress(X, Y)
plt.plot([0, 20], [intercept, slope*20+intercept], color='#1E90FF') #plot the fitted line Y = slope * X + intercept
# TODO: display the fitted equations using the text() function.
plt.text(2, 11, r'$Y = {:1.2f} \cdot X + {:1.2f}$'.format(slope,intercept))
plt.xlim(0,20)
plt.xlabel('X'+str(i))
plt.ylabel('Y'+str(i))
Explanation: Regression
Remember Anscombe's quartet? Let's plot the four datasets and do linear regression, which can be done with scipy's linregress() function.
TODO: display the fitted equations using the text() function.
End of explanation
df = sns.load_dataset("anscombe")
df.head()
Explanation: Actually, the dataset is included in seaborn and we can load it.
End of explanation
sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df,
col_wrap=2, ci=None, palette="muted", size=4,
scatter_kws={"s": 50, "alpha": 1})
Explanation: All four datasets are in this single data frame and the 'dataset' indicator is one of the columns. This is a form often called tidy data, which is easy to manipulate and plot. In tidy data, each row is an observation and columns are the properties of the observation. Seaborn makes use of the tidy form.
We can show the linear regression results for each dataset. Here is the example:
End of explanation
sns.lmplot(x="y", y="x", col="dataset", hue="dataset", data=df,
col_wrap=2, ci=None, palette="muted", size=4,
scatter_kws={"s": 25, "alpha": 0.8})
Explanation: What do these parameters mean? The documentation for the lmplot() is here.
End of explanation
geq = movie_df['Year'] >= 1990
leq = movie_df['Year'] <= 1999
subset = movie_df[ geq & leq ]
subset.head()
Explanation: 2-D scatter plot and KDE
Select movies released in the 1990s:
End of explanation
plt.scatter(subset['Votes'], subset['Rating'])
plt.xlabel('Votes')
plt.ylabel('Rating')
Explanation: We can draw a scatter plot of movie votes and ratings using the scatter() function.
End of explanation
plt.scatter(subset['Votes'], subset['Rating'], s=20, alpha=0.6, facecolors='none', edgecolors='b')
plt.xlabel('Votes')
plt.ylabel('Rating')
Explanation: Too many data points. We can decrease symbol size, set symbols empty, and make them transparent.
End of explanation
plt.scatter(subset['Votes'], subset['Rating'], s=10, alpha=0.6, facecolors='none', edgecolors='b')
plt.xscale('log')
plt.xlabel('Votes')
plt.ylabel('Rating')
Explanation: Number of votes is broadly distributed. So set the x axis to log scale.
End of explanation
sns.jointplot(np.log(subset['Votes']), subset['Rating'])
Explanation: We can combine scatter plot with 1D histogram using seaborn's jointplot() function.
End of explanation
# TODO: draw a joint plot with hexbins and two histograms for each marginal distribution
sns.jointplot(np.log(subset['Votes']), subset['Rating'], kind='hexbin')
Explanation: Hexbin
There are too many data points. We need to bin them, which can be done by using the jointplot() and setting the kind parameter.
End of explanation
sns.kdeplot(np.log(subset['Votes']), subset['Rating'], cmap="Reds", shade=True, shade_lowest=False)
Explanation: KDE
We can also do 2D KDE using seaborn's kdeplot() function.
End of explanation
# TODO: draw a joint plot with bivariate KDE as well as marginal distributions with KDE
sns.jointplot(np.log(subset['Votes']), subset['Rating'], kind='kde', shade_lowest=False)
Explanation: Or using jointplot() by setting the kind parameter.
End of explanation |
15,350 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How many user talk comments before first attacking comment?
Step1: Anons produce far fewer comments before their first attack than registered users. This could indicate that anons are more likely to be "drive-by" vandals. Or just be a product of anons not staying around very long in general.
How long are users around before their first block event? (Change for first attack and move)
TODO | Python Code:
t = 0.5
df_first_attack = df_diffs['2015'].query('pred_recipient_score>=%s' % t).sort('rev_timestamp')\
.assign(timestamp = lambda x: x.rev_timestamp)\
.groupby(['user_text'], as_index=False).first()[['user_text', 'timestamp']]
df_counts = df_diffs['2015'].merge(df_first_attack, how = 'inner', on = 'user_text')\
.assign(delta = lambda x: (x['timestamp'] - x['rev_timestamp']).apply(lambda x: x.days) + 1)\
.query('delta >=1')
def atleast(s):
s = s.value_counts().value_counts().sort_index()
n = s.sum()
return 1 - s.cumsum()/n
s = atleast(df_counts['user_text'])
sr = atleast(df_counts.query('not author_anon')['user_text'])
sa = atleast(df_counts.query('author_anon')['user_text'])
#plt.plot(s.head(20), label = '')
plt.plot(sr.head(20), label = 'registered')
plt.plot(sa.head(20), label = 'anon')
plt.xlabel('n comments')
plt.ylabel('fraction of users who made > n comments')
plt.legend()
Explanation: How many user talk comments before first attacking comment?
End of explanation
t = 0.5
d_first_post = df_diffs['2015'].sort('rev_timestamp', inplace= False)\
.groupby(['user_text', 'author_anon'], as_index=False).first()\
[['user_text', 'author_anon', 'rev_timestamp']]
df_first_attack = df_diffs['2015'].query('pred_recipient_score>=%s' % t).sort('rev_timestamp')\
.assign(timestamp = lambda x: x.rev_timestamp)\
.groupby(['user_text'], as_index=False).first()[['user_text', 'timestamp']]
dd = d_first_post.merge(df_first_attack, how = 'inner', on = 'user_text')\
.assign(delta = lambda x: (x['timestamp'] - x['rev_timestamp']).apply(lambda x: x.days) + 1)\
.query('delta >=1')
def atleast(s):
s = s.value_counts().sort_index()
n = s.sum()
return 1 - s.cumsum()/n
s = atleast(dd['delta'])
sr = atleast(dd.query('not author_anon')['delta'])
sa = atleast(dd.query('author_anon')['delta'])
#plt.plot(s.head(20), label = '')
plt.plot(sr.head(20), label = 'registered')
plt.plot(sa.head(20), label = 'anon')
plt.legend()
plt.xlabel('n days')
plt.ylabel('fraction of users active for > n days before first attack')
plt.legend()
Explanation: Anons produce far fewer comments before their first attack than registered users. This could indicate that anons are more likely to be "drive-by" vandals. Or just be a product of anons not staying around very long in general.
How long are users around before their first block event? (Change for first attack and move)
TODO: make days active instead of days since registration
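A possible direction for that TODO, assuming rev_timestamp is a datetime column on the comment data (untested sketch):
# Sketch: count distinct calendar days on which each user posted, instead of elapsed days
days_active = df_diffs['2015'].assign(day=lambda x: x['rev_timestamp'].dt.date)\
                              .groupby('user_text')['day'].nunique()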
End of explanation |
15,351 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hacking into Evolutionary Dynamics!
This Jupyter notebook implements some of the ideas in the following two books, specifically chapters 1-5 in Evolutionary Dynamics. For a better understanding of the equations and code please consult the books and relevant papers.
This notebook contains interactive contents using Javascript, please download and execute it on Jupyter.
Step1: Evolution
Basic model
\begin{align}
\dot{x} = \frac{dx}{dt} = (r-d)x(1-x/K)
\end{align}
$r$
Step2: Selection-Mutation
Selection operates whenever different types of individuals reproduce at different rates.
\begin{align}
\dot{\vec{x}} =\vec{x}Q-\phi\vec{x}.
\end{align}
$\vec{x}$
Step3: Multiple species.
Step4: Genomes are Sequences
Quasispecies equation
\begin{align}
\dot{x_i} =\sum_{j=0}^{n} x_j ~ f_j ~ q_{ji} - \phi x_i.
\end{align}
$x$
Step5: Fitness Landscape
\begin{align}
\dot{x_0} =& x_0(f_0q-\phi)\
\dot{x_1} =& x_0f_0(1-q)+x_1-\phi x_1
\end{align}
$q = (1-u)^L$
Step6: Evolutionary Games
Two player games
\begin{align}
\dot{x_A} = x_A ~ [f_A(\vec{x}) - \phi ]\
\dot{x_B} = x_B ~ [f_B(\vec{x}) - \phi ]
\end{align}
\begin{align}
f_A(\vec{x}) = a~x_A+b~x_B\
f_B(\vec{x}) = c~x_A+d~x_B
\end{align}
Payoff matrix
Step7: Prisoner's Dilemma
Payoff matrix
Step8: Direct Reciprocity vs. Always Defect.
Tomorrow never dies!
Payoff matrix
Step9: Reactive strategies
Tit-for-Tat.
Payoff matrix | Python Code:
%%html
<div >
<iframe type="text/html" width="336" height="550" frameborder="0" allowfullscreen style="max-width:100%;float: left" src="https://lesen.amazon.de/kp/card?asin=B003UV8TC2&preview=inline&linkCode=kpe&ref_=cm_sw_r_kb_dp_MamPyb1NWT7A8" ></iframe>
</div>
<div >
<iframe type="text/html" width="336" height="550" frameborder="0" allowfullscreen style="max-width:100%;float: right" src="https://lesen.amazon.de/kp/card?asin=B00J97FFRI&preview=inline&linkCode=kpe&ref_=cm_sw_r_kb_dp_PfmPyb5ZV4AP8" ></iframe>
</div>
Explanation: Hacking into Evolutionary Dynamics!
This Jupyter notebook implements some of the ideas in the following two books, specifically chapters 1-5 in Evolutionary Dynamics. For a better understanding of the equations and code please consult the books and relevant papers.
This notebook contains interactive contents using Javascript, please download and execute it on Jupyter.
End of explanation
fig = plt.figure()
plt.close(fig)
def oneCell(r,d,max_x):
clear_output(wait=True)
t_f = 10
dt = 0.1
def int_(t,x):
dev = x*(r-d)
if max_x != None:
dev *= (1-x/max_x)
#print("dev",dev,x)
return dev
integ = integrate.ode(int_)
y = np.zeros(int(t_f/dt)+1)
x = np.zeros(int(t_f/dt)+1)
xdot = np.zeros(int(t_f/dt)+1)
integ.set_integrator("dopri5").set_initial_value(0.01)
i = 0
while integ.successful() and integ.t<t_f:
y[i] = integ.y
x[i] = integ.t
xdot[i] = int_(integ.t,y[i])
integ.integrate(integ.t+dt)
i=i+1
fig.clf()
ax = fig.gca()
ax.plot(x,y,label="population size")
ax.set_ylim(-0.6,3.0)
ax.set_xlabel("time")
ax.set_ylabel("population size")
ax2 = ax.twinx()
with sns.color_palette("PuBuGn_d",n_colors=1):
ax2.plot(x, xdot, label="derivative",linestyle='--')
ax2.set_ylabel('$\dot{x}$', rotation=0)
ax2.grid('off')
ax.legend(loc=2)
ax2.legend()
ax2.set_ylim(0.,0.25)
display(fig)
return
items = [
widgets.FloatSlider(
value=1.5,
min=0,
max=2.0,
step=0.01,
description="r",layout=widgets.Layout(width='100%', height='80px'))
,widgets.FloatSlider(
value=.0,
min=0,
max=2.0,
step=0.01,
description="d",layout=widgets.Layout(width='100%', height='80px'))]
max_k = [widgets.FloatSlider(
value=1.5,
min=1,
max=2.0,
step=0.01,
description="K",layout=widgets.Layout(width='100%', height='80px')),
widgets.Checkbox(
value=False,
description="enforce K",layout=widgets.Layout(width='100%', height='80px'))]
def call_back_r(v):
if max_k[1].value is False:
return oneCell(items[0].value,items[1].value,None)
else:
return oneCell(items[0].value,items[1].value,max_k[0].value)
box_h = widgets.VBox(items,layout=widgets.Layout(width='100%', height='80px'))
box_h_max = widgets.VBox(items,layout=widgets.Layout(width='100%', height='80px'))
box = widgets.VBox([box_h]+[widgets.HBox(max_k)])
items[0].observe(call_back_r,names='value')
items[1].observe(call_back_r,names='value')
max_k[0].observe(call_back_r,names='value')
max_k[1].observe(call_back_r,names='value')
display(box)
Explanation: Evolution
Basic model
\begin{align}
\dot{x} = \frac{dx}{dt} = (r-d)x(1-x/K)
\end{align}
$r$: reproduction rate
$d$: hazard rate
$K$: Maximum capacity
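Stripped of the widget machinery, the same model can be integrated and plotted in a few lines; a non-interactive sketch using the same imports as the widget code above (the parameter values are just one example):
# Minimal non-interactive version of the basic model, r=1.5, d=0.0, K=1.5
r, d, K = 1.5, 0.0, 1.5
solver = integrate.ode(lambda t, x: (r - d) * x * (1 - x / K))
solver.set_integrator("dopri5").set_initial_value(0.01)
ts, xs = [], []
while solver.successful() and solver.t < 10:
    ts.append(solver.t); xs.append(float(solver.y))
    solver.integrate(solver.t + 0.1)
plt.plot(ts, xs); plt.xlabel("time"); plt.ylabel("population size")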
End of explanation
fig = plt.figure()
plt.close(fig)
def twoCell(init_,rate):
clear_output(wait=True)
t_f = 10
dt = 0.1
update_rate = np.asarray(rate)
def int_(t,x):
dev = x.T.dot(update_rate)-x
return dev
integ = integrate.ode(int_)
y = np.zeros((int(t_f/dt)+1,update_rate.shape[0]))
x = np.zeros((int(t_f/dt)+1,update_rate.shape[0]))
xdot = np.zeros((int(t_f/dt)+1,update_rate.shape[0]))
integ.set_integrator("dopri5").set_initial_value(np.asarray(init_))
i = 0
while integ.successful() and integ.t<t_f:
y[i,:] = integ.y
x[i,:] = integ.t
xdot[i,:] = int_(integ.t,y[i,:])
integ.integrate(integ.t+dt)
i=i+1
fig.clf()
ax = fig.gca()
with sns.color_palette("PuBuGn_d",n_colors=x.shape[1]):
for ind_ in range(x.shape[1]):
ax.plot(x[:,ind_], y[:,ind_], label="type "+str(ind_ +1))
ax.set_ylim(-0.1,1.1)
ax.set_xlabel("time")
ax.set_ylabel("population ratio")
ax2 = ax.twinx()
with sns.color_palette("PuBuGn_d",n_colors=x.shape[1]):
for ind_ in range(x.shape[1]):
ax2.plot(x[:,ind_], xdot[:,ind_], label="d type "+str(ind_ +1),linestyle='--')
ax2.set_ylabel('$\dot{x}$', rotation=0)
ax2.grid('off')
ax.legend(ncol=x.shape[1])
ax2.legend(loc=4,ncol=x.shape[1])
display(fig)
return
items_mute = [
widgets.IntText(
value=2,
min=2,
max=5.0,
description="r",layout=widgets.Layout(width='50%', height='80px'))
,widgets.Button(
description="submit")]
def updateplot(v,objects,status_label):
init = []
rates = []
for ind_,obj in enumerate(objects):
if ind_ < len(objects)-1:
init.append(obj[0].value)
else:
if sum(init)>1:
status_label.value = "Initial rates should sum to <1"
return
else:
status_label.value = ""
init.append(1-sum(init))
rate_ = []
for j in range(1,len(objects)):
rate_.append(obj[j].value)
if sum(rate_)>1:
status_label.value = "sum of mutation rates should sum to <1"
return
else:
status_label.value = ""
rate_.append(1-sum(rate_))
rates.append(rate_)
init = np.asarray(init)
rates = np.asarray(rates)
twoCell(init,rates)
return
def call_back_mute(count,objects,status_label,updateplot):
dsps = []
for i in range(count):
if i < count-1:
specie = [widgets.FloatSlider(
value=1.0/count,
min=0,
max=1.0,
step=0.01,
description="init "+str(i+1),layout=widgets.Layout(width='100%', height='80px'))]
else:
specie = [widgets.Label(layout=widgets.Layout(width='100%', height='80px'))]
for j in range(count-1):
wid = widgets.FloatSlider(
value=1 if j == i else 0,
min=0,
max=1.0,
step=0.01,
description="rate_"+str(i+1)+"_"+str(j+1),layout=widgets.Layout(width='100%', height='80px'))
wid.observe(updateplot,names='value')
specie.append(wid)
specie[0].observe(updateplot,names='value')
box_h = widgets.HBox(specie,layout=widgets.Layout(width='100%', height='80px'))
objects.append(specie)
dsps.append(box_h)
status_label = widgets.Label()
box_v = widgets.VBox(dsps+[status_label],layout=widgets.Layout(width='100%', height='80px'))
display(box_v)
updateplot("")
return objects
#items_mute[1].on_click(call_back_mute)
#box_h = widgets.HBox(items_mute,layout=widgets.Layout(width='100%', height='80px'))
#display(box_h)
objects = []
status_label = widgets.Label()
_ = call_back_mute(2,objects,status_label,lambda x:updateplot(x,objects,status_label))
Explanation: Selection-Mutation
Selection operates whenever different types of individuals reproduce at different rates.
\begin{align}
\dot{\vec{x}} =\vec{x}Q-\phi\vec{x}.
\end{align}
$\vec{x}$: population ratio of type $i$.
$Q$: Mutation matrix.
$\phi$: average fitness
End of explanation
objects_1 = []
status_label_1 = widgets.Label()
_ = call_back_mute(3,objects_1,status_label_1,lambda x:updateplot(x,objects_1,status_label_1))
Explanation: Multiple species.
End of explanation
fig = plt.figure()
plt.close(fig)
def genomeSequence(N,drich_alpha,point_mut):
np.random.seed(0)
clear_output(wait=True)
if point_mut is not None:
L,u = point_mut
t_f = 10
dt = 0.1
x_ = np.random.uniform(size=(N))
x_ = x_/x_.sum()
f = np.random.lognormal(size=(N))
if drich_alpha is not None:
Q = np.zeros((N,N))
for j in range(N):
Q[j,:] = np.random.dirichlet(np.roll(np.logspace(1,drich_alpha+1,N)[::-1], j), 1)
elif point_mut is not None:
Q = np.zeros((N,N))
for j in range(N):
for i in range(N):
Q[j,i] = (u**(np.abs(j-i)))*((1-u)**(L-np.abs(j-i)))
else:
print("One of the two arguments should not be None")
return
def int_(t,x):
x = np.asarray(x).reshape((x.shape[0],1))
dev = np.zeros(x.shape[0])
mean = f.dot(x)
for i in range(x.shape[0]):
for j in range(x.shape[0]):
dev[i] += f[j]*Q[j,i]*x[j]
dev[i] -= mean*x[i]
return dev
integ = integrate.ode(int_)
integ.set_integrator("dopri5").set_initial_value(np.asarray(x_))
y = np.zeros((int(t_f/dt)+1,x_.shape[0]))
x = np.zeros((int(t_f/dt)+1,x_.shape[0]))
xdot = np.zeros((int(t_f/dt)+1,x_.shape[0]))
i = 0
while integ.successful() and integ.t<t_f:
y[i,:] = integ.y
x[i,:] = integ.t
xdot[i,:] = int_(integ.t,y[i,:])
integ.integrate(integ.t+dt)
i=i+1
fig.clf()
ax = fig.gca()
with sns.color_palette("PuBuGn_d",n_colors=2):
for ind_ in range(x.shape[1]):
ax.plot(x[:,ind_], y[:,ind_], label=("$f_%d$: %.2f" % (ind_ +1,f[ind_])))
ax.set_ylim(-0.1,1.1)
ax.set_xlabel("time")
ax.set_ylabel("Quasi specie")
ax2 = ax.twinx()
with sns.color_palette("PuBuGn_d",n_colors=2):
ax2.plot(np.arange(0,t_f+dt,dt),y.dot(f), label="fitness ",linestyle='-.')
ax2.set_ylabel('$f$', rotation=0)
ax2.set_ylim(0,3)
ax2.grid('off')
ax.legend(ncol=min(4,x.shape[1]))
ax2.legend(loc=4)
display(fig)
return
items_gene = [
widgets.IntSlider(
value=2,
min=2,
max=6,
description="# Genomes",layout=widgets.Layout(width='80%', height='300px')),
widgets.IntSlider(
value=10,
min=7,
max=15,
description="Max Length",layout=widgets.Layout(width='80%', height='230px')),
widgets.FloatSlider(
value=0.1,
min=0.01,
max=0.3,
step=0.05,
description="u",layout=widgets.Layout(width='80%', height='100px'))]
def _GeneCall(v):
return genomeSequence(items_gene[0].value,None,(items_gene[1].value,items_gene[2].value))
box_h = widgets.VBox(items_gene,layout=widgets.Layout(width='100%', height='80px'))
items_gene[0].observe(_GeneCall,names='value')
items_gene[1].observe(_GeneCall,names='value')
items_gene[2].observe(_GeneCall,names='value')
display(box_h)
_GeneCall(0)
Explanation: Genomes are Sequences
Quasispecies equation
\begin{align}
\dot{x_i} =\sum_{j=0}^{n} x_j ~ f_j ~ q_{ji} - \phi x_i.
\end{align}
$x$: population ratio of type $i$.
$f_i$: fitness for type $i$.
$q_{ji}$: probability of mutation from type $j$ to $i$
$q_{ji} = u^{h_{ij}}(1-u)^{L-h_{ij}}$, where $L$ is the length of the genome and $u$ is the mutation probability at a single position.
End of explanation
fig = plt.figure()
plt.close(fig)
def genomeSequenceQ(f_0,u,L):
np.random.seed(0)
clear_output(wait=True)
t_f = 10
dt = 0.1
x_ = np.random.uniform(size=2)
x_ = x_/x_.sum()
f = np.array([f_0,1])
q = (1-u)**L
def int_(t,x):
mean = f[0]*x[0]+f[1]*x[1]
dev = np.zeros(x.shape[0])
dev[0] = x[0]*(f[0]*q - mean)
dev[1] = x[0]*f[0]*(1-q)+x[1] - mean*x[1]
return dev
integ = integrate.ode(int_)
integ.set_integrator("dopri5").set_initial_value(np.asarray(x_))
y = np.zeros((int(t_f/dt)+1,x_.shape[0]))
x = np.zeros((int(t_f/dt)+1,x_.shape[0]))
xdot = np.zeros((int(t_f/dt)+1,x_.shape[0]))
i = 0
while integ.successful() and integ.t<t_f:
y[i,:] = integ.y
x[i,:] = integ.t
xdot[i,:] = int_(integ.t,y[i,:])
integ.integrate(integ.t+dt)
i=i+1
fig.clf()
ax = fig.gca()
with sns.color_palette("PuBuGn_d",n_colors=2):
for ind_ in range(x.shape[1]):
ax.plot(x[:,ind_], y[:,ind_], label=("$f_%d$: %.2f" % (ind_ ,f[ind_])))
ax.set_ylim(-0.1,1.1)
ax.set_xlabel("time")
ax.set_ylabel("Quasi specie")
ax2 = ax.twinx()
with sns.color_palette("PuBuGn_d",n_colors=2):
ax2.plot(np.arange(0,t_f+dt,dt),y.dot(f), label="fitness ",linestyle='-.')
ax2.set_ylabel('$f$', rotation=0)
ax2.set_ylim(0,10)
ax2.grid('off')
ax.legend(ncol=min(4,x.shape[1]))
ax2.legend(loc=4)
display(fig)
return q
items_geneQ = [
widgets.IntSlider(
value=5,
min=2,
max=12,
description="Genome Length",layout=widgets.Layout(width='50%', height='80px')),
widgets.FloatSlider(
value=0.05,
min=0.01,
max=0.8,
step = 0.05,
description="mutatation rate",layout=widgets.Layout(width='50%', height='80px')),
widgets.FloatSlider(
value=1,
min=0.0,
max=40,
step=0.05,
description="max_f",layout=widgets.Layout(width='50%', height='80px'))]
def _GeneCallQ(v):
q_ = genomeSequenceQ(items_geneQ[2].value,items_geneQ[1].value,items_geneQ[0].value)
label.value= "f_0 q = %.2f" % (q_*items_geneQ[2].value)
return
box_h = widgets.VBox(items_geneQ,layout=widgets.Layout(width='100%', height='120px'))
label = widgets.Label()
box_v = widgets.VBox([box_h,label])
items_geneQ[0].observe(_GeneCallQ,names='value')
items_geneQ[1].observe(_GeneCallQ,names='value')
items_geneQ[2].observe(_GeneCallQ,names='value')
display(box_v)
_GeneCallQ(0)
%%html
<center><img height="100%" width="100%" src="./Nature-coop/mutation_rates.png"/>
</center>
Explanation: Fitness Landscape
\begin{align}
\dot{x_0} =& x_0(f_0q-\phi)\
\dot{x_1} =& x_0f_0(1-q)+x_1-\phi x_1
\end{align}
$q = (1-u)^L$: probability of exact copy of master genome.
$u$: probability of a mutation on one gene.
$L$: length of genome.
End of explanation
fig = plt.figure()
plt.close(fig)
def evolutionaryGame(x_,f,labels = None):
np.random.seed(0)
clear_output(wait=True)
t_f = 10
dt = 0.1
x_ = np.asarray(x_)
x_ = np.atleast_2d(x_).T
f = np.asarray(f)
def int_(t,x):
mean = x.T.dot(f.dot(x))
dev = x*(f.dot(x)-mean)
return dev
integ = integrate.ode(int_)
integ.set_integrator("dopri5").set_initial_value(np.asarray(x_))
y = np.zeros((int(t_f/dt)+1,x_.shape[0]))
x = np.zeros((int(t_f/dt)+1,x_.shape[0]))
xdot = np.zeros((int(t_f/dt)+1,x_.shape[0]))
i = 0
while integ.successful() and integ.t<t_f:
y[i,:] = integ.y[:,0]
x[i,:] = integ.t
xdot[i,:] = int_(integ.t,y[i,:])
integ.integrate(integ.t+dt)
i=i+1
fig.clf()
ax = fig.gca()
with sns.color_palette("PuBuGn_d",n_colors=2):
for ind_ in range(x.shape[1]):
ax.plot(x[:,ind_], y[:,ind_], label="Type: %d" % (ind_+1) if labels is None else labels[ind_])
ax.set_ylim(-0.1,1.1)
ax.set_xlabel("time")
ax.set_ylabel("Quasi specie")
ax.legend(ncol=min(4,x.shape[1]))
display(fig)
items_strat = [
widgets.IntText(
value=2,
min=2,
max=5.0,
description="r",layout=widgets.Layout(width='50%', height='80px'))
,widgets.Button(
description="submit")]
def _EvolutionaryGames(v):
init = []
payoff = []
for ind_,obj in enumerate(objects_strat):
if ind_ < len(objects_strat)-1:
init.append(obj[0].value)
else:
if sum(init)>1:
status_labelstrat.value = "Initial rates should sum to <1"
return
else:
status_labelstrat.value = ""
init.append(1-sum(init))
rate_ = []
for j in range(0,len(objects_strat)):
rate_.append(obj[j+1].value)
payoff.append(rate_)
init = np.asarray(init)
payoff = np.asarray(payoff)
if len(objects_strat)==3:
status_labelstrat.value = "Determinant: %.2f" % linalg.det(payoff)
return evolutionaryGame(init,payoff)
objects_strat = []
status_labelstrat = None
box_vstrat = None
def call_back_mute(v):
global box_vstrat, status_labelstrat
if box_vstrat is not None:
box_vstrat.close()
count = items_strat[0].value
if count <2:
return
dsps = []
objects_strat[:] = []
for i in range(count):
if i < count-1:
specie = [widgets.FloatSlider(
value=1.0/count,
min=0,
max=1.0,
step=0.01,
description="init "+str(i+1),layout=widgets.Layout(width='100%', height='80px'))]
else:
specie = [widgets.Label(layout=widgets.Layout(width='100%', height='80px'))]
for j in range(count):
wid = widgets.IntSlider(
value=1,
min=-1,
max=5.0,
step=1,
description=str(chr(96+i*count+j+1)),layout=widgets.Layout(width='100%', height='80px'))
wid.observe(_EvolutionaryGames,names='value')
specie.append(wid)
specie[0].observe(_EvolutionaryGames,names='value')
box_h = widgets.HBox(specie,layout=widgets.Layout(width='100%', height='80px'))
objects_strat.append(specie)
dsps.append(box_h)
status_labelstrat = widgets.Label()
box_vstrat = widgets.VBox(dsps+[status_labelstrat],layout=widgets.Layout(width='100%', height='80px'))
display(box_vstrat)
_EvolutionaryGames("")
items_strat[1].on_click(call_back_mute)
box_h = widgets.HBox(items_strat,layout=widgets.Layout(width='100%', height='80px'))
display(box_h)
Explanation: Evolutionary Games
Two player games
\begin{align}
\dot{x_A} = x_A ~ [f_A(\vec{x}) - \phi ]\
\dot{x_B} = x_B ~ [f_B(\vec{x}) - \phi ]
\end{align}
\begin{align}
f_A(\vec{x}) = a~x_A+b~x_B\
f_B(\vec{x}) = c~x_A+d~x_B
\end{align}
Payoff matrix:
\begin{align}
\begin{pmatrix}
a & b \
c & d \
\end{pmatrix}
\end{align}
In the following demo you can set the values of $a, b, c$ and $d$ and see how they determine the outcome of the game. You can also run the demo with a different number of players.
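Besides the sliders, the evolutionaryGame helper defined above can also be called directly. For instance, a classic Hawk–Dove payoff (choosing V=2 and C=4 purely for illustration, so a=(V-C)/2=-1, b=V=2, c=0, d=V/2=1) converges to a mixed population:
evolutionaryGame([0.9, 0.1], [[-1, 2], [0, 1]], labels=["Hawk", "Dove"])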
End of explanation
R = 3
S = 0
T = 5
P = 1
payoff = [[R,S],[T,P]]
evolutionaryGame([0.6,0.4],payoff,["Cooperate","Defect"])
Explanation: Prisoner's Dilemma
Payoff matrix:
\begin{align}
\begin{pmatrix}
& C & D\
C & 3 & 0 \
D & 5 & 1 \
\end{pmatrix}
\end{align}
The Nash equilibrium in this game is to always defect (D, D).
End of explanation
def _EvolutionaryGamesProb(v):
R = 3
S = 0
T = 5
P = 1
m_ = prob_tomorrow.value
payoff = [[R*m_,S+(m_-1)*P],[T+(m_-1)*P,m_*P]]
return evolutionaryGame([0.99,0.01],payoff,["GRIM","ALLD"])
prob_tomorrow = widgets.FloatSlider(
value=1,
min=0,
max=10.0,
description="m_",layout=widgets.Layout(width='100%', height='80px'))
prob_tomorrow.observe(_EvolutionaryGamesProb,names="value")
display(prob_tomorrow)
Explanation: Direct Reciprocity vs. Always Defect.
Tomorrow never dies!
Payoff matrix:
\begin{align}
\begin{pmatrix}
& GRIM & ALLD\
GRIM & m3 & 0+(m-1)1 \
ALLD & 5+(m-1)1 & m1 \
\end{pmatrix}
\end{align}
Where $m$ is the expected number of rounds for which the game will be repeated.
if $3m > 5+(m-1)$, i.e. $2m > 4$ or $m > 2$, then GRIM is a strict Nash equilibrium when competing with ALLD.
In terms of evolutionary dynamics, if the whole population uses GRIM, then ALLD cannot invade: selection opposes ALLD at low frequency. GRIM is stable against invasion by ALLD if the number of rounds, $m$, exceeds a critical value:
\begin{align}
m> \frac{T-P}{R-P} = \frac{4}{2} = 2
\end{align}
In the following widget you can play with the value of $m$ to see how the two strategies perform.
End of explanation
p_1 = widgets.FloatSlider(
value=0.5,
min=0,
max=1.0,
description="p_1",layout=widgets.Layout(width='100%', height='80px'))
q_1 = widgets.FloatSlider(
value=0.5,
min=0,
max=1.0,
description="q_1",layout=widgets.Layout(width='100%', height='80px'))
user_1 = widgets.HBox([p_1,q_1],layout=widgets.Layout(width='100%', height='80px'))
p_2 = widgets.FloatSlider(
value=0.5,
min=0,
max=1.0,
description="p_2",layout=widgets.Layout(width='100%', height='80px'))
q_2 = widgets.FloatSlider(
value=0.5,
min=0,
max=1.0,
description="q_2",layout=widgets.Layout(width='100%', height='80px'))
user_2 = widgets.HBox([p_2,q_2],layout=widgets.Layout(width='100%', height='80px'))
box_pq = widgets.VBox([user_1,user_2],layout=widgets.Layout(width='100%', height='80px'))
def compute_expected_dist(p_1_v,p_2_v,q_1_v,q_2_v):
v_ = np.array([[p_1_v*p_2_v, p_1_v*(1-p_2_v), (1-p_1_v)*p_2_v, (1-p_1_v)*(1-p_2_v)],
[q_1_v*p_2_v, q_1_v*(1-p_2_v), (1-q_1_v)*p_2_v, (1-q_1_v)*(1-p_2_v)],
[p_1_v*q_2_v, p_1_v*(1-q_2_v), (1-p_1_v)*q_2_v, (1-p_1_v)*(1-q_2_v)],
[q_1_v*q_2_v, q_1_v*(1-q_2_v), (1-q_1_v)*q_2_v, (1-q_1_v)*(1-q_2_v)]]).T
w,vl = linalg.eig(v_)
return vl[:,0].real
def _EvolutionaryGamesGen(v):
p_1_v = p_1.value
p_2_v = p_2.value
q_1_v = q_1.value
q_2_v = q_2.value
p_1_1 = compute_expected_dist(p_1_v,p_1_v,q_1_v,q_1_v)
p_1_2 = compute_expected_dist(p_1_v,p_2_v,q_1_v,q_2_v)
p_2_1 = compute_expected_dist(p_2_v,p_1_v,q_2_v,q_1_v)
p_2_2 = compute_expected_dist(p_2_v,p_2_v,q_2_v,q_2_v)
R = 3
S = 0
T = 5
P = 1
#print(p_1_1)
payoff = [[R*p_1_1[0]+S*p_1_1[1]+T*p_1_1[2]+P**p_1_1[3], R*p_1_2[0]+S*p_1_2[1]+T*p_1_2[2]+P**p_1_2[3]],
[R*p_2_1[0]+S*p_2_1[1]+T*p_2_1[2]+P**p_2_1[3], R*p_2_2[0]+S*p_2_2[1]+T*p_2_2[2]+P**p_2_2[3]]]
payoff = np.array(payoff)
return evolutionaryGame([0.4,0.6],payoff,['Policy 1','Policy 2'])
p_1.observe(_EvolutionaryGamesGen,names="value")
p_2.observe(_EvolutionaryGamesGen,names="value")
q_1.observe(_EvolutionaryGamesGen,names="value")
q_2.observe(_EvolutionaryGamesGen,names="value")
display(box_pq)
Explanation: Reactive strategies
Tit-for-Tat.
Payoff matrix:
\begin{align}
\begin{pmatrix}
& CC & CD & DC & DD\
CC & p_1p_2 & p_1(1-p_2) & (1-p_1)p_2 & (1-p_1)(1-p_2) \
CD & q_1p_2 & q_1(1-p_2) & (1-q_1)p_2 & (1-q_1)(1-p_2) \
DC & p_1q_2 & p_1(1-q_2) & (1-p_1)q_2 & (1-p_1)(1-q_2) \
DD & q_1q_2 & q_1(1-q_2) & (1-q_1)q_2 & (1-q_1)(1-q_2) \
\end{pmatrix}
\end{align}
$p_1$: probability that player 1 will cooperate given that player 2 cooperated in previous round.
$p_2$: probability that player 2 will cooperate given that player 1 cooperated in previous round.
$q_1$: probability that player 1 will cooperate given that player 2 defected in previous round.
$q_2$: probability that player 2 will cooperate given that player 1 defected in previous round.
End of explanation |
15,352 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Developing a Web Crawler with Python 3
By Terrill Yang (Github
Step1: 2. Python sets
In the crawler, to avoid re-crawling pages we have already visited, we put the URL of every crawled page into a set. Before crawling any URL we first check whether it is already in the set: if it is, we skip it; if not, we add it to the set and then crawl the page.
Python provides the set data structure. A set is unordered and contains no duplicate elements. It is typically used to test whether an element is already present, or to deduplicate a collection of items. As in mathematical set theory, it supports intersection, union, difference and symmetric difference.
A set can be created with the set() function or with curly braces {}. An empty set, however, cannot be created with braces — only with set() — because empty braces create a dictionary. The example below is again taken from the official Python documentation.
Step2: In our crawler we only use the fast membership test and the union operation.
3. Python regular expressions
In the crawler, the data we fetch back is a string whose content is the page's HTML. We need to extract from this string every URL the page mentions, so the crawler needs some basic string-processing ability, and regular expressions handle this task easily.
References
1. A 30-minute introduction to regular expressions (GitBook)
2. The Python regular expression section on w3cschool
3. The Python Regular Expression Guide
Regular expressions are extremely powerful and many of the patterns used in practice are quite clever, so real fluency takes a fair amount of practice. For our purposes, though, we only need to know how to use a regular expression to find all the URLs in a string. If you really want to skip this part, there are plenty of ready-made URL-matching expressions online that you can simply reuse.
A brief introduction to regular expressions
Courtesy of AstralWind - cnblogs
Regular expressions are not part of Python itself. They are a powerful string-processing tool with their own syntax and an independent matching engine; they may be less efficient than the built-in str methods, but they are far more powerful. Thanks to this, the regex syntax is the same across languages that provide it — implementations differ only in how much of the syntax they support, and the unsupported parts are usually the rarely used ones. If you have used regular expressions in another language, a quick skim is enough to get going here.
正则表达式的大致匹配过程是:依次拿出表达式和文本中的字符比较,如果每一个字符都能匹配,则匹配成功;一旦有匹配不成功的字符则匹配失败。如果表达式中有量词或边界,这个过程会稍微有一些不同,但也是很好理解的,看下图中的示例以及自己多使用几次就能明白。
正则表达式通常用于在文本中查找匹配的字符串。Python里数量词默认是贪婪的(在少数语言里也可能是默认非贪婪),总是尝试匹配尽可能多的字符;非贪婪的则相反,总是尝试匹配尽可能少的字符。例如:正则表达式"ab*"如果用于查找"abbbc",将找到"abbb"。而如果使用非贪婪的数量词"ab*?",将找到"a"。
下图列出了Python支持的正则表达式元字符和语法:
开始使用re
Python通过re模块提供对正则表达式的支持。使用re的一般步骤是先将正则表达式的字符串形式编译为Pattern实例,然后使用Pattern实例处理文本并获得匹配结果(一个Match实例),最后使用Match实例获得信息,进行其他的操作。
Step3: re提供了众多模块方法用于完成正则表达式的功能。这些方法可以使用Pattern实例的相应方法替代,唯一的好处是少写一行re.compile()代码,但同时也无法复用编译后的Pattern对象。这些方法将在Pattern类的实例方法部分一起介绍。如上面这个例子可以简写为:
Step5: Compile
re.compile(strPattern[, flag])
Step6: re模块还提供了一个方法escape(string),用于将string中的正则表达式元字符如*/+/?等之前加上转义符再返回,在需要大量匹配元字符时有那么一点用。
Match
Match对象是一次匹配的结果,包含了很多关于此次匹配的信息,可以使用Match提供的可读属性或方法来获取这些信息。
属性
string
Step7: Pattern
Pattern对象是一个编译好的正则表达式,通过Pattern提供的一系列方法可以对文本进行匹配查找。
Pattern不能直接实例化,必须使用re.compile()进行构造。
Pattern提供了几个可读属性用于获取表达式的相关信息:
pattern
Step8: re模块方法
match(string[, pos[, endpos]]) | re.match(pattern, string[, flags])
这个方法将从string的pos下标处起尝试匹配pattern;如果pattern结束时仍可匹配,则返回一个Match对象;如果匹配过程中pattern无法匹配,或者匹配未结束就已到达endpos,则返回None。
pos和endpos的默认值分别为0和len(string);re.match()无法指定这两个参数,参数flags用于编译pattern时指定匹配模式。
注意:这个方法并不是完全匹配。当pattern结束时若string还有剩余字符,仍然视为成功。想要完全匹配,可以在表达式末尾加上边界匹配符'$'。
Step9: search(string[, pos[, endpos]]) | re.search(pattern, string[, flags])
这个方法用于查找字符串中可以匹配成功的子串。从string的pos下标处起尝试匹配pattern,如果pattern结束时仍可匹配,则返回一个Match对象;若无法匹配,则将pos加1后重新尝试匹配;直到pos=endpos时仍无法匹配则返回None。
pos和endpos的默认值分别为0和len(string));re.search()无法指定这两个参数,参数flags用于编译pattern时指定匹配模式。
Step10: split(string[, maxsplit]) | re.split(pattern, string[, maxsplit])
按照能够匹配的子串将string分割后返回列表。maxsplit用于指定最大分割次数,不指定将全部分割。
Step11: findall(string[, pos[, endpos]]) | re.findall(pattern, string[, flags])
搜索string,以列表形式返回全部能匹配的子串。
Step12: finditer(string[, pos[, endpos]]) | re.finditer(pattern, string[, flags])
搜索string,返回一个顺序访问每一个匹配结果(Match对象)的迭代器。
Step13: sub(repl, string[, count]) | re.sub(pattern, repl, string[, count])
使用repl替换string中每一个匹配的子串后返回替换后的字符串。
当repl是一个字符串时,可以使用\id或\g<id>、\g<name>引用分组,但不能使用编号0。
当repl是一个方法时,这个方法应当只接受一个参数(Match对象),并返回一个字符串用于替换(返回的字符串中不能再引用分组)。
count用于指定最多替换次数,不指定时全部替换。
Step14: subn(repl, string[, count]) |re.sub(pattern, repl, string[, count])
返回 (sub(repl, string[, count]), 替换次数)。
Step15: 4. Python网络爬虫Ver 1.0 alpha
With the groundwork above in place, we can finally start writing the real crawler. I chose Fenng's Startup News as the entry point — I assume Fenng, having just raised 70 million USD, won't mind our crawlers paying his little site a visit. This crawler barely runs: because it lacks exception handling it can only crawl static pages, cannot tell static from dynamic content, and does not know which situations it should skip, so it falls over after working for a short while. | Python Code:
from collections import deque
queue = deque(["Eric", "John", "Michael"])
queue.append("Terry") # Terry 入队
queue.append("Graham") # Graham 入队
queue.pop() # 队尾元素出队
queue.popleft() # 队首元素出队
queue # 队列中剩下的元素
Explanation: Developing a Web Crawler with Python 3
By Terrill Yang (Github: https://github.com/yttty)
Adapted from the Zhihu column post "All you need: a collection of Python 3.x crawler learning resources".
This part comes from "Teach yourself web crawling with Python 3 from scratch (2): an introduction to the data structures used, and crawler Ver 1.0 alpha".
Developing a Web Crawler with Python 3 - Chapter 02
Last time, we learned how to
write out the crawler's main structure in pseudocode;
fetch the page at a given URL with Python's urllib.request library;
convert ordinary strings into URL-compliant strings with Python's urllib.parse library.
This time we start implementing every part of that pseudocode in Python. Since the series is titled "from scratch", we first introduce the two data structures we will use: queues and sets. For the regular-expression part I give a few references I like, to be expanded when I have more time.
1. Python queues
The crawler uses the breadth-first search (BFS) algorithm, and the data structure this algorithm relies on is a queue.
A Python list is functionally sufficient as a queue: append() adds to the tail, array-style indexing reads the head, and pop(0) removes the head. But a list is an inefficient queue, because pop(0) at the front of a list is costly, so the official Python documentation recommends collections.deque for efficient queue operations.
(The following example is taken from the official documentation.)
End of explanation
basket = {'apple', 'orange', 'apple', 'pear', 'orange', 'banana'}
print(basket) # demonstrates deduplication
print('orange in basket? ', 'orange' in basket) # fast membership test
print('crabgrass in basket? ', 'crabgrass' in basket)
# The operations between two sets are shown below.
a = set('abracadabra')
b = set('alacazam')
print(a)
print(b)
print(a & b) # 交集
print(a | b) # union
print(a - b) # difference
print(a ^ b) # symmetric difference
Explanation: 2. Python sets
In the crawler, to avoid re-crawling pages we have already visited, we put the URL of every crawled page into a set. Before crawling any URL we first check whether it is already in the set: if it is, we skip it; if not, we add it to the set and then crawl the page.
Python provides the set data structure. A set is unordered and contains no duplicate elements. It is typically used to test whether an element is already present, or to deduplicate a collection of items. As in mathematical set theory, it supports intersection, union, difference and symmetric difference.
创建一个set可以用 set() 函数或者花括号 {} . 但是创建一个空集是不能使用一个花括号的, 只能用 set() 函数. 因为一个空的花括号创建的是一个字典数据结构. 以下同样是Python官网提供的示例.
End of explanation
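A quick check of the empty-set pitfall mentioned above; this is a small added sketch, not part of the original tutorial, and the variable names are purely illustrative.
empty_braces = {}          # empty braces create a dict, not a set
empty_set = set()          # the empty set has to be created with set()
print(type(empty_braces))  # <class 'dict'>
print(type(empty_set))     # <class 'set'>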
import re
# compile the regular expression into a Pattern object
pattern = re.compile(r'hello')
# use the Pattern to match the text; None is returned when nothing matches
match = pattern.match('hello world!')
if match:
    # use the Match object to read the matched group
    print(match.group())
Explanation: In our crawler we only use the fast membership test and the union operation of sets.
3. Regular expressions in Python
The data the crawler brings back is a string whose content is the html source of the page. We need to extract from that string every url the page mentions, so the crawler needs some basic string-processing ability, and regular expressions handle this task with ease.
References
1. The 30-minute introductory tutorial to regular expressions (GitBook)
2. The Python regular-expression section on w3cschool
3. The Python regular-expression guide (Python正则表达式指南)
Although regular expressions are exceptionally powerful and many of the patterns used in practice are quite ingenious, real fluency with them takes a fairly long period of practice. We, however, only need to know how to use a regular expression to pick every url out of a string. If you really want to skip this part, plenty of ready-made url-matching expressions can be found online and used directly.
A brief introduction to regular expressions
Courtesy of AstralWind - cnblogs
Regular expressions are not part of Python. They are a powerful tool for processing strings, with their own syntax and an independent matching engine; they may be less efficient than the built-in str methods, but they are far more expressive. Thanks to this, the syntax is the same in every language that offers regular expressions; the differences lie only in how much of the syntax each implementation supports, and the unsupported parts are usually the rarely used ones. If you have already used regular expressions in another language, a quick read is enough to get going.
The rough matching process is: characters from the expression and from the text are compared one by one; if every character matches, the match succeeds, and as soon as one character fails to match, the match fails. When the expression contains quantifiers or boundaries the process is slightly different, but it is still easy to understand; look at the examples and try it a few times yourself and it becomes clear.
Regular expressions are usually used to find matching substrings in text. In Python, quantifiers are greedy by default (a few languages default to non-greedy): they always try to match as many characters as possible, while non-greedy quantifiers do the opposite and match as few characters as possible. For example, the expression "ab*" applied to "abbbc" finds "abbb", whereas the non-greedy "ab*?" finds "a".
The figure below lists the regular-expression metacharacters and syntax supported by Python:
Getting started with re
Python supports regular expressions through the re module. The general workflow with re is to first compile the string form of the regular expression into a Pattern instance, then use the Pattern instance to process the text and obtain a match result (a Match instance), and finally use the Match instance to extract information and carry out other operations.
End of explanation
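As a small added illustration of the greedy versus non-greedy behaviour described above (a sketch with made-up sample strings, not part of the original text):
import re
print(re.search(r'ab*', 'abbbc').group())   # 'abbb' - greedy, matches as much as possible
print(re.search(r'ab*?', 'abbbc').group())  # 'a'    - non-greedy, matches as little as possible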
m = re.match(r'hello', 'hello world!')
print(m.group())
Explanation: The re module provides a number of module-level functions that offer the same functionality. They can be used instead of the corresponding methods of a Pattern instance; the only benefit is saving the one line of re.compile(), at the cost of not being able to reuse the compiled Pattern object. These functions are introduced together with the instance methods of the Pattern class. The example above, for instance, can be shortened to:
End of explanation
a = re.compile(r"""\d +  # the integral part
                   \.    # the decimal point
                   \d *  # some fractional digits""", re.X)
b = re.compile(r"\d+\.\d*")
Explanation: Compile
re.compile(strPattern[, flag]): this is the factory method of the Pattern class; it compiles a regular expression given in string form into a Pattern object. The second argument, flag, is the matching mode; several values can be combined with the bitwise-or operator '|', for example re.I | re.M. The mode can also be specified inside the regex string itself: re.compile('pattern', re.I | re.M) and re.compile('(?im)pattern') are equivalent.
The available values are:
re.I (re.IGNORECASE): ignore case (the full spelling is given in parentheses, likewise below)
re.M (MULTILINE): multi-line mode, changes the behaviour of '^' and '$' (see the figure above)
re.S (DOTALL): "dot matches anything" mode, changes the behaviour of '.'
re.L (LOCALE): makes the predefined character classes \w \W \b \B \s \S depend on the current locale
re.U (UNICODE): makes the predefined character classes \w \W \b \B \s \S \d \D depend on the character properties defined by Unicode
re.X (VERBOSE): verbose mode. In this mode the regular expression can span several lines, whitespace is ignored, and comments can be added. The following two regular expressions are equivalent:
End of explanation
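A small added sketch of combining flags and of the equivalent inline form mentioned above; the pattern and sample string are illustrative only.
import re
p1 = re.compile('hello', re.I | re.M)    # modes passed as flags
p2 = re.compile('(?im)hello')            # the same modes written inside the pattern
print(p1.search('HELLO world').group())  # 'HELLO'
print(p2.search('HELLO world').group())  # 'HELLO'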
m = re.match(r'(\w+) (\w+)(?P<sign>.*)', 'hello world!')
print("m.string:", m.string)
print("m.re:", m.re)
print("m.pos:", m.pos)
print("m.endpos:", m.endpos)
print("m.lastindex:", m.lastindex)
print("m.lastgroup:", m.lastgroup)
print("m.group(1,2):", m.group(1, 2))
print("m.groups():", m.groups())
print("m.groupdict():", m.groupdict())
print("m.start(2):", m.start(2))
print("m.end(2):", m.end(2))
print("m.span(2):", m.span(2))
print(r"m.expand(r'\2 \1\3'):", m.expand(r'\2 \1\3'))
Explanation: The re module also provides escape(string), which returns string with regular-expression metacharacters such as * / + / ? escaped; it is mildly useful when many metacharacters have to be matched literally.
Match
A Match object is the result of a single match. It carries a lot of information about that match, which can be read through the readable attributes and methods listed below.
Attributes
string: the text used for the match.
re: the Pattern object used for the match.
pos: the index in the text where the regular expression starts searching; the same value as the parameter of the same name of Pattern.match() and Pattern.search().
endpos: the index in the text where the regular expression stops searching; the same value as the parameter of the same name of Pattern.match() and Pattern.search().
lastindex: the index of the last captured group. None if no group was captured.
lastgroup: the alias of the last captured group. None if that group has no alias or no group was captured.
Methods
group([group1, …]): returns the string captured by one or more groups; with several arguments a tuple is returned. group1 may be a number or an alias; number 0 stands for the whole matched substring; with no argument, group(0) is returned; a group that captured nothing returns None; a group that captured several times returns its last capture.
groups([default]): returns the strings captured by all groups as a tuple, equivalent to calling group(1, 2, …, last). default is the value used for groups that captured nothing; it defaults to None.
groupdict([default]): returns a dict whose keys are the aliases of named groups and whose values are the substrings those groups captured; unnamed groups are not included. default has the same meaning as above.
start([group]): returns the start index in string of the substring captured by the given group (the index of its first character). group defaults to 0.
end([group]): returns the end index in string of the substring captured by the given group (the index of its last character + 1). group defaults to 0.
span([group]): returns (start(group), end(group)).
expand(template): substitutes the matched groups into template and returns the result. Groups can be referenced in template with \id or \g<id>, \g<name>, but not with number 0. \id and \g<id> are equivalent, but \10 is interpreted as group 10; to express group 1 followed by the character '0', use \g<1>0.
End of explanation
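escape() itself is never demonstrated in this notebook, so here is a minimal added sketch (the input string is made up for the example):
import re
print(re.escape('1+1=2?'))                                # the metacharacters + and ? come back escaped
print(re.search(re.escape('1+1'), 'is 1+1=2?').group())   # '1+1'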
p = re.compile(r'(\w+) (\w+)(?P<sign>.*)', re.DOTALL)
print("p.pattern:", p.pattern)
print("p.flags:", p.flags)
print("p.groups:", p.groups)
print("p.groupindex:", p.groupindex)
Explanation: Pattern
A Pattern object is a compiled regular expression; the series of methods it provides is used to search and match against text.
Pattern cannot be instantiated directly; it must be constructed with re.compile().
Pattern exposes several readable attributes with information about the expression:
pattern: the expression string used at compile time.
flags: the matching mode used at compile time, in numeric form.
groups: the number of groups in the expression.
groupindex: a dict whose keys are the aliases of the named groups in the expression and whose values are the corresponding group numbers; unnamed groups are not included.
End of explanation
pattern = re.compile(r'hello')
# use the Pattern to match the text; None is returned when nothing matches
match = pattern.match('hello world!')
if match:
    print(match.group())
Explanation: re module methods
match(string[, pos[, endpos]]) | re.match(pattern, string[, flags])
This method tries to match pattern starting at index pos of string. If the pattern can still match when it reaches its end, a Match object is returned; if the pattern fails to match at some point, or endpos is reached before matching finishes, None is returned.
pos and endpos default to 0 and len(string); re.match() cannot take these two parameters, and flags selects the matching mode used when pattern is compiled.
Note: this is not a full match. If string still has characters left over when pattern ends, the match is still considered successful. To require a full match, add the boundary anchor '$' at the end of the expression.
End of explanation
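A small added sketch of the 'not a full match' caveat above, using the '$' anchor on illustrative strings:
import re
print(re.match(r'hello', 'hello world!'))   # still matches although characters remain
print(re.match(r'hello$', 'hello world!'))  # None, because '$' requires the string to end here
print(re.match(r'hello$', 'hello'))         # matches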
p = re.compile(r'world')
# use search() to look for a matching substring; None is returned when no substring matches
# in this example match() cannot match successfully
match1 = p.search('hello world!')
match2 = p.match('hello world!')
if match1:
    print('pattern.search result: ', match1.group())
if match2:
    print('pattern.match result: ', match2.group())
Explanation: search(string[, pos[, endpos]]) | re.search(pattern, string[, flags])
This method looks for a substring of string that can be matched. Matching is attempted from index pos of string; if the pattern matches to its end, a Match object is returned. If it cannot match, pos is increased by 1 and matching is retried; once pos reaches endpos without a match, None is returned.
pos and endpos default to 0 and len(string); re.search() cannot take these two parameters, and flags selects the matching mode used when pattern is compiled.
End of explanation
p = re.compile(r'\d+')
print(p.split('one1two2three3four4'))
Explanation: split(string[, maxsplit]) | re.split(pattern, string[, maxsplit])
Splits string wherever the pattern matches and returns the pieces as a list. maxsplit limits the number of splits; if it is not given, all possible splits are made.
End of explanation
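The maxsplit argument is not shown above; a minimal added sketch with the same pattern and an illustrative input:
import re
p = re.compile(r'\d+')
print(p.split('one1two2three3four4', maxsplit=2))  # ['one', 'two', 'three3four4']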
p = re.compile(r'\d+')
print(p.findall('one1two2three3four4'))
Explanation: findall(string[, pos[, endpos]]) | re.findall(pattern, string[, flags])
Searches string and returns all matching substrings as a list.
End of explanation
p = re.compile(r'\d+')
for m in p.finditer('one1two2three3four4'):
    print(m.group())
Explanation: finditer(string[, pos[, endpos]]) | re.finditer(pattern, string[, flags])
Searches string and returns an iterator that yields each match result (a Match object) in order.
End of explanation
p = re.compile(r'(\w+) (\w+)')
s = 'i say, hello world!'
print(p.sub(r'\2 \1', s))
def func(m):
    return m.group(1).title() + ' ' + m.group(2).title()
print(p.sub(func, s))
Explanation: sub(repl, string[, count]) | re.sub(pattern, repl, string[, count])
Returns the string obtained by replacing every matching substring of string with repl.
When repl is a string, groups can be referenced with \id or \g<id>, \g<name>, but group number 0 cannot be used.
When repl is a function, it should take a single argument (a Match object) and return the replacement string (the returned string may not reference groups).
count limits the maximum number of replacements; if it is not given, every match is replaced.
End of explanation
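The count argument and the \g<name> style of group reference are described above but not demonstrated; a small added sketch (illustrative sentence, not from the original):
import re
p = re.compile(r'(?P<first>\w+) (?P<second>\w+)')
s = 'i say, hello world!'
print(p.sub(r'\g<second> \g<first>', s, count=1))  # only the first match is swapped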
p = re.compile(r'(\w+) (\w+)')
s = 'i say, hello world!'
print(p.subn(r'\2 \1', s))
def func(m):
    return m.group(1).title() + ' ' + m.group(2).title()
print(p.subn(func, s))
Explanation: subn(repl, string[, count]) | re.subn(pattern, repl, string[, count])
Returns the tuple (sub(repl, string[, count]), number of replacements made).
End of explanation
import re
import urllib.request
import urllib
from collections import deque
queue = deque()
visited = set()
url = 'http://news.dbanotes.net'  # entry page; feel free to use another one
queue.append(url)
cnt = 0
while queue:
    url = queue.popleft()  # pop the element at the head of the queue
    visited |= {url}  # mark as visited
    print('crawled: ' + str(cnt) + '   now crawling <---  ' + url)
    cnt += 1
    urlop = urllib.request.urlopen(url)
    if 'html' not in urlop.getheader('Content-Type'):
        continue
    # use try..except so the program does not abort on errors
    try:
        data = urlop.read().decode('utf-8')
    except:
        continue
    # extract every link in the page with a regular expression, check whether it has
    # already been visited, and add new ones to the crawl queue
    linkre = re.compile('href="(.+?)"')
    for x in linkre.findall(data):
        if 'http' in x and x not in visited:
            queue.append(x)
            print('added to queue --->  ' + x)
Explanation: 4. Python web crawler Ver 1.0 alpha
With the groundwork above in place, we can finally write a real crawler. The entry address I chose is Fenng's Startup News; having just raised US$70 million, Fenng surely will not mind our crawlers dropping by his little site. The crawler can just about run, but because it lacks exception handling it can only crawl static pages, cannot tell static content from dynamic content, and does not know which situations to skip, so it gives up after working for a little while.
End of explanation |
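As the text above notes, the crawler gives up easily because the network call is unprotected. One possible hardening is sketched below; it is an added illustration, not part of the original tutorial, and the helper name and the 2-second timeout are arbitrary choices.
import urllib.request
def fetch_html(url, timeout=2):
    # return the decoded html of url, or None if the request fails or is not html
    try:
        urlop = urllib.request.urlopen(url, timeout=timeout)
        if 'html' not in (urlop.getheader('Content-Type') or ''):
            return None
        return urlop.read().decode('utf-8')
    except Exception:
        return None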
15,353 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step 1
Step1: OK, the D3 area is set up
Now we'll focus on live updating. A manual test first.
Step2: Step 2 | Python Code:
from IPython.core.display import display, HTML
from string import Template
import pandas as pd
import json, random
HTML('<script src="lib/d3/d3.min.js"></script>')
html_template = Template('''
<svg id="graph-div"></div>
<script> $js_text </script>
''')
js_text_template = Template('''
var data = $data;
var barHeight = 30;
var barWidth = 30;
var width = 400;
var x = d3.scale.linear()
.domain([0, 4096])
.range(["black", "white"]);
var chart = d3.select('#graph-div')
.attr('width', width)
.attr('height', 400);
function update() {
var row = chart.selectAll('.item')
.data(data)
.enter()
.append('g')
.attr('class', 'item')
.attr('transform', function(d, i){
console.log(d);
console.log(i);
return 'translate(' + barWidth * (i%10) + ', ' + barHeight * Math.floor(i/10) + ')';
})
.append('rect')
.attr('width', barWidth)
.attr('height', barHeight)
.attr('fill', function(d) {
return x(d);
});
}
update();
''')
data = [];
js_text = js_text_template.substitute({'data': json.dumps(data)})
HTML(html_template.substitute({'js_text': js_text}))
Explanation: Step 1: Set up D3 area
End of explanation
js_text_template_2 = Template('''
data = $data;
update();
console.log("updating");
''')
data = [] #[0,500,1000,2000,4000]
def update_graph(data):
js_text = js_text_template_2.substitute({'data': json.dumps(data)})
display(HTML('<script>' + js_text + '</script>'))
update_graph(data)
Explanation: OK, the D3 area is set up
Now we'll focus on live updating. A manual test first.
End of explanation
import paho.mqtt.client as mqtt
data = []
tests = [[]]
update_graph(data)
def on_connect(client, userdata, flags, rc):
print("Connected with result code "+str(rc))
client.subscribe("/outTopic")
def on_message(client, userdata, msg):
global data
try:
msg_json = json.loads(msg.payload)
except:
print "Error"
print(msg.topic+" "+str(msg.payload))
return
# if msg_json['type'] == "BREAK" and msg_json['label'] == "LOOP": # and msg_json['value'] == "END":
# tests.append(data)
# data = []
# update_graph(data)
if msg_json['type'] == "BINARY" and msg_json['label'] == "A0":
data.append(int(msg_json['value']))
# tests[-1] = data
update_graph(data)
print int(msg_json['value']), data
client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.username_pw_set("zettlmtm", "VOUbRcmhjffA")
client.connect("m11.cloudmqtt.com", 19280, 60)
client.loop_start()
# Make sure to call client.loop_stop() later
client.loop_stop()
print data
Explanation: Step 2: Now use MQTT to update the graph
Now for the fun stuff. Using the update_graph(data) function set up above. Each group of data, separated by "start" messages, is saved in the tests array. As new data comes in, the live data graph is updated to show data from the last test. All data is neatly saved inside the tests array for replotting and further analysis later.
Using the CloudMQTT free Cat plan:
* https://api.cloudmqtt.com/sso/cloudmqtt/console
* haipjacob@gmail.com:paramour-sieve-paper
* If the Websocket UI has trouble connecting, restart the instance
End of explanation |
15,354 | Given the following text description, write Python code to implement the functionality described.
Description:
Program for finding the Integral of a given function using Boole 's Rule
Function to return the value of f ( x ) for the given value of x ; Function to computes the integrand of y at the given intervals of x with step size h and the initial limit a and final limit b ; Number of intervals ; Computing the step size ; Substituing a = 0 , b = 4 and h = 1 ; Driver code
| Python Code:
def y(x ) :
return(1 /(1 + x ) )
def BooleRule(a , b ) :
n = 4
h =(( b - a ) / n )
sum = 0
bl =(7 * y(a ) + 32 * y(a + h ) + 12 * y(a + 2 * h ) + 32 * y(a + 3 * h ) + 7 * y(a + 4 * h ) ) * 2 * h / 45
sum = sum + bl
return sum
if __name__== ' __main __' :
lowlimit = 0
upplimit = 4
print("f(x ) ▁ = ", round(BooleRule(0 , 4 ) , 4 ) )
|
15,355 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Workcamp Maschinelles Lernen</h1>
<h2>Grundlagen - Arbeiten mit Panda Dataframes</h2>
<h3>EInlesen von Dateien in Dataframes</h3>
Lassen Sie uns jetzt unsere Kenntnisse erweitern. Wir wollen Files in Dataframes einlesen.
Einlesen von csv-Dateien
Daten sind die sunset observationen von SILSO
Die Daten gehen bis in das 18. Jahrhundert zurück, mit über 70.000 Zeilen
http
Step1: Dataframes kann man als csv und excel file im aktuellen Directory speichern mit
Step2: Wir können jetzt einige Probleme erkennen. Die Spaltenüberschriften machten einfach keinen Sinn.<br>
Man mussten mit header=None arbeiten.Und hier gibt es überraschend in Spalte 5 EInträge der Art<br>
-1.0.
Die -1.0 Einträge bedeuten, das kein Wert vorhanden ist.
Die Datei enthält folgende Werte
Step3: Wenn wir wissen dass die -1 Werte nicht vorhandene Werte repräsentiert,
dann können wir das bereits beim Einlesen der Daten berücksichtigen.
Step4: <h2>Aus panda datframes direkt plotten</h2>
Step5: <h3>Einlesen historischer Aktienwerte von Apple. Quelle yahoo finance</h3>
Step6: <h3>Einlesen historischer Aktienwerte von Amazon. Quelle yahoo finance</h3>
Step7: <h3>Einlesen einer csv Datei
Step8: <h3>Einlesen einer csv-Datei
Step9: <h3>Einlesen hstorischer Aktienwerte S&P500 Quelle yahoo finance</h3> | Python Code:
import pandas as pd
dateipfad = 'SN_d_tot_V2.0.csv'
sunsets = pd.read_csv(dateipfad, sep=';', header=None)
sunsets.info()
sunsets.head(10)
Explanation: <h1>Workcamp Maschinelles Lernen</h1>
<h2>Grundlagen - Arbeiten mit Panda Dataframes</h2>
<h3>EInlesen von Dateien in Dataframes</h3>
Lassen Sie uns jetzt unsere Kenntnisse erweitern. Wir wollen Files in Dataframes einlesen.
Einlesen von csv-Dateien
Daten sind die sunset observationen von SILSO
Die Daten gehen bis in das 18. Jahrhundert zurück, mit über 70.000 Zeilen
http://www.sidc.be/silso/datafiles
Data Credits: The Sunspot Number data can be freely downloaded. However, we request that proper credit to the WDC-SILSO is explicitely included in any publication using our data (paper article or book, on-line Web content, etc.),
Dat-Name: SN_d_tot_V2.0.csv
<h2>Aufgabe: Lesen Sie die Daten mit read_csv() in die Variable sunsets als Dataframe ein<br>
und betrachten Sie den Dataframe mit .info()</h2>
<h3>Laden Sie die pandas Bibliothek (Welchen alias verwenden Sie ?)</h3>
<h3>Lösung:</h3>
End of explanation
sunsets.iloc[10:200, :]
Explanation: Dataframes kann man als csv und excel file im aktuellen Directory speichern mit:
out_csv = 'example.csv'
df.to_csv('out_csv', sep=(';'))
out_xlsx = 'example.xlsx')
df.to_excel('out_xlsx')
<h3>Geben Sie die Datenreihen 10:200 aus</h3>
End of explanation
spalten_namen=['Jahr','Monat','Dez_Tag', 'Sunspot', 'Stabw','Definitv','Observationen']
sun_spots=pd.read_csv(dateipfad, header=None, sep =';', names=spalten_namen)
sun_spots.head(10)
sun_spots.info()
Explanation: Wir können jetzt einige Probleme erkennen. Die Spaltenüberschriften machten einfach keinen Sinn.<br>
Man mussten mit header=None arbeiten.Und hier gibt es überraschend in Spalte 5 EInträge der Art<br>
-1.0.
Die -1.0 Einträge bedeuten, das kein Wert vorhanden ist.
Die Datei enthält folgende Werte:<br>
Column 1-3: Gregorian calendar date
- Year
- Month
- Day
Column 4: Date in fraction of year
Column 5: Daily total sunspot number. A value of -1 indicates that no number is available for that day (missing value).
Column 6: Daily standard deviation of the input sunspot numbers from individual stations.
Column 7: Number of observations used to compute the daily value.
Column 8: Definitive/provisional indicator. A blank indicates that the value is definitive. A '*' symbol indicates that the value is still provisional and is subject to a possible revision (Usually the last 3 to 6 months)
Wir wollen in der datei die Dpaltennamen ändern
End of explanation
sunny_spots=pd.read_csv(dateipfad, header=None, sep =';', names=spalten_namen, na_values=' -1')
sunny_spots.head(10)
sunny_spots.info()
sunny_spots.head(10)
Explanation: Wenn wir wissen dass die -1 Werte nicht vorhandene Werte repräsentiert,
dann können wir das bereits beim Einlesen der Daten berücksichtigen.
End of explanation
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: <h2>Aus panda datframes direkt plotten</h2>
End of explanation
aapl = pd.read_csv('AAPL.csv')
aapl.head(5)
aapl = pd.read_csv('AAPL.csv',sep=',', index_col='Date')
aapl.head(5)
aapl = pd.read_csv('AAPL.csv', sep=',', index_col='Date', parse_dates=True)
aapl.head(5)
close_arr=aapl['Close'].values
print(close_arr)
aapl.info()
type(close_arr)
plt.plot(close_arr)
plt.show()
Explanation: <h3>Einlesen historischer Aktienwerte von Apple. Quelle yahoo finance</h3>
End of explanation
amzn = pd.read_csv('AMZN.csv', sep=',', index_col='Date', parse_dates=True)
close_arr=amzn['Close'].values
plt.plot(close_arr)
plt.show()
close_series = amzn['Close']
plt.plot(close_series)
plt.show()
#Serien direkt plotten
close_series.plot()
plt.show()
amzn.plot()
plt.show()
plt.plot(amzn)
plt.show()
amzn.plot()
#Logarithmische Skala auf der y-Achse
plt.yscale('log')
plt.show
amzn['Open'].plot(color='b',legend=True)
amzn['Close'].plot(color='r',legend=True)
#plt.axis('2014','2018',0,100)
plt.show()
amzn.info()
amzn.loc['2016':'2018',['Open']].plot()
plt.show()
amzn.loc['2015':'2018',['Close']].plot()
plt.savefig('amzn.png')
plt.savefig('amzn.jpg')
plt.savefig('amzn.pdf')
plt.show()
Explanation: <h3>Einlesen historischer Aktienwerte von Amazon. Quelle yahoo finance</h3>
End of explanation
temp=pd.read_csv('temperaturen-1300.csv',sep=(';'),decimal=(','))
temp.head()
print(temp)
plt.plot(temp)
plt.show()
# Create a plot with color='red'
temp.plot(color='r')
# Add a title
plt.title('Temperatur')
# Specify the x-axis label
plt.xlabel('Stunden seit Mitternacht 1. August 2015')
# Specify the y-axis label
plt.ylabel('Temperatur (Grad C)')
# Display the plot
plt.show()
Explanation: <h3>Einlesen einer csv Datei: temperaturen-1300.csv sep=; decimal = ,</h3>
End of explanation
wett=pd.read_csv('wetterdaten-1300.csv',header=0, sep=(';'),decimal=(','))
wett.head()
wett.plot()
plt.show()
wett.plot(subplots=True)
plt.show()
# Plot all columns (default)
wett.plot()
plt.show()
# Plot all columns as subplots
wett.plot(subplots=True)
plt.show()
#
col_list1=['Temperatur']
wett[col_list1].plot()
plt.show()
col_list2=['Luftdruck ']
wett[col_list2].plot()
plt.show()
wett.info()
wett.columns
Explanation: <h3>Einlesen einer csv-Datei: wetterdaten-1300.csv header=0 sep=; decimal = ,</h3>
End of explanation
gspc = pd.read_csv('GSPC-2000-2018.csv', sep=',', index_col='Date', parse_dates=True)
close_series = gspc['Close']
#Serien direkt plotten
close_series.plot(title='S&P 500')
plt.ylabel('Closing in US$')
plt.show()
gspc.info()
gspc.columns
gspc['Close'].mean()
gspc['Close'].std()
gspc.head()
gspc.tail()
gspc.loc['2015-01-01':'2016-01-01', 'Close'].plot(title='S&P 500')
plt.ylabel('Closing in US$')
plt.show()
gspc.loc['2015-01-01':'2015-02-01', 'Close'].plot(title='S&P 500', style='k.-')
plt.ylabel('Closing in US$')
plt.show()
gspc.loc['2015-01-01':'2015-02-01', 'Close'].plot(title='S&P 500', style='r.-')
plt.ylabel('Closing in US$')
plt.show()
gspc.loc['2015-01-01':'2015-02-01', 'Close'].plot(title='S&P 500', kind='box')
plt.ylabel('Closing in US$')
plt.show()
gspc.loc['2015-01-01':'2016-02-01', 'Close'].plot(title='S&P 500', kind='hist', bins=30)
plt.ylabel('Closing in US$')
plt.show()
gspc.loc['2015-01-01':'2016-02-01', 'Close'].plot(title='S&P 500', kind='box')
plt.ylabel('Closing in US$')
plt.savefig('gspc2015.png')
plt.show()
Explanation: <h3>Einlesen hstorischer Aktienwerte S&P500 Quelle yahoo finance</h3>
End of explanation |
15,356 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Trying out the copulalib python package.</h1>
<p><h3>Frank, Clayton and Gumbel copulas from 2d data.</h3></p>
<h3>Pre-setup</h3>
<p>The package is in pip, so you can conveniently just "pip copulalib". It also comes in Anaconda.</p>
<p>You may need to install a package called "statistics".</p>
<p>In order to make it work, a small fix is needed in copulalib.py, or you will get the following problem when attempting to build your copula
Step1: <p>Note
Step2: <h3>Independent (unrelated) data</h3>
<p>We will draw samples from a beta distribution for (x), and samples from a lognormal for (y). The samples are pseudo-independent (we know there is no real independence if you use a computer to draw the samples, but well, reasonably independent).</p>
Step3: <h3>Dependent (related) data</h3>
<p>The independent variable will be a lognormal (y) and variable (x) depends on (y) with the following relation | Python Code:
#The first assert makes sure that you are getting a 1D in X and Y
#replace this code around line 58 in site-packages/copulalib/copulalib.py
try:
if X.shape[0] != Y.shape[0]:
raise ValueError('The size of both arrays should be same.')
except:
raise TypeError('X and Y should have a callable shape method. '
'Try converting them to numpy arrays')
Explanation: <h1>Trying out the copulalib python package.</h1>
<p><h3>Frank, Clayton and Gumbel copulas from 2d data.</h3></p>
<h3>Pre-setup</h3>
<p>The package is in pip, so you can conveniently just "pip copulalib". It also comes in Anaconda.</p>
<p>You may need to install a package called "statistics".</p>
<p>In order to make it work, a small fix is needed in copulalib.py, or you will get the following problem when attempting to build your copula:</p>
<img src="https://raw.githubusercontent.com/lia-statsletters/notebooks/master/img/fix_this_in_copulalib.png">
<p>The fix is simple: replace "if X.size is not Y.size" with something like:</p>
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from copulalib.copulalib import Copula as copula
from scipy.stats import beta, lognorm, gaussian_kde
plt.style.use('ggplot')
def ppf(var,q):
#the equivalent of ppf but built directly from data
sortedvar=np.sort(var)
rcdf=np.linspace(0,1, num=len(var))
returnable=np.array([sortedvar[np.where(rcdf<=qx)[0][-1]] for qx in q])
#print returnable.shape
return returnable
def generateAndPlot(title,x,y,howmany):
#generate and plot
fig, axs = plt.subplots(3,2,figsize=(12., 18.))
fig.suptitle(title)
for index,family in enumerate(['frank','clayton','gumbel']):
copula_f = copula(x,y,family=family)
try:
#get pseudo observations
u,v = copula_f.generate_uv(howmany)
except Exception as exx:
print "Could not extract pseudo observations for {}\
because {}".format(family, exx.message)
continue
#plot pseudo observations
axs[index][0].scatter(u,v,marker='o',alpha=0.7)
axs[index][0].set_title('pseudo observations {}'.format(family),
fontsize=12)
#plot inverse ppf
#ux = beta.ppf(u,a,b,loc=loc)
#vy= lognorm.ppf(v,sc,loc=loc)
ux=ppf(x,u)
vy=ppf(y,v)
axs[index][1].scatter(x,y,marker='*',color='r', alpha=0.5) #original samples
axs[index][1].scatter(ux,vy,marker='s', alpha=0.3) #simulated realisations
axs[index][1].set_title('simulated (blue) vs original (red) {}'.format(family),
fontsize=12)
plt.show()
#total samples vs pseudo observations
pseudoobs=100
sz=300
loc=0.0 #needed for most distributions
#lognormal param
sc=0.5
y=lognorm.rvs(sc,loc=loc, size=sz)
Explanation: <p>Note: at that point I was lazily pasting the structure for some plotting ;).</p>
<h3>Testing the package</h3>
<p>The first samples (x) are generated from a beta distribution and (y) from a lognormal. Beta distributions have finite bounds for support, while the right-side support for lognormals is in the infinity. An interesting property ofcopulas: BOTH marginals are transformed to the unit range. Isn't that cool?</p>
<p>To try the package, we fitted copulas of three archimedean families (Frank, Clayton, Gumbel) to samples x and y. We then extracted some samples from the fitted copulas, and plotted the sampled outputs together with the original samples, in order to look at how they compare with each other. </p>
<p>feel free to try it out on you own, and do comment if you feel like!</p>
End of explanation
#unrelated data: a beta (x) and a lognormal (y)
a= 0.45#2. #alpha
b=0.25#5. #beta
x=beta.rvs(a,b,loc=loc,size=sz)
#plot unrelated x and y
t = np.linspace(0, lognorm.ppf(0.99, sc), sz)
plt.plot(t, beta.pdf(t,a,b), lw=5, alpha=0.6, label='x:beta')
plt.plot(t, lognorm.pdf(t, sc), lw=5, alpha=0.6, label='y:lognormal')
plt.legend()
#plot copulas built from unrelated x and y
title='Copulas from unrelated data x: beta, alpha {} beta {}, y: lognormal, mu {}, sigma {}'.format(a,b,loc,sc)
generateAndPlot(title,x,y,pseudoobs)
Explanation: <h3>Independent (unrelated) data</h3>
<p>We will draw samples from a beta distribution for (x), and samples from a lognormal for (y). The samples are pseudo-independent (we know there is no real independence if you use a computer to draw the samples, but well, reasonably independent).</p>
End of explanation
#related data: a lognormal (y)
#plot related data
title='Copulas from dependent data xx depending on y, y: lognormal, mu {}, sigma {}'.format(loc,sc)
xx=[]
jumps=[0.001,0.01,0.1]
for indx,yi in enumerate(y):
if indx>1:
if yi>y[indx-1]:
xx.append(xx[-1]+np.random.choice(jumps))
else:
xx.append(xx[-1]-np.random.choice(jumps))
else:
xx.append(1)
xx=np.array(xx)
t = np.linspace(0, lognorm.ppf(0.99, sc), sz)
#plt.plot(t, gkxx.pdf(t), lw=5, alpha=0.6, label='x:f(y)')
plt.hist(xx,normed=True,label='x:f(y)',alpha=0.6,bins=100)
plt.plot(t, lognorm.pdf(t, sc), lw=5, alpha=0.6, label='y:lognormal')
plt.legend()
generateAndPlot(title,xx,y,pseudoobs)
plt.plot(c, lognorm.pdf(t, sc), lw=5, alpha=0.6, label='y:lognormal')
Explanation: <h3>Dependent (related) data</h3>
<p>The independent variable will be a lognormal (y) and variable (x) depends on (y) with the following relation: The initial value is 1 (independently). Then, for each point i, if $y_i > y_{i-1}$, then $x_i=x_{i-1}+c$, in which c is chosen uniformly from a list of fractions of 1. Otherwise, $x_i=x_{i-1}-c$.</p>
End of explanation |
15,357 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Sentences
Let's take some sentences from Wikipedia to run through model
Step3: Run the model
We'll load the BERT model from TF-Hub, tokenize our sentences using the matching preprocessing model from TF-Hub, then feed in the tokenized sentences to the model. To keep this colab fast and simple, we recommend running on GPU.
Go to Runtime → Change runtime type to make sure that GPU is selected
Step5: Semantic similarity
Now let's take a look at the pooled_output embeddings of our sentences and compare how similar they are across sentences. | Python Code:
#@title Copyright 2020 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2020 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip install --quiet "tensorflow-text==2.8.*"
import seaborn as sns
from sklearn.metrics import pairwise
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text # Imports TF ops for preprocessing.
#@title Configure the model { run: "auto" }
BERT_MODEL = "https://tfhub.dev/google/experts/bert/wiki_books/2" # @param {type: "string"} ["https://tfhub.dev/google/experts/bert/wiki_books/2", "https://tfhub.dev/google/experts/bert/wiki_books/mnli/2", "https://tfhub.dev/google/experts/bert/wiki_books/qnli/2", "https://tfhub.dev/google/experts/bert/wiki_books/qqp/2", "https://tfhub.dev/google/experts/bert/wiki_books/squad2/2", "https://tfhub.dev/google/experts/bert/wiki_books/sst2/2", "https://tfhub.dev/google/experts/bert/pubmed/2", "https://tfhub.dev/google/experts/bert/pubmed/squad2/2"]
# Preprocessing must match the model, but all the above use the same.
PREPROCESS_MODEL = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3"
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/hub/tutorials/bert_experts"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/bert_experts.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/bert_experts.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/bert_experts.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/s?q=experts%2Fbert"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub models</a>
</td>
</table>
BERT Experts from TF-Hub
This colab demonstrates how to:
* Load BERT models from TensorFlow Hub that have been trained on different tasks including MNLI, SQuAD, and PubMed
* Use a matching preprocessing model to tokenize raw text and convert it to ids
* Generate the pooled and sequence output from the token input ids using the loaded model
* Look at the semantic similarity of the pooled outputs of different sentences
Note: This colab should be run with a GPU runtime
Set up and imports
End of explanation
sentences = [
"Here We Go Then, You And I is a 1999 album by Norwegian pop artist Morten Abel. It was Abel's second CD as a solo artist.",
"The album went straight to number one on the Norwegian album chart, and sold to double platinum.",
"Among the singles released from the album were the songs \"Be My Lover\" and \"Hard To Stay Awake\".",
"Riccardo Zegna is an Italian jazz musician.",
"Rajko Maksimović is a composer, writer, and music pedagogue.",
"One of the most significant Serbian composers of our time, Maksimović has been and remains active in creating works for different ensembles.",
"Ceylon spinach is a common name for several plants and may refer to: Basella alba Talinum fruticosum",
"A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth.",
"A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth.",
]
Explanation: Sentences
Let's take some sentences from Wikipedia to run through model
End of explanation
preprocess = hub.load(PREPROCESS_MODEL)
bert = hub.load(BERT_MODEL)
inputs = preprocess(sentences)
outputs = bert(inputs)
print("Sentences:")
print(sentences)
print("\nBERT inputs:")
print(inputs)
print("\nPooled embeddings:")
print(outputs["pooled_output"])
print("\nPer token embeddings:")
print(outputs["sequence_output"])
Explanation: Run the model
We'll load the BERT model from TF-Hub, tokenize our sentences using the matching preprocessing model from TF-Hub, then feed in the tokenized sentences to the model. To keep this colab fast and simple, we recommend running on GPU.
Go to Runtime → Change runtime type to make sure that GPU is selected
End of explanation
#@title Helper functions
def plot_similarity(features, labels):
Plot a similarity matrix of the embeddings.
cos_sim = pairwise.cosine_similarity(features)
sns.set(font_scale=1.2)
cbar_kws=dict(use_gridspec=False, location="left")
g = sns.heatmap(
cos_sim, xticklabels=labels, yticklabels=labels,
vmin=0, vmax=1, cmap="Blues", cbar_kws=cbar_kws)
g.tick_params(labelright=True, labelleft=False)
g.set_yticklabels(labels, rotation=0)
g.set_title("Semantic Textual Similarity")
plot_similarity(outputs["pooled_output"], sentences)
Explanation: Semantic similarity
Now let's take a look at the pooled_output embeddings of our sentences and compare how similar they are across sentences.
End of explanation |
15,358 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: We want to create a network that has only one LSTM cell. The LSTM cell has 2 hidden nodes, so we need 2 state vector as well. Here, state is a tuple with 2 elements, each one is of size [1 x 2], one for passing information to next time step, and another for passing the state to next layer/output.
Step2: As we can see, the states are initalized with zeros
Step3: Lets look the output and state of the network
Step4: Stacked LSTM basecs
What about if we want to have a RNN?
The input should be a Tensor of shape | Python Code:
import numpy as np
import tensorflow as tf
tf.reset_default_graph()
sess = tf.InteractiveSession()
Explanation: <a href="https://www.bigdatauniversity.com"><img src = "https://ibm.box.com/shared/static/jvcqp2iy2jlx2b32rmzdt0tx8lvxgzkp.png" width = 300, align = "center"></a>
<h1 align=center><font size = 5>RECURRENT NETWORKS IN DEEP LEARNING</font></h1>
The Long Short-Term Memory Model
Hello and welcome to this notebook. In this notebook, we will go over concepts of the Long Short-Term Memory (LSTM) model, a refinement of the original Recurrent Neural Network model. By the end of this notebook, you should be able to understand the Long Short-Term Memory model, the benefits and problems it solves, and its inner workings and calculations.
The Problem to be Solved
Long Short-Term Memory, or LSTM for short, is one of the proposed solutions or upgrades to the Recurrent Neural Network model. The Recurrent Neural Network is a specialized type of Neural Network that solves the issue of maintaining context for Sequential data -- such as Weather data, Stocks, Genes, etc. At each iterative step, the processing unit takes in an input and the current state of the network, and produces an output and a new state that is re-fed into the network.
<img src=https://ibm.box.com/shared/static/v7p90neiaqghmpwawpiecmz9n7080m59.png width="720"/>
<center>Representation of a Recurrent Neural Network</center>
However, this model has some problems. It's very computationally expensive to maintain the state for a large amount of units, even more so over a long amount of time. Additionally, Recurrent Networks are very sensitive to changes in their parameters. As such, they are prone to different problems with their Gradient Descent optimizer -- they either grow exponentially (Exploding Gradient) or drop down to near zero and stabilize (Vanishing Gradient), both problems that greatly harm a model's learning capability.
Long Short-Term Memory: What is it?
To solve these problems, Hochreiter and Schmidhuber published a paper in 1997 describing a way to keep information over long periods of time and additionally solve the oversensitivity to parameter changes, i.e., make backpropagating through the Recurrent Networks more viable.
The Long Short-Term Memory, as it was called, was an abstraction of how computer memory works. It is "bundled" with whatever processing unit is implemented in the Recurrent Network, although outside of its flow, and is responsible for keeping, reading, and outputting information for the model. The way it works is simple: you have a linear unit, which is the information cell itself, surrounded by three logistic gates responsible for maintaining the data. One gate is for inputting data into the information cell, one is for outputting data from the input cell, and the last one is to keep or forget data depending on the needs of the network.
Thanks to that, it not only solves the problem of keeping states, because the network can choose to forget data whenever information is not needed, it also solves the gradient problems, since the Logistic Gates have a very nice derivative.
Long Short-Term Memory Architecture
As seen before, the Long Short-Term Memory is composed of a linear unit surrounded by three logistic gates. The name for these gates vary from place to place, but the most usual names for them are the "Input" or "Write" Gate, which handles the writing of data into the information cell, the "Output" or "Read" Gate, which handles the sending of data back onto the Recurrent Network, and the "Keep" or "Forget" Gate, which handles the maintaining and modification of the data stored in the information cell.
<img src=https://ibm.box.com/shared/static/zx10duv5egw0baw6gh2hzsgr8ex45gsg.png width="720"/>
<center>Diagram of the Long Short-Term Memory Unit</center>
The three gates are the centerpiece of the LSTM unit. The gates, when activated by the network, perform their respective functions. For example, the Input Gate will write whatever data it is passed onto the information cell, the Output Gate will return whatever data is in the information cell, and the Keep Gate will maintain the data in the information cell. These gates are analog and multiplicative, and as such, can modify the data based on the signal they are sent.
For example, an usual flow of operations for the LSTM unit is as such: First off, the Keep Gate has to decide whether to keep or forget the data currently stored in memory. It receives both the input and the state of the Recurrent Network, and passes it through its Sigmoid activation. A value of 1 means that the LSTM unit should keep the data stored perfectly and a value of 0 means that it should forget it entirely. Consider $S_{t-1}$ as the incoming (previous) state, $x_t$ as the incoming input, and $W_k$, $B_k$ as the weight and bias for the Keep Gate. Additionally, consider $Old_{t-1}$ as the data previously in memory. What happens can be summarized by this equation:
<br/>
<font size = 4><strong>
$$K_t = \sigma(W_k \times [S_{t-1},x_t] + B_k)$$
$$Old_t = K_t \times Old_{t-1}$$
</strong></font>
<br/>
As you can see, $Old_{t-1}$ was multiplied by value was returned by the Keep Gate -- this value is written in the memory cell. Then, the input and state are passed on to the Input Gate, in which there is another Sigmoid activation applied. Concurrently, the input is processed as normal by whatever processing unit is implemented in the network, and then multiplied by the Sigmoid activation's result, much like the Keep Gate. Consider $W_i$ and $B_i$ as the weight and bias for the Input Gate, and $C_t$ the result of the processing of the inputs by the Recurrent Network.
<br/>
<font size = 4><strong>
$$I_t = \sigma(W_i\times[S_{t-1},x_t]+B_i)$$
$$New_t = I_t \times C_t$$
</strong></font>
<br/>
$New_t$ is the new data to be input into the memory cell. This is then added to whatever value is still stored in memory.
<br/>
<font size = 4><strong>
$$Cell_t = Old_t + New_t$$
</strong></font>
<br/>
We now have the candidate data which is to be kept in the memory cell. The conjunction of the Keep and Input gates work in an analog manner, making it so that it is possible to keep part of the old data and add only part of the new data. Consider however, what would happen if the Forget Gate was set to 0 and the Input Gate was set to 1:
<br/>
<font size = 4><strong>
$$Old_t = 0 \times Old_{t-1}$$
$$New_t = 1 \times C_t$$
$$Cell_t = C_t$$
</strong></font>
<br/>
The old data would be totally forgotten and the new data would overwrite it completely.
The Output Gate functions in a similar manner. To decide what we should output, we take the input data and state and pass it through a Sigmoid function as usual. The contents of our memory cell, however, are pushed onto a Tanh function to bind them between a value of -1 to 1. Consider $W_o$ and $B_o$ as the weight and bias for the Output Gate.
<br/>
<font size = 4><strong>
$$O_t = \sigma(W_o \times [S_{t-1},x_t] + B_o)$$
$$Output_t = O_t \times tanh(Cell_t)$$
</strong></font>
<br/>
And that $Output_t$ is what is output into the Recurrent Network.
<br/>
<img width="384" src="https://ibm.box.com/shared/static/rkr60528r3mz2fmtlpah8lqpg7mcsy0g.png">
<center>The Logistic Function plotted</Center>
As mentioned many times, all three gates are logistic. The reason for this is because it is very easy to backpropagate through them, and as such, it is possible for the model to learn exactly how it is supposed to use this structure. This is one of the reasons for which LSTM is a very strong structure. Additionally, this solves the gradient problems by being able to manipulate values through the gates themselves -- by passing the inputs and outputs through the gates, we have now a easily derivable function modifying our inputs.
In regards to the problem of storing many states over a long period of time, LSTM handles this perfectly by only keeping whatever information is necessary and forgetting it whenever it is not needed anymore. Therefore, LSTMs are a very elegant solution to both problems.
LSTM basics
Lets first create a tiny LSTM network sample to underestand the architecture of LSTM networks.
We need to import the necessary modules for our code. We need numpy and tensorflow, obviously. Additionally, we can import directly the tensorflow.models.rnn.rnn model, which includes the function for building RNNs, and tensorflow.models.rnn.ptb.reader which is the helper module for getting the input data from the dataset we just downloaded.
If you want to learm more take a look at https://www.tensorflow.org/versions/r0.11/api_docs/python/rnn_cell/
End of explanation
LSTM_CELL_SIZE = 3
with tf.variable_scope('basic_lstm_cell'):
try:
lstm_cell = tf.contrib.rnn.LSTMCell(LSTM_CELL_SIZE, state_is_tuple=True, reuse=False)
except:
print('LSTM already exists in the current scope. Reset the TF graph to re-create it.')
sample_input = tf.constant([[1,2,3,4,3,2],[3,2,2,2,2,2]],dtype=tf.float32)
state = (tf.zeros([2,LSTM_CELL_SIZE]),)*2
output, state_new = lstm_cell(sample_input, state)
sess.run(tf.global_variables_initializer())
print (sess.run(sample_input))
Explanation: We want to create a network that has only one LSTM cell. The LSTM cell has 2 hidden nodes, so we need 2 state vector as well. Here, state is a tuple with 2 elements, each one is of size [1 x 2], one for passing information to next time step, and another for passing the state to next layer/output.
End of explanation
cell_state, h_state = sess.run(state)
print('Cell state shape: ', cell_state.shape)
print('Hidden state shape: ', h_state.shape)
print(cell_state)
print(h_state)
Explanation: As we can see, the states are initalized with zeros:
End of explanation
print (sess.run(output))
c_new, h_new = sess.run(state_new)
print(c_new)
print(h_new)
Explanation: Lets look the output and state of the network:
End of explanation
sample_LSTM_CELL_SIZE = 3 #3 hidden nodes (it is equal to time steps)
sample_batch_size = 2
sample_input = tf.constant([
[[1,2,3,4,3,2],
[1,2,1,1,1,2],
[1,2,2,2,2,2]],
[[1,2,3,4,3,2],
[3,2,2,1,1,2],
[0,0,0,0,3,2]]
],dtype=tf.float32)
num_layers = 3
with tf.variable_scope("stacked_lstm"):
try:
multi_lstm_cell = tf.contrib.rnn.MultiRNNCell(
[tf.contrib.rnn.BasicLSTMCell(sample_LSTM_CELL_SIZE, state_is_tuple=True) for _ in range(num_layers)]
)
_initial_state = multi_lstm_cell.zero_state(sample_batch_size, tf.float32)
outputs, new_state = tf.nn.dynamic_rnn(multi_lstm_cell,
sample_input,
dtype=tf.float32,
initial_state=_initial_state)
except ValueError:
print('Stacked LSTM already exists in the current scope. Reset the TF graph to re-create it.')
sess.run(tf.global_variables_initializer())
print (sess.run(sample_input))
sess.run(_initial_state)
print (sess.run(new_state))
print (sess.run(output))
Explanation: Stacked LSTM basecs
What about if we want to have a RNN?
The input should be a Tensor of shape: [batch_size, max_time, ...], in our case it would be (2, 3, 6)
End of explanation |
15,359 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create some simulated geo data
Geo-data comes in a wide variety of forms, in this case we have a Python dictionary of five latitude and longitude strings, with each coordinate in a coordinate pair separated by a comma.
Step2: While technically unnecessary, because I originally come from R, I am a big fan of dataframes, so let us turn the dictionary of simulated data into a dataframe.
Step3: You can see now that we have a a dataframe with five rows, with each now containing a string of latitude and longitude. Before we can work with the data, we'll need to 1) seperate the strings into latitude and longitude and 2) convert them into floats. The function below does just that.
Step4: Let's take a took a what we have now.
Step5: Awesome. This is exactly what we want to see, one column of floats for latitude and one column of floats for longitude.
Reverse Geocoding
To reverse geocode, we feed a specific latitude and longitude pair, in this case the first row (indexed as '0') into pygeocoder's reverse_geocoder function.
Step6: Now we can take can start pulling out the data that we want.
Step7: Geocoding
For geocoding, we need to submit a string containing an address or location (such as a city) into the geocode function. However, not all strings are formatted in a way that Google's geo-API can make sense of them. We can text if an input is valid by using the .geocode().valid_address function.
Step8: Because the output was True, we now know that this is a valid address and thus can print the latitude and longitude coordinates.
Step9: But even more interesting, once the address is processed by the Google geo API, we can parse it and easily separate street numbers, street names, etc. | Python Code:
# Load packages
from pygeocoder import Geocoder
import pandas as pd
import numpy as np
Explanation: Title: Geocoding And Reverse Geocoding
Slug: geocoding_and_reverse_geocoding
Summary: Geocoding And Reverse Geocoding
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
Geocoding (converting a physical address or location into latitude/longitude) and reverse geocoding (converting a lat/long to a physical address or location) are common tasks when working with geo-data.
Python offers a number of packages to make the task incredibly easy. In the tutorial below, I use pygeocoder, a wrapper for Google's geo-API, to both geocode and reverse geocode.
Preliminaries
First we want to load the packages we will want to use in the script. Specifically, I am loading pygeocoder for its geo-functionality, pandas for its dataframe structures, and numpy for its missing value (np.nan) functionality.
End of explanation
# Create a dictionary of raw data
data = {'Site 1': '31.336968, -109.560959',
'Site 2': '31.347745, -108.229963',
'Site 3': '32.277621, -107.734724',
'Site 4': '31.655494, -106.420484',
'Site 5': '30.295053, -104.014528'}
Explanation: Create some simulated geo data
Geo-data comes in a wide variety of forms, in this case we have a Python dictionary of five latitude and longitude strings, with each coordinate in a coordinate pair separated by a comma.
End of explanation
# Convert the dictionary into a pandas dataframe
df = pd.DataFrame.from_dict(data, orient='index')
# View the dataframe
df
Explanation: While technically unnecessary, because I originally come from R, I am a big fan of dataframes, so let us turn the dictionary of simulated data into a dataframe.
End of explanation
# Create two lists for the loop results to be placed
lat = []
lon = []
# For each row in a varible,
for row in df[0]:
# Try to,
try:
# Split the row by comma, convert to float, and append
# everything before the comma to lat
lat.append(float(row.split(',')[0]))
# Split the row by comma, convert to float, and append
# everything after the comma to lon
lon.append(float(row.split(',')[1]))
# But if you get an error
except:
# append a missing value to lat
lat.append(np.NaN)
# append a missing value to lon
lon.append(np.NaN)
# Create two new columns from lat and lon
df['latitude'] = lat
df['longitude'] = lon
Explanation: You can see now that we have a a dataframe with five rows, with each now containing a string of latitude and longitude. Before we can work with the data, we'll need to 1) seperate the strings into latitude and longitude and 2) convert them into floats. The function below does just that.
End of explanation
# View the dataframe
df
Explanation: Let's take a took a what we have now.
End of explanation
# Convert longitude and latitude to a location
results = Geocoder.reverse_geocode(df['latitude'][0], df['longitude'][0])
Explanation: Awesome. This is exactly what we want to see, one column of floats for latitude and one column of floats for longitude.
Reverse Geocoding
To reverse geocode, we feed a specific latitude and longitude pair, in this case the first row (indexed as '0') into pygeocoder's reverse_geocoder function.
End of explanation
# Print the lat/long
results.coordinates
# Print the city
results.city
# Print the country
results.country
# Print the street address (if applicable)
results.street_address
# Print the admin1 level
results.administrative_area_level_1
Explanation: Now we can take can start pulling out the data that we want.
End of explanation
# Verify that an address is valid (i.e. in Google's system)
Geocoder.geocode("4207 N Washington Ave, Douglas, AZ 85607").valid_address
Explanation: Geocoding
For geocoding, we need to submit a string containing an address or location (such as a city) into the geocode function. However, not all strings are formatted in a way that Google's geo-API can make sense of them. We can text if an input is valid by using the .geocode().valid_address function.
End of explanation
# Print the lat/long
results.coordinates
Explanation: Because the output was True, we now know that this is a valid address and thus can print the latitude and longitude coordinates.
End of explanation
# Find the lat/long of a certain address
result = Geocoder.geocode("7250 South Tucson Boulevard, Tucson, AZ 85756")
# Print the street number
result.street_number
# Print the street name
result.route
Explanation: But even more interesting, once the address is processed by the Google geo API, we can parse it and easily separate street numbers, street names, etc.
End of explanation |
15,360 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
admissionDrug
The following columns are available
Step2: Examine a single patient
Step4: Here we can see that these drugs were documented 2153 minutes (1.5 days) after ICU admission, but administered 87132 minutes (60 days) before ICU admission (thus, the negative offset). Since it's reasonable to assume the patient is still taking the drug (as this is the admissiondrug table), drugoffset can likely be treated as a start time for a prescription of the drug.
Identifying patients admitted on a single drug
Let's look for patients who were admitted on Zaroxolyn.
Step6: Instead of using the drug name, we could try to use the HICL code.
Step7: As we can see, using the HICL returned many more observations. Let's take a look at a few
Step9: All the rows use the drug name "Metolazone". Metolazone is the generic name for the brand Zaroxolyn. This demonstrates the utility of using HICL codes to identify drugs - synonyms like these are very common and can be tedious to find.
Hospitals with data available | Python Code:
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import psycopg2
import getpass
import pdvega
# for configuring connection
from configobj import ConfigObj
import os
%matplotlib inline
# Create a database connection using settings from config file
config='../db/config.ini'
# connection info
conn_info = dict()
if os.path.isfile(config):
config = ConfigObj(config)
conn_info["sqluser"] = config['username']
conn_info["sqlpass"] = config['password']
conn_info["sqlhost"] = config['host']
conn_info["sqlport"] = config['port']
conn_info["dbname"] = config['dbname']
conn_info["schema_name"] = config['schema_name']
else:
conn_info["sqluser"] = 'postgres'
conn_info["sqlpass"] = ''
conn_info["sqlhost"] = 'localhost'
conn_info["sqlport"] = 5432
conn_info["dbname"] = 'eicu'
conn_info["schema_name"] = 'public,eicu_crd'
# Connect to the eICU database
print('Database: {}'.format(conn_info['dbname']))
print('Username: {}'.format(conn_info["sqluser"]))
if conn_info["sqlpass"] == '':
# try connecting without password, i.e. peer or OS authentication
try:
if (conn_info["sqlhost"] == 'localhost') & (conn_info["sqlport"]=='5432'):
con = psycopg2.connect(dbname=conn_info["dbname"],
user=conn_info["sqluser"])
else:
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"])
except:
conn_info["sqlpass"] = getpass.getpass('Password: ')
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"],
password=conn_info["sqlpass"])
query_schema = 'set search_path to ' + conn_info['schema_name'] + ';'
Explanation: admissionDrug
The following columns are available:
admissiondrugid - primary key, has no meaning but identifies rows uniquely
drugOffset - number of minutes from unit admit time that the admission drug was administered
drugEnteredOffset - number of minutes from unit admit time that the admission drug was entered
drugNoteType - unique note picklist types e.g.: Comprehensive Progress Admission Intubation
specialtyType - physician specialty picklist types e.g.: anesthesiology gastroenterology oncology
userType - who documented the drug from eCareManager user picklist types e.g.: eICU Physician, Nurse, Attending Physician
rxincluded - Does the Note have associated Rx data: True or False
writtenIneICU - Was the Note written in the eICU: True or False
drugName - name of the selected admission drug e.g.: POTASSIUM CHLORIDE/D5NS METAXALONE PRAVACHOL
drugDosage - dosage of the admission drug e.g.: 20.0000 400.000
drugUnit - picklist units of the admission drug e.g.: mg mg/kg patch
drugAdmitFrequency - picklist frequency with which the admission drug is administred e.g.: PRN twice a day at bedtime
drughiclseqno - a code representing the drug (hierarchical ingredient code list, HICL)
We recommend configuring the config.ini file to allow for connection to the database without specifying your password each time.
End of explanation
patientunitstayid = 2704494
query = query_schema +
select *
from admissiondrug
where patientunitstayid = {}
order by drugoffset
.format(patientunitstayid)
df = pd.read_sql_query(query, con)
df.head()
# Look at a subset of columns
cols = ['admissiondrugid','patientunitstayid','drugoffset','drugenteredoffset','drugname','drughiclseqno']
df[cols].head()
Explanation: Examine a single patient
End of explanation
drug = 'ZAROXOLYN'
query = query_schema +
select
admissiondrugid, patientunitstayid
, drugoffset, drugenteredoffset
, drugname, drughiclseqno
from admissiondrug
where drugname = '{}'
.format(drug)
df_drug = pd.read_sql_query(query, con)
df_drug.set_index('admissiondrugid',inplace=True)
print('{} unit stays with {}.'.format(df_drug['patientunitstayid'].nunique(), drug))
Explanation: Here we can see that these drugs were documented 2153 minutes (1.5 days) after ICU admission, but administered 87132 minutes (60 days) before ICU admission (thus, the negative offset). Since it's reasonable to assume the patient is still taking the drug (as this is the admissiondrug table), drugoffset can likely be treated as a start time for a prescription of the drug.
Identifying patients admitted on a single drug
Let's look for patients who were admitted on Zaroxolyn.
End of explanation
hicl = 3663
query = query_schema +
select
admissiondrugid, patientunitstayid
, drugoffset, drugenteredoffset
, drugname, drughiclseqno
from admissiondrug
where drughiclseqno = {}
.format(hicl)
df_hicl = pd.read_sql_query(query, con)
df_hicl.set_index('admissiondrugid',inplace=True)
print('{} unit stays with HICL = {}.'.format(df_hicl['patientunitstayid'].nunique(), hicl))
Explanation: Instead of using the drug name, we could try to use the HICL code.
End of explanation
# rows in HICL which are *not* in the drug dataframe
idx = ~df_hicl.index.isin(df_drug.index)
# count the drug names
df_hicl.loc[idx, 'drugname'].value_counts()
Explanation: As we can see, using the HICL returned many more observations. Let's take a look at a few:
End of explanation
query = query_schema + """
select
  pt.hospitalid
  , count(pt.patientunitstayid) as number_of_patients
  , count(ad.patientunitstayid) as number_of_patients_with_admdrug
from patient pt
left join admissiondrug ad
  on pt.patientunitstayid = ad.patientunitstayid
group by pt.hospitalid
"""
df = pd.read_sql_query(query, con)
df['data completion'] = df['number_of_patients_with_admdrug'] / df['number_of_patients'] * 100.0
df.sort_values('number_of_patients_with_admdrug', ascending=False, inplace=True)
df.head(n=10)
df[['data completion']].vgplot.hist(bins=10,
var_name='Number of hospitals',
value_name='Percent of patients with data')
Explanation: All the rows use the drug name "Metolazone". Metolazone is the generic name for the brand Zaroxolyn. This demonstrates the utility of using HICL codes to identify drugs - synonyms like these are very common and can be tedious to find.
Hospitals with data available
End of explanation |
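To quantify how widespread such synonyms are, a small follow-up query (an illustrative sketch reusing the connection and schema objects defined earlier) could count the number of distinct drug-name spellings recorded under each HICL code:
# Sketch: distinct drug-name spellings per HICL code (larger counts suggest more synonyms/brand names)
query = query_schema + """
select drughiclseqno, count(distinct drugname) as n_names
from admissiondrug
group by drughiclseqno
order by n_names desc
"""
df_names = pd.read_sql_query(query, con)
df_names.head()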
15,361 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Agile and Test-Driven Development
TDD Worked Example
Robert Haines, University of Manchester, UK
Adapted from "Test-Driven Development By Example", Kent Beck
Introduction
Very simple example
Implement a function to return the nth number in the Fibonacci sequence
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, ...
http
Step1: Step 1
Step2: Step 1
Step3: Step 2
Step4: Step 2
Step5: Step 3
Step6: Step 3
Step7: Step 4
Step8: Step 5
Step9: Step 5
Step10: Pause
How many tests are we going to write?
Just how big is the set of if statements going to get if we carry on like this?
Where do we stop?
Remember
Step11: Step 7
Step12: Step 8 | Python Code:
import unittest
def run_tests():
suite = unittest.TestLoader().loadTestsFromTestCase(TestFibonacci)
unittest.TextTestRunner().run(suite)
Explanation: Agile and Test-Driven Development
TDD Worked Example
Robert Haines, University of Manchester, UK
Adapted from "Test-Driven Development By Example", Kent Beck
Introduction
Very simple example
Implement a function to return the nth number in the Fibonacci sequence
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, ...
http://oeis.org/A000045
$$
F_0 = 0, \quad F_1 = 1, \quad F_n = F_{n-1} + F_{n-2}
$$
Step 0a: Local python setup
You need to do this if you're using python on the command line.
Create two directories
src
test
Add the src directory to PYTHONPATH
$ export PYTHONPATH=`pwd`/src
Step 0b: IPython setup
You need to do this if you're using this IPython Notebook.
The run_tests() method, below, is called at the end of each step to run the tests.
End of explanation
class TestFibonacci(unittest.TestCase):
def test_fibonacci(self):
self.assertEqual(0, fibonacci(0), "fibonacci(0) should equal 0")
run_tests()
Explanation: Step 1: Write a test (and run it)
End of explanation
def fibonacci(n):
return 0
run_tests()
Explanation: Step 1: Implement and re-test
End of explanation
class TestFibonacci(unittest.TestCase):
def test_fibonacci(self):
self.assertEqual(0, fibonacci(0), "fibonacci(0) should equal 0")
self.assertEqual(1, fibonacci(1), "fibonacci(1) should equal 1")
run_tests()
Explanation: Step 2: Write a test (and run it)
End of explanation
def fibonacci(n):
if n == 0: return 0
return 1
run_tests()
Explanation: Step 2: Implement and re-test
End of explanation
class TestFibonacci(unittest.TestCase):
def test_fibonacci(self):
self.assertEqual(0, fibonacci(0), "fibonacci(0) should equal 0")
self.assertEqual(1, fibonacci(1), "fibonacci(1) should equal 1")
self.assertEqual(1, fibonacci(2), "fibonacci(2) should equal 1")
run_tests()
Explanation: Step 3: Write a test (and run it)
End of explanation
class TestFibonacci(unittest.TestCase):
def test_fibonacci(self):
self.assertEqual(0, fibonacci(0), "fibonacci(0) should equal 0")
self.assertEqual(1, fibonacci(1), "fibonacci(1) should equal 1")
self.assertEqual(1, fibonacci(2), "fibonacci(2) should equal 1")
self.assertEqual(2, fibonacci(3), "fibonacci(3) should equal 2")
run_tests()
Explanation: Step 3: It works!
The current code outputs 1 whenever n is not 0. So this behaviour is correct.
Step 4: Write a test (and run it)
End of explanation
def fibonacci(n):
if n == 0: return 0
if n <= 2: return 1
return 2
run_tests()
Explanation: Step 4: Implement and re-test
End of explanation
class TestFibonacci(unittest.TestCase):
def test_fibonacci(self):
self.assertEqual(0, fibonacci(0), "fibonacci(0) should equal 0")
self.assertEqual(1, fibonacci(1), "fibonacci(1) should equal 1")
self.assertEqual(1, fibonacci(2), "fibonacci(2) should equal 1")
self.assertEqual(2, fibonacci(3), "fibonacci(3) should equal 2")
self.assertEqual(3, fibonacci(4), "fibonacci(4) should equal 3")
run_tests()
Explanation: Step 5: Write a test (and run it)
End of explanation
def fibonacci(n):
if n == 0: return 0
if n <= 2: return 1
if n == 3: return 2
return 3
run_tests()
Explanation: Step 5: Implement and re-test
End of explanation
def fibonacci(n):
if n == 0: return 0
if n <= 2: return 1
if n == 3: return 2
return 2 + 1
run_tests()
Explanation: Pause
How many tests are we going to write?
Just how big is the set of if statements going to get if we carry on like this?
Where do we stop?
Remember:
$$
F_0 = 0, \quad F_1 = 1, \quad F_n = F_{n-1} + F_{n-2}
$$
Can we reflect that in the code?
Step 6: Refactor and test
End of explanation
def fibonacci(n):
if n == 0: return 0
if n <= 2: return 1
return fibonacci(n - 1) + fibonacci(n - 2)
run_tests()
Explanation: Step 7: Refactor and test
End of explanation
def fibonacci(n):
if n == 0: return 0
if n == 1: return 1
return fibonacci(n - 1) + fibonacci(n - 2)
run_tests()
Explanation: Step 8: Refactor and test (and done)
End of explanation |
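A natural follow-on (not part of the original walkthrough) would be one more test against a later term of the sequence, to confirm that the recursive refactoring generalises beyond the cases used to drive it; a minimal sketch:
class TestFibonacci(unittest.TestCase):
    def test_fibonacci(self):
        # Spot-check a later term: F(10) = 55 with F(0) = 0, F(1) = 1
        self.assertEqual(55, fibonacci(10), "fibonacci(10) should equal 55")

run_tests()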
15,362 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
You are a Starbucks big data analyst (that’s a real job!) looking to find the next store into a Starbucks Reserve Roastery. These roasteries are much larger than a typical Starbucks store and have several additional features, including various food and wine options, along with upscale lounge areas. You'll investigate the demographics of various counties in the state of California, to determine potentially suitable locations.
<center>
<img src="https
Step1: You'll use the embed_map() function from the previous exercise to visualize your maps.
Step2: Exercises
1) Geocode the missing locations.
Run the next code cell to create a DataFrame starbucks containing Starbucks locations in the state of California.
Step3: Most of the stores have known (latitude, longitude) locations. But, all of the locations in the city of Berkeley are missing.
Step4: Use the code cell below to fill in these values with the Nominatim geocoder.
Note that in the tutorial, we used Nominatim() (from geopy.geocoders) to geocode values, and this is what you can use in your own projects outside of this course.
In this exercise, you will use a slightly different function Nominatim() (from learntools.geospatial.tools). This function was imported at the top of the notebook and works identically to the function from GeoPandas.
So, in other words, as long as
Step5: 2) View Berkeley locations.
Let's take a look at the locations you just found. Visualize the (latitude, longitude) locations in Berkeley in the OpenStreetMap style.
Step6: Considering only the five locations in Berkeley, how many of the (latitude, longitude) locations seem potentially correct (are located in the correct city)?
Step7: 3) Consolidate your data.
Run the code below to load a GeoDataFrame CA_counties containing the name, area (in square kilometers), and a unique id (in the "GEOID" column) for each county in the state of California. The "geometry" column contains a polygon with county boundaries.
Step8: Next, we create three DataFrames
Step9: Use the next code cell to join the CA_counties GeoDataFrame with CA_pop, CA_high_earners, and CA_median_age.
Name the resultant GeoDataFrame CA_stats, and make sure it has 8 columns
Step10: Now that we have all of the data in one place, it's much easier to calculate statistics that use a combination of columns. Run the next code cell to create a "density" column with the population density.
Step11: 4) Which counties look promising?
Collapsing all of the information into a single GeoDataFrame also makes it much easier to select counties that meet specific criteria.
Use the next code cell to create a GeoDataFrame sel_counties that contains a subset of the rows (and all of the columns) from the CA_stats GeoDataFrame. In particular, you should select counties where
Step12: 5) How many stores did you identify?
When looking for the next Starbucks Reserve Roastery location, you'd like to consider all of the stores within the counties that you selected. So, how many stores are within the selected counties?
To prepare to answer this question, run the next code cell to create a GeoDataFrame starbucks_gdf with all of the starbucks locations.
Step13: So, how many stores are in the counties you selected?
Step14: 6) Visualize the store locations.
Create a map that shows the locations of the stores that you identified in the previous question. | Python Code:
import math
import pandas as pd
import geopandas as gpd
#from geopy.geocoders import Nominatim # What you'd normally run
from learntools.geospatial.tools import Nominatim # Just for this exercise
import folium
from folium import Marker
from folium.plugins import MarkerCluster
from learntools.core import binder
binder.bind(globals())
from learntools.geospatial.ex4 import *
Explanation: Introduction
You are a Starbucks big data analyst (that’s a real job!) looking to find the next store into a Starbucks Reserve Roastery. These roasteries are much larger than a typical Starbucks store and have several additional features, including various food and wine options, along with upscale lounge areas. You'll investigate the demographics of various counties in the state of California, to determine potentially suitable locations.
<center>
<img src="https://i.imgur.com/BIyE6kR.png" width="450"><br/><br/>
</center>
Before you get started, run the code cell below to set everything up.
End of explanation
def embed_map(m, file_name):
from IPython.display import IFrame
m.save(file_name)
return IFrame(file_name, width='100%', height='500px')
Explanation: You'll use the embed_map() function from the previous exercise to visualize your maps.
End of explanation
# Load and preview Starbucks locations in California
starbucks = pd.read_csv("../input/geospatial-learn-course-data/starbucks_locations.csv")
starbucks.head()
Explanation: Exercises
1) Geocode the missing locations.
Run the next code cell to create a DataFrame starbucks containing Starbucks locations in the state of California.
End of explanation
# How many rows in each column have missing values?
print(starbucks.isnull().sum())
# View rows with missing locations
rows_with_missing = starbucks[starbucks["City"]=="Berkeley"]
rows_with_missing
Explanation: Most of the stores have known (latitude, longitude) locations. But, all of the locations in the city of Berkeley are missing.
End of explanation
# Create the geocoder
geolocator = Nominatim(user_agent="kaggle_learn")
# Your code here
____
# Check your answer
q_1.check()
#%%RM_IF(PROD)%%
def my_geocoder(row):
point = geolocator.geocode(row).point
return pd.Series({'Longitude': point.longitude, 'Latitude': point.latitude})
berkeley_locations = rows_with_missing.apply(lambda x: my_geocoder(x['Address']), axis=1)
starbucks.update(berkeley_locations)
q_1.assert_check_passed()
#%%RM_IF(PROD)%%
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
# Line below will give you solution code
#_COMMENT_IF(PROD)_
q_1.solution()
Explanation: Use the code cell below to fill in these values with the Nominatim geocoder.
Note that in the tutorial, we used Nominatim() (from geopy.geocoders) to geocode values, and this is what you can use in your own projects outside of this course.
In this exercise, you will use a slightly different function Nominatim() (from learntools.geospatial.tools). This function was imported at the top of the notebook and works identically to the function from GeoPandas.
So, in other words, as long as:
- you don't change the import statements at the top of the notebook, and
- you call the geocoding function as geocode() in the code cell below,
your code will work as intended!
End of explanation
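As a quick standalone illustration (separate from the exercise answer), a single free-text query can be geocoded and its point inspected directly; the query string here is just an example, not an address from the dataset.
# Illustrative one-off geocode using the geocoder created above
example = geolocator.geocode("Berkeley, CA")
print(example.point.latitude, example.point.longitude)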
# Create a base map
m_2 = folium.Map(location=[37.88,-122.26], zoom_start=13)
# Your code here: Add a marker for each Berkeley location
____
# Uncomment to see a hint
#_COMMENT_IF(PROD)_
q_2.a.hint()
# Show the map
embed_map(m_2, 'q_2.html')
#%%RM_IF(PROD)%%
# Create a base map
m_2 = folium.Map(location=[37.88,-122.26], zoom_start=13)
# Add a marker for each Berkeley location
for idx, row in starbucks[starbucks["City"]=='Berkeley'].iterrows():
Marker([row['Latitude'], row['Longitude']]).add_to(m_2)
# Show the map
embed_map(m_2, 'q_2.html')
# Get credit for your work after you have created a map
q_2.a.check()
# Uncomment to see our solution (your code may look different!)
#_COMMENT_IF(PROD)_
q_2.a.solution()
Explanation: 2) View Berkeley locations.
Let's take a look at the locations you just found. Visualize the (latitude, longitude) locations in Berkeley in the OpenStreetMap style.
End of explanation
# View the solution (Run this code cell to receive credit!)
q_2.b.solution()
Explanation: Considering only the five locations in Berkeley, how many of the (latitude, longitude) locations seem potentially correct (are located in the correct city)?
End of explanation
CA_counties = gpd.read_file("../input/geospatial-learn-course-data/CA_county_boundaries/CA_county_boundaries/CA_county_boundaries.shp")
CA_counties.crs = {'init': 'epsg:4326'}
CA_counties.head()
Explanation: 3) Consolidate your data.
Run the code below to load a GeoDataFrame CA_counties containing the name, area (in square kilometers), and a unique id (in the "GEOID" column) for each county in the state of California. The "geometry" column contains a polygon with county boundaries.
End of explanation
CA_pop = pd.read_csv("../input/geospatial-learn-course-data/CA_county_population.csv", index_col="GEOID")
CA_high_earners = pd.read_csv("../input/geospatial-learn-course-data/CA_county_high_earners.csv", index_col="GEOID")
CA_median_age = pd.read_csv("../input/geospatial-learn-course-data/CA_county_median_age.csv", index_col="GEOID")
Explanation: Next, we create three DataFrames:
- CA_pop contains an estimate of the population of each county.
- CA_high_earners contains the number of households with an income of at least $150,000 per year.
- CA_median_age contains the median age for each county.
End of explanation
# Your code here
CA_stats = ____
# Check your answer
q_3.check()
#%%RM_IF(PROD)%%
CA_stats = CA_counties.set_index("GEOID").join([CA_pop, CA_high_earners, CA_median_age])
q_3.assert_check_failed()
#%%RM_IF(PROD)%%
CA_stats = CA_counties.set_index("GEOID", inplace=False).join([CA_high_earners, CA_median_age, CA_pop]).reset_index()
q_3.assert_check_passed()
#%%RM_IF(PROD)%%
cols_to_add = CA_pop.join([CA_median_age, CA_high_earners]).reset_index()
CA_stats = CA_counties.merge(cols_to_add, on="GEOID")
q_3.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_3.hint()
#_COMMENT_IF(PROD)_
q_3.solution()
Explanation: Use the next code cell to join the CA_counties GeoDataFrame with CA_pop, CA_high_earners, and CA_median_age.
Name the resultant GeoDataFrame CA_stats, and make sure it has 8 columns: "GEOID", "name", "area_sqkm", "geometry", "population", "high_earners", and "median_age".
End of explanation
CA_stats["density"] = CA_stats["population"] / CA_stats["area_sqkm"]
Explanation: Now that we have all of the data in one place, it's much easier to calculate statistics that use a combination of columns. Run the next code cell to create a "density" column with the population density.
End of explanation
# Your code here
sel_counties = ____
# Check your answer
q_4.check()
#%%RM_IF(PROD)%%
sel_counties = CA_stats[((CA_stats.high_earners > 100000) & \
(CA_stats.median_age < 38.5) & \
(CA_stats.density > 285) & \
((CA_stats.median_age < 35.5) | \
(CA_stats.density > 1400) | \
(CA_stats.high_earners > 500000)))]
q_4.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_4.hint()
#_COMMENT_IF(PROD)_
q_4.solution()
Explanation: 4) Which counties look promising?
Collapsing all of the information into a single GeoDataFrame also makes it much easier to select counties that meet specific criteria.
Use the next code cell to create a GeoDataFrame sel_counties that contains a subset of the rows (and all of the columns) from the CA_stats GeoDataFrame. In particular, you should select counties where:
- there are at least 100,000 households making \$150,000 per year,
- the median age is less than 38.5, and
- the density of inhabitants is at least 285 (per square kilometer).
Additionally, selected counties should satisfy at least one of the following criteria:
- there are at least 500,000 households making \$150,000 per year,
- the median age is less than 35.5, or
- the density of inhabitants is at least 1400 (per square kilometer).
End of explanation
starbucks_gdf = gpd.GeoDataFrame(starbucks, geometry=gpd.points_from_xy(starbucks.Longitude, starbucks.Latitude))
starbucks_gdf.crs = {'init': 'epsg:4326'}
Explanation: 5) How many stores did you identify?
When looking for the next Starbucks Reserve Roastery location, you'd like to consider all of the stores within the counties that you selected. So, how many stores are within the selected counties?
To prepare to answer this question, run the next code cell to create a GeoDataFrame starbucks_gdf with all of the starbucks locations.
End of explanation
# Fill in your answer
num_stores = ____
# Check your answer
q_5.check()
#%%RM_IF(PROD)%%
locations_of_interest = gpd.sjoin(starbucks_gdf, sel_counties)
num_stores = len(locations_of_interest)
q_5.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_5.hint()
#_COMMENT_IF(PROD)_
q_5.solution()
Explanation: So, how many stores are in the counties you selected?
End of explanation
# Create a base map
m_6 = folium.Map(location=[37,-120], zoom_start=6)
# Your code here: show selected store locations
____
# Uncomment to see a hint
#_COMMENT_IF(PROD)_
q_6.hint()
# Show the map
embed_map(m_6, 'q_6.html')
#%%RM_IF(PROD)%%
# Create a base map
m_6 = folium.Map(location=[37,-120], zoom_start=6)
# Your code here: show selected store locations
mc = MarkerCluster()
locations_of_interest = gpd.sjoin(starbucks_gdf, sel_counties)
for idx, row in locations_of_interest.iterrows():
if not math.isnan(row['Longitude']) and not math.isnan(row['Latitude']):
mc.add_child(folium.Marker([row['Latitude'], row['Longitude']]))
m_6.add_child(mc)
# Uncomment to see a hint
#_COMMENT_IF(PROD)_
q_6.hint()
# Show the map
embed_map(m_6, 'q_6.html')
# Get credit for your work after you have created a map
q_6.check()
# Uncomment to see our solution (your code may look different!)
#_COMMENT_IF(PROD)_
q_6.solution()
Explanation: 6) Visualize the store locations.
Create a map that shows the locations of the stores that you identified in the previous question.
End of explanation |
15,363 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FASTQ format
The file is organized in 4 lines per read
Step1: Count the number of lines in the file (4 times the number of reads)
Step2: There are 40 M lines in the file, which means 10 M reads in total.
Quality check before mapping | Python Code:
for renz in ['HindIII', 'MboI']:
print renz
! head -n 4 /media/storage/FASTQs/K562_"$renz"_1.fastq
print ''
Explanation: FASTQ format
The file is organized in 4 lines per read:
1 - The header of the DNA sequence with the read id (the read length is optional)
2 - The DNA sequence
3 - The header of the sequence quality (this line could be either a repetition of line 1 or empty)
4 - The sequence quality (it is not human readble, but is provided as PHRED score. Check https://en.wikipedia.org/wiki/Phred_quality_score for more details)
End of explanation
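To make the four-line layout and the PHRED encoding concrete, a small hand-rolled parser for a single record might look like the sketch below; the Phred+33 offset is an assumption about the encoding of these particular files.
# Minimal sketch: read the first FASTQ record and decode its qualities (Phred+33 offset assumed)
fastq_path = '/media/storage/FASTQs/K562_HindIII_1.fastq'
with open(fastq_path) as handle:
    header   = handle.readline().rstrip()   # line 1: read id
    sequence = handle.readline().rstrip()   # line 2: DNA sequence
    plus     = handle.readline().rstrip()   # line 3: quality header (often just '+')
    quality  = handle.readline().rstrip()   # line 4: PHRED-encoded quality string
phred_scores = [ord(c) - 33 for c in quality]
print(header)
print(sequence[:50])
print(phred_scores[:10])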
! wc -l /media/storage/FASTQs/K562_HindIII_1.fastq
Explanation: Count the number of lines in the file (4 times the number of reads)
End of explanation
from pytadbit.utils.fastq_utils import quality_plot
for r_enz in ['HindIII', 'MboI']:
quality_plot('/media/storage/FASTQs/K562_{0}_1.fastq'.format(r_enz), r_enz=r_enz,
nreads=1000000, paired=False)
Explanation: There are 40 M lines in the file, which means 10 M reads in total.
Quality check before mapping
End of explanation |
15,364 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tight Binding program to compute the band structure of simple semiconductors.
Parameters taken from Vogl, Hjalmarson and Dow,
A Semiempirical Tight-Binding Theory of the Electronic Structure
of Semiconductors,
J. Phys. Chem. Sol. 44 (5), pp 365-378 (1983).
Step1: Interpolated SiGe band structure
The band structure for Si(1-x)Ge(x) is supposedly well approximated by a linear interpolation of the Si and Ge band structures.
Step2: Plotting misc parts of Brillouin zones
Compare Si and Ge CBs. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from numpy.linalg import eigvalsh
from collections import namedtuple
import TB
TB.band(TB.Si)
TB.band(TB.GaAs)
TB.band(TB.Ge)
Explanation: Tight Binding program to compute the band structure of simple semiconductors.
Parameters taken from Vogl, Hjalmarson and Dow,
A Semiempirical Tight-Binding Theory of the Electronic Structure
of Semiconductors,
J. Phys. Chem. Sol. 44 (5), pp 365-378 (1983).
End of explanation
def SiGe_band(x=0.2):
Si_data = TB.bandpts(TB.Si)
Ge_data = TB.bandpts(TB.Ge)
data = (1-x)*Si_data + x*Ge_data
TB.bandplt("SiGe, %%Ge=%.2f" % x,data)
return
SiGe_band(0)
SiGe_band(0.1)
SiGe_band(0.25)
SiGe_band(0.37)
Explanation: Interpolated SiGe band structure
The band structure for Si(1-x)Ge(x) is supposedly well approximated by a linear interpolation of the Si and Ge band structures.
End of explanation
Ge_CB = TB.bandpts(TB.Ge)[:,4]
Si_CB = TB.bandpts(TB.Si)[:,4]
nk = len(Si_CB)
n = (nk-2)//3
plt.plot(Si_CB)
plt.plot(Ge_CB)
TB.band_labels(n)
plt.axis(xmax=3*n+1)
plt.plot(Si_CB,label='Si')
plt.plot(Ge_CB,label='Ge')
plt.plot(0.9*Si_CB + 0.1*Ge_CB,label='Si_0.9 Ge_0.1')
TB.band_labels(n)
plt.axis(xmax=3*n+1)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
min_Si = min(Si_CB)
min_Ge = min(Ge_CB)
print(min_Si, min_Ge)
# min_Si - min_Ge = 0.12
Si_CB_shifted = Si_CB - min_Si + min_Ge + 0.12
Explanation: Plotting misc parts of Brillouin zones
Compare Si and Ge CBs.
End of explanation |
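As a rough extension of the interpolation idea, one could sweep the Ge fraction and track where the interpolated conduction-band minimum sits; a sketch using the Si_CB and Ge_CB arrays computed above (illustrative only, not from the original notebook):
# Sketch: interpolated conduction-band minimum as a function of Ge fraction x
xs = np.linspace(0, 1, 21)
cb_min = [min((1 - x) * Si_CB + x * Ge_CB) for x in xs]
plt.plot(xs, cb_min)
plt.xlabel('Ge fraction x')
plt.ylabel('Interpolated CB minimum')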
15,365 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
initialize the Cosmological models
Step1: Define proxy modelling
Using a mass proxy, we define the probability of observing a proxy value given a mass and redshift
$$
P(\log\lambda|M,z) = N(\mu(M,z), \sigma^2(M,z))
$$
the mean is
$$
\mu(M,z) = \mu_0 + a_\mu^M\log_{10}\frac{M}{M_0} + a_\mu^z\log_{10}\frac{1+z}{1+z_0}
$$
and the standard deviation is
$$
\sigma(M,z) = \sigma_0 + a_\sigma^M\log_{10}\frac{M}{M_0} + a_\sigma ^z\log_{10}\frac{1+z}{1+z_0}
$$
Step2: initialize the ClusterAbundance object | Python Code:
#CCL cosmology
cosmo_ccl = ccl.Cosmology(Omega_c = 0.30711 - 0.048254, Omega_b = 0.048254, h = 0.677, sigma8 = 0.8822714165197718, n_s=0.96, Omega_k = 0, transfer_function='eisenstein_hu')
#ccl_cosmo_set_high_prec (cosmo_ccl)
cosmo_numcosmo, dist, ps_lin, ps_nln, hmfunc = create_nc_obj (cosmo_ccl)
psf = hmfunc.peek_psf ()
Explanation: initialize the Cosmological models
End of explanation
#CosmoSim_proxy model
#M_0, z_0
theta_pivot = [3e14/0.71, 0.6]
#\mu_0, a_\mu^z, a_\mu^M
theta_mu = [3.19, -0.7, 2]
#\sigma_0, a_\sigma^z, a_\sigma^M
theta_sigma = [0.33, 0.,-0.08]
#Richness object
area = (0.25)*4*np.pi / 100.0
lnRl = 1.0
lnRu = 2.0
zl = 0.25
zu = 1.0
#Numcosmo_proxy model
cluster_z = nc.ClusterRedshift.new_from_name("NcClusterRedshiftNodist{'z-min': <%20.15e>, 'z-max':<%20.15e>}" % (zl, zu))
cluster_m = nc.ClusterMass.new_from_name("NcClusterMassAscaso{'M0':<%20.15e>,'z0':<%20.15e>,'lnRichness-min':<%20.15e>, 'lnRichness-max':<%20.15e>}" % (3e14/(0.71),0.6, lnRl, lnRu))
cluster_m.param_set_by_name('mup0', 3.19)
cluster_m.param_set_by_name('mup1', 2/np.log(10))
cluster_m.param_set_by_name('mup2', -0.7/np.log(10))
cluster_m.param_set_by_name('sigmap0', 0.33)
cluster_m.param_set_by_name('sigmap1', -0.08/np.log(10))
cluster_m.param_set_by_name('sigmap2', 0/np.log(10))
Explanation: Define proxy modelling
Using a mass proxy, we define the probability of observing a proxy value given a mass and redshift
$$
P(\log\lambda|M,z) = N(\mu(M,z), \sigma^2(M,z))
$$
the mean is
$$
\mu(M,z) = \mu_0 + a_\mu^M\log_{10}\frac{M}{M_0} + a_\mu^z\log_{10}\frac{1+z}{1+z_0}
$$
and the standard deviation is
$$
\sigma(M,z) = \sigma_0 + a_\sigma^M\log_{10}\frac{M}{M_0} + a_\sigma ^z\log_{10}\frac{1+z}{1+z_0}
$$
End of explanation
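For a quick numerical sanity check of the relations above, the mean and scatter can be evaluated directly with numpy using the same pivot and slope values; this is a standalone sketch, independent of the NumCosmo objects.
# Standalone sketch of the proxy mean/scatter relations (same parameter values as above)
def proxy_mu(M, z, M0=3e14/0.71, z0=0.6, mu0=3.19, a_mu_M=2.0, a_mu_z=-0.7):
    return mu0 + a_mu_M * np.log10(M / M0) + a_mu_z * np.log10((1 + z) / (1 + z0))
def proxy_sigma(M, z, M0=3e14/0.71, z0=0.6, sigma0=0.33, a_sigma_M=-0.08, a_sigma_z=0.0):
    return sigma0 + a_sigma_M * np.log10(M / M0) + a_sigma_z * np.log10((1 + z) / (1 + z0))
print(proxy_mu(1e14, 0.3), proxy_sigma(1e14, 0.3))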
#Numcosmo Cluster Abundance
#First we need to define the multiplicity function here we will use the tinker
mulf = nc.MultiplicityFuncTinker.new()
mulf.set_linear_interp (True)
mulf.set_mdef(nc.MultiplicityFuncMassDef.CRITICAL)
mulf.set_Delta(200)
#Second we need to construct a filtered power spectrum
hmf = nc.HaloMassFunction.new(dist,psf,mulf)
hmf.set_area(area)
ca = nc.ClusterAbundance.new(hmf,None)
mset = ncm.MSet.new_array([cosmo_numcosmo,cluster_m,cluster_z])
ncount = Nc.DataClusterNCount.new (ca, "NcClusterRedshiftNodist", "NcClusterMassAscaso")
ncount.catalog_load ("ncount_ascaso.fits")
cosmo_numcosmo.props.Omegac_fit = True
cosmo_numcosmo.props.w0_fit = True
cluster_m.props.mup0_fit = True
mset.prepare_fparam_map ()
ncount.set_binned (False)
dset = ncm.Dataset.new ()
dset.append_data (ncount)
lh = Ncm.Likelihood (dataset = dset)
fit = Ncm.Fit.new (Ncm.FitType.NLOPT, "ln-neldermead", lh, mset, Ncm.FitGradType.NUMDIFF_FORWARD)
fitmc = Ncm.FitMC.new (fit, Ncm.FitMCResampleType.FROM_MODEL, Ncm.FitRunMsgs.SIMPLE)
fitmc.set_nthreads (3)
fitmc.set_data_file ("ncount_ascaso_mc_unbinned.fits")
fitmc.start_run ()
fitmc.run_lre (1000, 5.0e-3)
fitmc.end_run ()
ntests = 100.0
mcat = fitmc.mcat
mcat.log_current_chain_stats ()
mcat.calc_max_ess_time (ntests, Ncm.FitRunMsgs.FULL);
mcat.calc_heidel_diag (ntests, 0.0, Ncm.FitRunMsgs.FULL);
mset.pretty_log ()
mcat.log_full_covar ()
mcat.log_current_stats ()
be, post_lnnorm_sd = mcat.get_post_lnnorm ()
lnevol, glnvol = mcat.get_post_lnvol (0.6827)
Ncm.cfg_msg_sepa ()
print ("# Bayesian evidence: % 22.15g +/- % 22.15g" % (be, post_lnnorm_sd))
print ("# 1 sigma posterior volume: % 22.15g" % lnevol)
print ("# 1 sigma posterior volume (Gaussian approximation): % 22.15g" % glnvol)
Explanation: initialize the ClusterAbundance object
End of explanation |
15,366 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
Step1: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
<img src='assets/convolutional_autoencoder.png' width=500px>
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose.
However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise
Step2: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
Step3: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise
Step4: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a suprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
End of explanation
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28,28,1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28,28,1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, filters=16, kernel_size=(3,3), padding='same', activation=tf.nn.relu, strides=1)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, strides=(2,2), pool_size=(2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, filters=8, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, strides=(2,2), pool_size=(2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, filters=8, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, strides=(2,2), pool_size=(2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, filters=8, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, filters=8, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, filters=16, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, filters=1, kernel_size=(3,3), padding='same', strides=1, activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name = 'decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
<img src='assets/convolutional_autoencoder.png' width=500px>
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose.
However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
End of explanation
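One way to keep the decoder tidy is to wrap the resize-then-convolve pattern described above in a small helper; this is an optional sketch using the same tf.layers/tf.image calls as the notebook, not part of the original solution.
# Sketch: reusable upsample-then-convolve block (nearest-neighbor resize followed by a convolution)
def upsample_conv(x, target_size, filters):
    x = tf.image.resize_nearest_neighbor(x, target_size)
    return tf.layers.conv2d(x, filters=filters, kernel_size=(3,3),
                            padding='same', activation=tf.nn.relu)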
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
End of explanation
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# Filled in following the explanation above: 32-32-16 depths, mirroring the earlier encoder/decoder pattern
conv1 = tf.layers.conv2d(inputs_, filters=32, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, strides=(2,2), pool_size=(2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, filters=32, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, strides=(2,2), pool_size=(2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, filters=16, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, strides=(2,2), pool_size=(2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, filters=16, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, filters=32, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, filters=32, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, filters=1, kernel_size=(3,3), padding='same', strides=1, activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
Explanation: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation |
15,367 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Download Data
Step1: Process Data
Split data into train and test set and preview data
Step2: Convert to lists in preparation for modeling
Step3: Pre-Process Data For Deep Learning
See this repo for documentation on the ktext package
Step4: Look at one example of processed issue bodies
Step5: Look at one example of processed issue titles
Step6: Serialize all of this to disk for later use
Step7: Define Model Architecture
Load the data from disk into variables
Step8: Define Model Architecture
Step9: Examine Model Architecture Summary
Step10: Train Model
Step11: See Example Results On Holdout Set
It is useful to see examples of real predictions on a holdout set to get a sense of the performance of the model. We will also evaluate the model numerically in a following section.
Step12: Evaluate Model | Python Code:
# Ensure that the github-issues-data volume is mounted in /mnt
!ls -la /mnt
# Set path for data dir
%env DATA_DIR=/mnt/github-issues-data
# Download the github-issues.zip training data to /mnt/github-issues-data
!wget --directory-prefix=${DATA_DIR} https://storage.googleapis.com/kubeflow-examples/github-issue-summarization-data/github-issues.zip
# Unzip the file into /mnt/github-issues-data directory
!unzip ${DATA_DIR}/github-issues.zip -d ${DATA_DIR}
# Create a symlink from <current_directory>/github-issues-data to /mnt/github-issues-data
!ln -sf ${DATA_DIR} github-issues-data
# Make sure that the github-issues-data symlink is created
!ls -lh github-issues-data/github_issues.csv
Explanation: Download Data
End of explanation
data_file='github-issues-data/github_issues.csv'
# read in data sample 2000 rows (for speed of tutorial)
# Set this to False to train on the entire dataset
use_sample_data=True
if use_sample_data:
training_data_size=2000
traindf, testdf = train_test_split(pd.read_csv(data_file).sample(n=training_data_size),
test_size=.10)
else:
traindf, testdf = train_test_split(pd.read_csv(data_file),test_size=.10)
#print out stats about shape of data
print(f'Train: {traindf.shape[0]:,} rows {traindf.shape[1]:,} columns')
print(f'Test: {testdf.shape[0]:,} rows {testdf.shape[1]:,} columns')
# preview data
traindf.head(3)
Explanation: Process Data
Split data into train and test set and preview data
End of explanation
train_body_raw = traindf.body.tolist()
train_title_raw = traindf.issue_title.tolist()
#preview output of first element
train_body_raw[0]
Explanation: Convert to lists in preparation for modeling
End of explanation
%reload_ext autoreload
%autoreload 2
from ktext.preprocess import processor
%%time
# Clean, tokenize, and apply padding / truncating such that each document length = 70
# also, retain only the top 8,000 words in the vocabulary and set the remaining words
# to 1 which will become common index for rare words
body_pp = processor(keep_n=8000, padding_maxlen=70)
train_body_vecs = body_pp.fit_transform(train_body_raw)
Explanation: Pre-Process Data For Deep Learning
See this repo for documentation on the ktext package
End of explanation
print('\noriginal string:\n', train_body_raw[0], '\n')
print('after pre-processing:\n', train_body_vecs[0], '\n')
# Instantiate a text processor for the titles, with some different parameters
# append_indicators = True appends the tokens '_start_' and '_end_' to each
# document
# padding = 'post' means that zero padding is appended to the end of the
# of the document (as opposed to the default which is 'pre')
title_pp = processor(append_indicators=True, keep_n=4500,
padding_maxlen=12, padding ='post')
# process the title data
train_title_vecs = title_pp.fit_transform(train_title_raw)
Explanation: Look at one example of processed issue bodies
End of explanation
print('\noriginal string:\n', train_title_raw[0])
print('after pre-processing:\n', train_title_vecs[0])
Explanation: Look at one example of processed issue titles
End of explanation
import dill as dpickle
import numpy as np
# Save the preprocessor
with open('body_pp.dpkl', 'wb') as f:
dpickle.dump(body_pp, f)
with open('title_pp.dpkl', 'wb') as f:
dpickle.dump(title_pp, f)
# Save the processed data
np.save('train_title_vecs.npy', train_title_vecs)
np.save('train_body_vecs.npy', train_body_vecs)
Explanation: Serialize all of this to disk for later use
End of explanation
from seq2seq_utils import load_decoder_inputs, load_encoder_inputs, load_text_processor
encoder_input_data, doc_length = load_encoder_inputs('train_body_vecs.npy')
decoder_input_data, decoder_target_data = load_decoder_inputs('train_title_vecs.npy')
num_encoder_tokens, body_pp = load_text_processor('body_pp.dpkl')
num_decoder_tokens, title_pp = load_text_processor('title_pp.dpkl')
Explanation: Define Model Architecture
Load the data from disk into variables
End of explanation
%matplotlib inline
from keras.models import Model
from keras.layers import Input, LSTM, GRU, Dense, Embedding, Bidirectional, BatchNormalization
from keras import optimizers
#arbitrarly set latent dimension for embedding and hidden units
latent_dim = 300
##### Define Model Architecture ######
########################
#### Encoder Model ####
encoder_inputs = Input(shape=(doc_length,), name='Encoder-Input')
# Word embeding for encoder (ex: Issue Body)
x = Embedding(num_encoder_tokens, latent_dim, name='Body-Word-Embedding', mask_zero=False)(encoder_inputs)
x = BatchNormalization(name='Encoder-Batchnorm-1')(x)
# Intermediate GRU layer (optional)
#x = GRU(latent_dim, name='Encoder-Intermediate-GRU', return_sequences=True)(x)
#x = BatchNormalization(name='Encoder-Batchnorm-2')(x)
# We do not need the `encoder_output` just the hidden state.
_, state_h = GRU(latent_dim, return_state=True, name='Encoder-Last-GRU')(x)
# Encapsulate the encoder as a separate entity so we can just
# encode without decoding if we want to.
encoder_model = Model(inputs=encoder_inputs, outputs=state_h, name='Encoder-Model')
seq2seq_encoder_out = encoder_model(encoder_inputs)
########################
#### Decoder Model ####
decoder_inputs = Input(shape=(None,), name='Decoder-Input') # for teacher forcing
# Word Embedding For Decoder (ex: Issue Titles)
dec_emb = Embedding(num_decoder_tokens, latent_dim, name='Decoder-Word-Embedding', mask_zero=False)(decoder_inputs)
dec_bn = BatchNormalization(name='Decoder-Batchnorm-1')(dec_emb)
# Set up the decoder, using `decoder_state_input` as initial state.
decoder_gru = GRU(latent_dim, return_state=True, return_sequences=True, name='Decoder-GRU')
decoder_gru_output, _ = decoder_gru(dec_bn, initial_state=seq2seq_encoder_out)
x = BatchNormalization(name='Decoder-Batchnorm-2')(decoder_gru_output)
# Dense layer for prediction
decoder_dense = Dense(num_decoder_tokens, activation='softmax', name='Final-Output-Dense')
decoder_outputs = decoder_dense(x)
########################
#### Seq2Seq Model ####
#seq2seq_decoder_out = decoder_model([decoder_inputs, seq2seq_encoder_out])
seq2seq_Model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
seq2seq_Model.compile(optimizer=optimizers.Nadam(lr=0.001), loss='sparse_categorical_crossentropy')
Explanation: Define Model Architecture
End of explanation
from seq2seq_utils import viz_model_architecture
seq2seq_Model.summary()
viz_model_architecture(seq2seq_Model)
Explanation: Examine Model Architecture Summary
End of explanation
from keras.callbacks import CSVLogger, ModelCheckpoint
script_name_base = 'tutorial_seq2seq'
csv_logger = CSVLogger('{:}.log'.format(script_name_base))
model_checkpoint = ModelCheckpoint('{:}.epoch{{epoch:02d}}-val{{val_loss:.5f}}.hdf5'.format(script_name_base),
save_best_only=True)
batch_size = 1200
epochs = 7
history = seq2seq_Model.fit([encoder_input_data, decoder_input_data], np.expand_dims(decoder_target_data, -1),
batch_size=batch_size,
epochs=epochs,
validation_split=0.12, callbacks=[csv_logger, model_checkpoint])
#save model
seq2seq_Model.save('seq2seq_model_tutorial.h5')
Explanation: Train Model
End of explanation
from seq2seq_utils import Seq2Seq_Inference
seq2seq_inf = Seq2Seq_Inference(encoder_preprocessor=body_pp,
decoder_preprocessor=title_pp,
seq2seq_model=seq2seq_Model)
# this method displays the predictions on random rows of the holdout set
seq2seq_inf.demo_model_predictions(n=50, issue_df=testdf)
Explanation: See Example Results On Holdout Set
It is useful to see examples of real predictions on a holdout set to get a sense of the performance of the model. We will also evaluate the model numerically in a following section.
End of explanation
#convenience function that generates predictions on holdout set and calculates BLEU Score
bleu_score = seq2seq_inf.evaluate_model(holdout_bodies=testdf.body.tolist(),
holdout_titles=testdf.issue_title.tolist(),
max_len_title=12)
print(f'BLEU Score (avg of BLEU 1-4) on Holdout Set: {bleu_score * 100}')
Explanation: Evaluate Model: BLEU Score
For machine-translation tasks such as this one, it is common to measure the accuracy of results using the BLEU score. The convenience function illustrated below uses NLTK's corpus_bleu; its output is an average of BLEU-1, BLEU-2, BLEU-3 and BLEU-4.
End of explanation |
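For reference, the underlying computation roughly amounts to a call like the toy sketch below (made-up token lists, not the holdout titles); BLEU-2 weights are used here just to keep the toy example non-degenerate.
# Toy illustration of corpus-level BLEU with NLTK (not the actual holdout evaluation)
from nltk.translate.bleu_score import corpus_bleu
references = [[['fix', 'crash', 'when', 'starting', 'the', 'app']]]
hypotheses = [['fix', 'crash', 'when', 'app', 'starts']]
print(corpus_bleu(references, hypotheses, weights=(0.5, 0.5)))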
15,368 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize source time courses (stcs)
This tutorial focuses on visualization of
Step1: Then, we read the stc from file
Step2: This is a
Step3: The SourceEstimate object is in fact a surface source estimate. MNE also
supports volume-based source estimates but more on that later.
We can plot the source estimate using the
Step4: You can also morph it to fsaverage and visualize it using a flatmap
Step5: Note that here we used initial_time=0.1, but we can also browse through
time using time_viewer=True.
In case mayavi is not available, we also offer a matplotlib
backend. Here we use verbose='error' to ignore a warning that not all
vertices were used in plotting.
Step6: Volume Source Estimates
We can also visualize volume source estimates (used for deep structures).
Let us load the sensor-level evoked data. We select the MEG channels
to keep things simple.
Step7: Then, we can load the precomputed inverse operator from a file.
Step8: The source estimate is computed using the inverse operator and the
sensor-space data.
Step9: This time, we have a different container
(
Step10: This too comes with a convenient plot method.
Step11: For this visualization, nilearn must be installed.
This visualization is interactive. Click on any of the anatomical slices
to explore the time series. Clicking on any time point will bring up the
corresponding anatomical map.
We could visualize the source estimate on a glass brain. Unlike the previous
visualization, a glass brain does not show us one slice but what we would
see if the brain was transparent like glass, and
Step12: You can also extract label time courses using volumetric atlases. Here we'll
use the built-in aparc.a2009s+aseg.mgz
Step13: Vector Source Estimates
If we choose to use pick_ori='vector' in
Step14: Dipole fits
For computing a dipole fit, we need to load the noise covariance, the BEM
solution, and the coregistration transformation files. Note that for the
other methods, these were already used to generate the inverse operator.
Step15: Dipoles are fit independently for each time point, so let us crop our time
series to visualize the dipole fit for the time point of interest.
Step16: Finally, we can visualize the dipole. | Python Code:
import os
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne import read_evokeds
data_path = sample.data_path()
sample_dir = os.path.join(data_path, 'MEG', 'sample')
subjects_dir = os.path.join(data_path, 'subjects')
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
fname_stc = os.path.join(sample_dir, 'sample_audvis-meg')
Explanation: Visualize source time courses (stcs)
This tutorial focuses on visualization of
:term:stcs <source estimates (abbr. stc)>.
Surface Source Estimates
First, we get the paths for the evoked data and the time courses (stcs).
End of explanation
stc = mne.read_source_estimate(fname_stc, subject='sample')
Explanation: Then, we read the stc from file
End of explanation
print(stc)
Explanation: This is a :class:SourceEstimate <mne.SourceEstimate> object
End of explanation
initial_time = 0.1
brain = stc.plot(subjects_dir=subjects_dir, initial_time=initial_time,
clim=dict(kind='value', lims=[3, 6, 9]))
Explanation: The SourceEstimate object is in fact a surface source estimate. MNE also
supports volume-based source estimates but more on that later.
We can plot the source estimate using the
:func:stc.plot <mne.SourceEstimate.plot> just as in other MNE
objects. Note that for this visualization to work, you must have mayavi
and pysurfer installed on your machine.
End of explanation
stc_fs = mne.compute_source_morph(stc, 'sample', 'fsaverage', subjects_dir,
smooth=5, verbose='error').apply(stc)
brain = stc_fs.plot(subjects_dir=subjects_dir, initial_time=initial_time,
clim=dict(kind='value', lims=[3, 6, 9]),
surface='flat', hemi='split', size=(1000, 500),
smoothing_steps=5, time_viewer=False,
add_data_kwargs=dict(
colorbar_kwargs=dict(label_font_size=10)))
# You can save a movie like the one on our documentation website with:
# brain.save_movie(time_dilation=20, tmin=0.05, tmax=0.16,
# interpolation='linear', framerate=10)
Explanation: You can also morph it to fsaverage and visualize it using a flatmap:
End of explanation
mpl_fig = stc.plot(subjects_dir=subjects_dir, initial_time=initial_time,
backend='matplotlib', verbose='error')
Explanation: Note that here we used initial_time=0.1, but we can also browse through
time using time_viewer=True.
In case mayavi is not available, we also offer a matplotlib
backend. Here we use verbose='error' to ignore a warning that not all
vertices were used in plotting.
End of explanation
evoked = read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
evoked.pick_types(meg=True, eeg=False).crop(0.05, 0.15)
# this risks aliasing, but these data are very smooth
evoked.decimate(10, verbose='error')
Explanation: Volume Source Estimates
We can also visualize volume source estimates (used for deep structures).
Let us load the sensor-level evoked data. We select the MEG channels
to keep things simple.
End of explanation
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-vol-7-meg-inv.fif'
inv = read_inverse_operator(fname_inv)
src = inv['src']
mri_head_t = inv['mri_head_t']
Explanation: Then, we can load the precomputed inverse operator from a file.
End of explanation
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
stc = apply_inverse(evoked, inv, lambda2, method)
del inv
Explanation: The source estimate is computed using the inverse operator and the
sensor-space data.
End of explanation
print(stc)
Explanation: This time, we have a different container
(:class:VolSourceEstimate <mne.VolSourceEstimate>) for the source time
course.
End of explanation
stc.plot(src, subject='sample', subjects_dir=subjects_dir)
Explanation: This too comes with a convenient plot method.
End of explanation
stc.plot(src, subject='sample', subjects_dir=subjects_dir, mode='glass_brain')
Explanation: For this visualization, nilearn must be installed.
This visualization is interactive. Click on any of the anatomical slices
to explore the time series. Clicking on any time point will bring up the
corresponding anatomical map.
We could visualize the source estimate on a glass brain. Unlike the previous
visualization, a glass brain does not show us one slice but what we would
see if the brain was transparent like glass, and
:term:maximum intensity projection is used:
End of explanation
fname_aseg = op.join(subjects_dir, 'sample', 'mri', 'aparc.a2009s+aseg.mgz')
label_names = mne.get_volume_labels_from_aseg(fname_aseg)
label_tc = stc.extract_label_time_course(
fname_aseg, src=src, trans=mri_head_t)
lidx, tidx = np.unravel_index(np.argmax(label_tc), label_tc.shape)
fig, ax = plt.subplots(1)
ax.plot(stc.times, label_tc.T, 'k', lw=1., alpha=0.5)
xy = np.array([stc.times[tidx], label_tc[lidx, tidx]])
xytext = xy + [0.01, 1]
ax.annotate(
label_names[lidx], xy, xytext, arrowprops=dict(arrowstyle='->'), color='r')
ax.set(xlim=stc.times[[0, -1]], xlabel='Time (s)', ylabel='Activation')
for key in ('right', 'top'):
ax.spines[key].set_visible(False)
fig.tight_layout()
Explanation: You can also extract label time courses using volumetric atlases. Here we'll
use the built-in aparc.a2009s+aseg.mgz:
End of explanation
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
inv = read_inverse_operator(fname_inv)
stc = apply_inverse(evoked, inv, lambda2, 'dSPM', pick_ori='vector')
brain = stc.plot(subject='sample', subjects_dir=subjects_dir,
initial_time=initial_time)
Explanation: Vector Source Estimates
If we choose to use pick_ori='vector' in
:func:apply_inverse <mne.minimum_norm.apply_inverse>
End of explanation
fname_cov = os.path.join(data_path, 'MEG', 'sample', 'sample_audvis-cov.fif')
fname_bem = os.path.join(subjects_dir, 'sample', 'bem',
'sample-5120-bem-sol.fif')
fname_trans = os.path.join(data_path, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
Explanation: Dipole fits
For computing a dipole fit, we need to load the noise covariance, the BEM
solution, and the coregistration transformation files. Note that for the
other methods, these were already used to generate the inverse operator.
End of explanation
evoked.crop(0.1, 0.1)
dip = mne.fit_dipole(evoked, fname_cov, fname_bem, fname_trans)[0]
Explanation: Dipoles are fit independently for each time point, so let us crop our time
series to visualize the dipole fit for the time point of interest.
End of explanation
dip.plot_locations(fname_trans, 'sample', subjects_dir)
Explanation: Finally, we can visualize the dipole.
End of explanation |
15,369 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The k-nearest neighbors (kNN) regression algorithm
Author
Step1: 1. The dataset
We describe next the regression task that we will use in the session. The dataset is an adaptation of the <a href=http
Step2: 1.1. Scatter plots
We can get a first rough idea about the regression task representing the scatter plot of each of the one-dimensional variables against the target data.
Step3: 2. Baseline estimation. Using the average of the training set labels
A first very simple method to build the regression model is to use the average of all the target values in the training set as the output of the model, discarding the value of the observation input vector.
This approach can be considered as a baseline, given that any other method making an effective use of the observation variables, statistically related to $s$, should improve the performance of this method.
The prediction is thus given by
Step4: for any input ${\bf x}$.
Exercise 1
Compute the mean square error over training and test sets, for the baseline estimation method.
Step5: Note that in the previous piece of code, function 'square_error' can be used when the second argument is a number instead of a vector with the same length as the first argument. The value will be subtracted from each of the components of the vector provided as the first argument.
Step7: 3. Unidimensional regression with the $k$-nn method
The principles of the $k$-nn method are the following
Step8: 3.1. Evolution of the error with the number of neighbors ($k$)
We see that a small $k$ results in a regression curve that exhibits many and large oscillations. The curve is capturing any noise that may be present in the training data, and <i>overfits</i> the training set. On the other hand, if we pick too large a $k$ (e.g., 200), the regression curve becomes too smooth, averaging out the values of the labels in the training set over large intervals of the observation variable.
The next code illustrates this effect by plotting the average training and test square errors as a function of $k$.
Step9: As we can see, the error initially decreases, achieving a minimum (in the test set) for some finite value of $k$ ($k\approx 10$ for the STOCK dataset). Increasing the value of $k$ beyond that value results in poorer performance.
Exercise 2
Analyze the training MSE for $k=1$. Why is it smaller than for any other $k$? Under which conditions will it be exactly zero?
Exercise 3
Modify the code above to visualize the square error from $k=1$ up to $k$ equal to the number of training instances. Can you relate the square error of the $k$-NN method with that of the baseline method for certain value of $k$?
3.2. Influence of the input variable
Having a look at the scatter plots, we can observe that some observation variables seem to have a more clear relationship with the target value. Thus, we can expect that not all variables are equally useful for the regression task. In the following plot, we carry out a study of the performance that can be achieved with each variable.
Note that, in practice, the test labels are not available for the selection of hyperparameter
$k$, so we should be careful about the conclusions of this experiment. A more realistic approach will be studied later when we introduce the concept of model validation.
Step10: 4. Multidimensional regression with the $k$-nn method
In the previous subsection, we have studied the performance of the $k$-nn method when using only one variable. Doing so was convenient, because it allowed us to plot the regression curves in a 2-D plot, and to get some insight about the consequences of modifying the number of neighbors.
For completeness, we evaluate now the performance of the $k$-nn method in this dataset when using all variables together. In fact, when designing a regression model, we should proceed in this manner, using all available information to make as accurate an estimation as possible. In this way, we can also account for correlations that might be present among the different observation variables, and that may carry very relevant information for the regression task.
For instance, in the STOCK dataset, it may be that the combination of the stock values of two airplane companies is more informative about the price of the target company, while the value for a single company is not enough.
<small> Also, in the CONCRETE dataset, it may be that for the particular problem at hand the combination of a large proportion of water and a small proportion of coarse grain is a clear indication of certain compressive strength of the material, while the proportion of water or coarse grain alone are not enough to get to that result.</small>
Step11: In this case, we can check that the average test square error is much lower than the error that was achieved when using only one variable, and also far better than the baseline method. It is also interesting to note that in this particular case the best performance is achieved for a small value of $k$, with the error increasing for larger values of the hyperparameter.
Nevertheless, as we discussed previously, these results should be taken carefully. How would we select the value of $k$, if test labels are (obviously) not available for model validation?
5. Hyperparameter selection via cross-validation
An inconvenience of the $k$-nn method is that the selection of $k$ influences the final error of the algorithm. In the previous experiments, we kept the value of $k$ that minimized the square error on the training set. However, we also noticed that the location of the minimum is not necessarily the same from the perspective of the test data. Ideally, we would like the designed regression model to work as well as possible on future unlabeled patterns that are not available during the training phase. This property is known as <i>generalization</i>. Fitting the training data is only pursued in the hope that we are also indirectly obtaining a model that generalizes well. In order to achieve this goal, there are some strategies that try to guarantee a correct generalization of the model. One such approach is known as <b>cross-validation</b>
Since using the test labels during the training phase is not allowed (they should be kept aside to simulate the future application of the regression model on unseen patterns), we need to figure out some way to improve our estimation of the hyperparameter that requires only training data. Cross-validation allows us to do so with the following steps
Step12: Exercise 4
Modify the previous code to use only one of the variables in the input dataset
- Following a cross-validation approach, select the best value of $k$ for the $k$-nn based on variable 0 only.
- Compute the test error for the selected value of $k$.
6. Scikit-learn implementation
In practice, most well-known machine learning methods are implemented and available for python. Probably, the most complete module for machine learning tools is <a href=http | Python Code:
# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pylab
# Packages used to read datasets
import scipy.io # To read matlab files
import pandas as pd # To read datasets in csv format
# For the student tests (only for python 2)
import sys
if sys.version_info.major==2:
from test_helper import Test
# That's default image size for this interactive session
pylab.rcParams['figure.figsize'] = 9, 6
Explanation: The k-nearest neighbors (kNN) regression algorithm
Author: Jerónimo Arenas García (jarenas@tsc.uc3m.es)
Jesús Cid Sueiro (jcid@tsc.uc3m.es)
Notebook version: 2.2 (Sep 08, 2017)
Changes: v.1.0 - First version
Changes: v.1.1 - Stock dataset included.
Changes: v.2.0 - Notebook for UTAD course. Advertising data incorporated
Changes: v.2.1 - Text and code revisited. General introduction removed.
Changes: v.2.2 - Compatibility with python 2 and 3.
End of explanation
# SELECT dataset
# Available options are 'stock', 'concrete' or 'advertising'
ds_name = 'stock'
# Let us start by loading the data into the workspace, and visualizing the dimensions of all matrices
if ds_name == 'stock':
# STOCK DATASET
data = scipy.io.loadmat('datasets/stock.mat')
X_tr = data['xTrain']
S_tr = data['sTrain']
X_tst = data['xTest']
S_tst = data['sTest']
elif ds_name == 'concrete':
# CONCRETE DATASET.
data = scipy.io.loadmat('datasets/concrete.mat')
X_tr = data['X_tr']
S_tr = data['S_tr']
X_tst = data['X_tst']
S_tst = data['S_tst']
elif ds_name == 'advertising':
# ADVERTISING DATASET
df = pd.read_csv('datasets/Advertising.csv', header=0)
X_tr = df.values[:150, 1:4]
S_tr = df.values[:150, [-1]] # The brackets around -1 is to make sure S_tr is a column vector, as in the other datasets
X_tst = df.values[150:, 1:4]
S_tst = df.values[150:, [-1]]
else:
print('Unknown dataset')
# Print the data dimension and the dataset sizes
print("SELECTED DATASET: " + ds_name)
print("---- The size of the training set is {0}, that is: {1} samples with dimension {2}.".format(
X_tr.shape, X_tr.shape[0], X_tr.shape[1]))
print("---- The target variable of the training set contains {0} samples with dimension {1}".format(
S_tr.shape[0], S_tr.shape[1]))
print("---- The size of the test set is {0}, that is: {1} samples with dimension {2}.".format(
X_tst.shape, X_tst.shape[0], X_tst.shape[1]))
print("---- The target variable of the test set contains {0} samples with dimension {1}".format(
S_tst.shape[0], S_tst.shape[1]))
Explanation: 1. The dataset
We describe next the regression task that we will use in the session. The dataset is an adaptation of the <a href=http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html> STOCK dataset</a>, taken originally from the <a href=http://lib.stat.cmu.edu/> StatLib Repository</a>. The goal of this problem is to predict the values of the stocks of a given airplane company, given the values of another 9 companies in the same day.
<small> (If you are reading this text from the python notebook with its full functionality, you can explore the results of the regression experiments using two alternative datasets:
The
<a href=https://archive.ics.uci.edu/ml/datasets/Concrete+Compressive+Strength>CONCRETE dataset</a> which is taken from the <a href=https://archive.ics.uci.edu/ml/index.html>Machine Learning Repository at the University of California Irvine</a>. To do so, just uncomment the block of code entitled CONCRETE and place comments in STOCK in the cell bellow. Remind that you must run the cells again to see the changes. The goal of the CONCRETE dataset tas is to predict the compressive strength of cement mixtures based on eight observed variables related to the composition of the mixture and the age of the material).
The Advertising dataset, taken from the book <a http://www-bcf.usc.edu/~gareth/ISL/data.html> An Introduction to Statistical Learning with applications in R</a>, with permission from the authors: G. James, D. Witten, T. Hastie and R. Tibshirani. The goal of this problem is to predict the sales of a given product, knowing the investment in different advertising sectors. More specifically, the input and output variables can be described as follows:
Input features:
TV: advertising dollars spent on TV for a single product in a given market (in thousands of dollars)
Radio: advertising dollars spent on Radio
Newspaper: advertising dollars spent on Newspaper
Response variable:
Sales: sales of a single product in a given market (in thousands of widgets)
End of explanation
pylab.subplots_adjust(hspace=0.2)
for idx in range(X_tr.shape[1]):
ax1 = plt.subplot(3,3,idx+1)
ax1.plot(X_tr[:,idx],S_tr,'.')
ax1.get_xaxis().set_ticks([])
ax1.get_yaxis().set_ticks([])
plt.show()
Explanation: 1.1. Scatter plots
We can get a first rough idea about the regression task representing the scatter plot of each of the one-dimensional variables against the target data.
End of explanation
# Mean of all target values in the training set
s_hat = np.mean(S_tr)
print(s_hat)
Explanation: 2. Baseline estimation. Using the average of the training set labels
A first very simple method to build the regression model is to use the average of all the target values in the training set as the output of the model, discarding the value of the observation input vector.
This approach can be considered as a baseline, given that any other method making an effective use of the observation variables, statistically related to $s$, should improve the performance of this method.
The prediction is thus given by
End of explanation
# We start by defining a function that calculates the average square error
def square_error(s, s_est):
# Squeeze is used to make sure that s and s_est have the appropriate dimensions.
y = np.mean(np.power((s - s_est), 2))
# y = np.mean(np.power((np.squeeze(s) - np.squeeze(s_est)), 2))
return y
# Mean square error of the baseline prediction over the training data
# MSE_tr = <FILL IN>
MSE_tr = square_error(S_tr, s_hat)
# Mean square error of the baseline prediction over the test data
# MSE_tst = <FILL IN>
MSE_tst = square_error(S_tst, s_hat)
print('Average square error in the training set (baseline method): {0}'.format(MSE_tr))
print('Average square error in the test set (baseline method): {0}'.format(MSE_tst))
Explanation: for any input ${\bf x}$.
Exercise 1
Compute the mean square error over training and test sets, for the baseline estimation method.
End of explanation
if sys.version_info.major == 2:
Test.assertTrue(np.isclose(MSE_tr, square_error(S_tr, s_hat)),'Incorrect value for MSE_tr')
Test.assertTrue(np.isclose(MSE_tst, square_error(S_tst, s_hat)),'Incorrect value for MSE_tst')
Explanation: Note that in the previous piece of code, function 'square_error' can be used when the second argument is a number instead of a vector with the same length as the first argument. The value will be subtracted from each of the components of the vector provided as the first argument.
End of explanation
# We implement unidimensional regression using the k-nn method
# In other words, the estimations are to be made using only one variable at a time
from scipy import spatial
var = 0 # pick a variable (e.g., any value from 0 to 8 for the STOCK dataset)
k = 15 # Number of neighbors
n_points = 1000 # Number of points in the 'x' axis (for representational purposes)
# For representational purposes, we will compute the output of the regression model
# in a series of equally spaced-points along the x-axis
grid_min = np.min([np.min(X_tr[:,var]), np.min(X_tst[:,var])])
grid_max = np.max([np.max(X_tr[:,var]), np.max(X_tst[:,var])])
X_grid = np.linspace(grid_min,grid_max,num=n_points)
def knn_regression(X1, S1, X2, k):
    """
    Compute the k-NN regression estimate for the observations contained in
    the rows of X2, for the training set given by the rows in X1 and the
    components of S1. k is the number of neighbours of the k-NN algorithm.
    """
if X1.ndim == 1:
X1 = np.asmatrix(X1).T
if X2.ndim == 1:
X2 = np.asmatrix(X2).T
distances = spatial.distance.cdist(X1,X2,'euclidean')
neighbors = np.argsort(distances, axis=0, kind='quicksort', order=None)
closest = neighbors[range(k),:]
est_values = np.zeros([X2.shape[0],1])
for idx in range(X2.shape[0]):
est_values[idx] = np.mean(S1[closest[:,idx]])
return est_values
est_tst = knn_regression(X_tr[:,var], S_tr, X_tst[:,var], k)
est_grid = knn_regression(X_tr[:,var], S_tr, X_grid, k)
plt.plot(X_tr[:,var], S_tr,'b.',label='Training points')
plt.plot(X_tst[:,var], S_tst,'rx',label='Test points')
plt.plot(X_grid, est_grid,'g-',label='Regression model')
plt.axis('tight')
plt.legend(loc='best')
plt.show()
Explanation: 3. Unidimensional regression with the $k$-nn method
The principles of the $k$-nn method are the following:
For each point where a prediction is to be made, find the $k$ closest neighbors to that point (in the training set)
Obtain the estimation averaging the labels corresponding to the selected neighbors
The number of neighbors is a hyperparameter that plays an important role in the performance of the method. You can test its influence by changing $k$ in the following piece of code. In particular, you can sart with $k=1$ and observe the efect of increasing the value of $k$.
End of explanation
var = 0
k_max = 60
k_max = np.minimum(k_max, X_tr.shape[0]) # k_max cannot be larger than the number of samples
#Be careful with the use of range, e.g., range(3) = [0,1,2] and range(1,3) = [1,2]
MSEk_tr = [square_error(S_tr, knn_regression(X_tr[:,var], S_tr, X_tr[:,var],k))
for k in range(1, k_max+1)]
MSEk_tst = [square_error(S_tst,knn_regression(X_tr[:,var], S_tr, X_tst[:,var],k))
for k in range(1, k_max+1)]
kgrid = np.arange(1, k_max+1)
plt.plot(kgrid, MSEk_tr,'bo', label='Training square error')
plt.plot(kgrid, MSEk_tst,'ro', label='Test square error')
plt.xlabel('$k$')
plt.axis('tight')
plt.legend(loc='best')
plt.show()
Explanation: 3.1. Evolution of the error with the number of neighbors ($k$)
We see that a small $k$ results in a regression curve that exhibits many and large oscillations. The curve is capturing any noise that may be present in the training data, and <i>overfits</i> the training set. On the other hand, if we pick too large a $k$ (e.g., 200), the regression curve becomes too smooth, averaging out the values of the labels in the training set over large intervals of the observation variable.
The next code illustrates this effect by plotting the average training and test square errors as a function of $k$.
End of explanation
k_max = 20
var_performance = []
k_values = []
for var in range(X_tr.shape[1]):
MSE_tr = [square_error(S_tr, knn_regression(X_tr[:,var], S_tr, X_tr[:, var], k))
for k in range(1, k_max+1)]
MSE_tst = [square_error(S_tst, knn_regression(X_tr[:,var], S_tr, X_tst[:, var], k))
for k in range(1, k_max+1)]
MSE_tr = np.asarray(MSE_tr)
MSE_tst = np.asarray(MSE_tst)
# We select the variable associated to the value of k for which the training error is minimum
pos = np.argmin(MSE_tr)
k_values.append(pos + 1)
var_performance.append(MSE_tst[pos])
plt.stem(range(X_tr.shape[1]), var_performance)
plt.title('Results of unidimensional regression ($k$NN)')
plt.xlabel('Variable')
plt.ylabel('Test MSE')
plt.figure(2)
plt.stem(range(X_tr.shape[1]), k_values)
plt.xlabel('Variable')
plt.ylabel('$k$')
plt.title('Selection of the hyperparameter')
plt.show()
Explanation: As we can see, the error initially decreases, achieving a minimum (in the test set) for some finite value of $k$ ($k\approx 10$ for the STOCK dataset). Increasing the value of $k$ beyond that value results in poorer performance.
Exercise 2
Analyze the training MSE for $k=1$. Why is it smaller than for any other $k$? Under which conditions will it be exactly zero?
Exercise 3
Modify the code above to visualize the square error from $k=1$ up to $k$ equal to the number of training instances. Can you relate the square error of the $k$-NN method with that of the baseline method for certain value of $k$?
3.2. Influence of the input variable
Having a look at the scatter plots, we can observe that some observation variables seem to have a more clear relationship with the target value. Thus, we can expect that not all variables are equally useful for the regression task. In the following plot, we carry out a study of the performance that can be achieved with each variable.
Note that, in practice, the test labels are not available for the selection of hyperparameter
$k$, so we should be careful about the conclusions of this experiment. A more realistic approach will be studied later when we introduce the concept of model validation.
End of explanation
k_max = 20
MSE_tr = [square_error(S_tr, knn_regression(X_tr, S_tr, X_tr, k)) for k in range(1, k_max+1)]
MSE_tst = [square_error(S_tst, knn_regression(X_tr, S_tr, X_tst, k)) for k in range(1, k_max+1)]
plt.plot(np.arange(k_max)+1, MSE_tr,'bo',label='Training square error')
plt.plot(np.arange(k_max)+1, MSE_tst,'ro',label='Test square error')
plt.xlabel('k')
plt.ylabel('Square error')
plt.legend(loc='best')
plt.show()
Explanation: 4. Multidimensional regression with the $k$-nn method
In the previous subsection, we have studied the performance of the $k$-nn method when using only one variable. Doing so was convenient, because it allowed us to plot the regression curves in a 2-D plot, and to get some insight about the consequences of modifying the number of neighbors.
For completeness, we evaluate now the performance of the $k$-nn method in this dataset when using all variables together. In fact, when designing a regression model, we should proceed in this manner, using all available information to make as accurate an estimation as possible. In this way, we can also account for correlations that might be present among the different observation variables, and that may carry very relevant information for the regression task.
For instance, in the STOCK dataset, it may be that the combination of the stock values of two airplane companies is more informative about the price of the target company, while the value for a single company is not enough.
<small> Also, in the CONCRETE dataset, it may be that for the particular problem at hand the combination of a large proportion of water and a small proportion of coarse grain is a clear indication of certain compressive strength of the material, while the proportion of water or coarse grain alone are not enough to get to that result.</small>
End of explanation
### This fragment of code runs k-nn with M-fold cross validation
# Parameters:
M = 5 # Number of folds for M-cv
k_max = 40 # Maximum value of the k-nn hyperparameter to explore
# First we compute the train error curve, that will be useful for comparative visualization.
MSE_tr = [square_error(S_tr, knn_regression(X_tr, S_tr, X_tr, k)) for k in range(1, k_max+1)]
## M-CV
# Obtain the indices for the different folds
n_tr = X_tr.shape[0]
permutation = np.random.permutation(n_tr)
# Split the indices in M subsets with (almost) the same size.
set_indices = {i: [] for i in range(M)}
i = 0
for pos in range(n_tr):
set_indices[i].append(permutation[pos])
i = (i+1) % M
# Obtain the validation errors
MSE_val = np.zeros((1,k_max))
for i in range(M):
val_indices = set_indices[i]
# Take out the val_indices from the set of indices.
tr_indices = list(set(permutation) - set(val_indices))
MSE_val_iter = [square_error(S_tr[val_indices],
knn_regression(X_tr[tr_indices, :], S_tr[tr_indices],
X_tr[val_indices, :], k))
for k in range(1, k_max+1)]
MSE_val = MSE_val + np.asarray(MSE_val_iter).T
MSE_val = MSE_val/M
# Select the best k based on the validation error
k_best = np.argmin(MSE_val) + 1
# Compute the final test MSE for the selected k
MSE_tst = square_error(S_tst, knn_regression(X_tr, S_tr, X_tst, k_best))
plt.plot(np.arange(k_max)+1, MSE_tr, 'bo', label='Training square error')
plt.plot(np.arange(k_max)+1, MSE_val.T, 'go', label='Validation square error')
plt.plot([k_best, k_best], [0, MSE_tst],'r-')
plt.plot(k_best, MSE_tst,'ro',label='Test error')
plt.legend(loc='best')
plt.show()
Explanation: In this case, we can check that the average test square error is much lower than the error that was achieved when using only one variable, and also far better than the baseline method. It is also interesting to note that in this particular case the best performance is achieved for a small value of $k$, with the error increasing for larger values of the hyperparameter.
Nevertheless, as we discussed previously, these results should be taken carefully. How would we select the value of $k$, if test labels are (obviously) not available for model validation?
5. Hyperparameter selection via cross-validation
An inconvenience of the $k$-nn method is that the selection of $k$ influences the final error of the algorithm. In the previous experiments, we kept the value of $k$ that minimized the square error on the training set. However, we also noticed that the location of the minimum is not necessarily the same from the perspective of the test data. Ideally, we would like the designed regression model to work as well as possible on future unlabeled patterns that are not available during the training phase. This property is known as <i>generalization</i>. Fitting the training data is only pursued in the hope that we are also indirectly obtaining a model that generalizes well. In order to achieve this goal, there are some strategies that try to guarantee a correct generalization of the model. One such approach is known as <b>cross-validation</b>
Since using the test labels during the training phase is not allowed (they should be kept aside to simulate the future application of the regression model on unseen patterns), we need to figure out some way to improve our estimation of the hyperparameter that requires only training data. Cross-validation allows us to do so with the following steps:
Split the training data into several (generally non-overlapping) subsets. If we use $M$ subsets, the method is referred to as $M$-fold cross-validation. If we consider each pattern a different subset, the method is usually referred to as leave-one-out (LOO) cross-validation.
Carry out the training of the system $M$ times. For each run, use a different partition as a <i>validation</i> set, and use the remaining partitions as the training set. Evaluate the performance for different choices of the hyperparameter (i.e., for different values of $k$ for the $k$-NN method).
Average the validation error over all partitions, and pick the hyperparameter that provided the minimum validation error.
Rerun the algorithm using all the training data, keeping the value of the parameter that came out of the cross-validation process.
<img src="https://chrisjmccormick.files.wordpress.com/2013/07/10_fold_cv.png">
End of explanation
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Fabian Pedregosa <fabian.pedregosa@inria.fr>
#
# License: BSD 3 clause (C) INRIA
###############################################################################
# Generate sample data
import numpy as np
import matplotlib.pyplot as plt
from sklearn import neighbors
np.random.seed(0)
X = np.sort(5 * np.random.rand(40, 1), axis=0)
T = np.linspace(0, 5, 500)[:, np.newaxis]
y = np.sin(X).ravel()
# Add noise to targets
y[::5] += 1 * (0.5 - np.random.rand(8))
###############################################################################
# Fit regression model
n_neighbors = 5
for i, weights in enumerate(['uniform', 'distance']):
knn = neighbors.KNeighborsRegressor(n_neighbors, weights=weights)
y_ = knn.fit(X, y).predict(T)
plt.subplot(2, 1, i + 1)
plt.scatter(X, y, c='k', label='data')
plt.plot(T, y_, c='g', label='prediction')
plt.axis('tight')
plt.legend()
plt.title("KNeighborsRegressor (k = %i, weights = '%s')" % (n_neighbors,
weights))
plt.show()
Explanation: Exercise 4
Modify the previous code to use only one of the variables in the input dataset
- Following a cross-validation approach, select the best value of $k$ for the $k$-nn based on variable 0 only.
- Compute the test error for the selected value of $k$.
6. Scikit-learn implementation
In practice, most well-known machine learning methods are implemented and available for python. Probably, the most complete module for machine learning tools is <a href=http://scikit-learn.org/stable/>Scikit-learn</a>. The following piece of code uses the method
KNeighborsRegressor
available in Scikit-learn. The example has been taken from <a href=http://scikit-learn.org/stable/auto_examples/neighbors/plot_regression.html>here</a>. As you can check, this routine allows us to build the estimation for a particular point using a weighted average of the targets of the neighbors:
To obtain the estimation at a point ${\bf x}$:
Find $k$ closest points to ${\bf x}$ in the training set
Average the corresponding targets, weighting each value according to the distance of each point to ${\bf x}$, so that closer points have a larger influence in the estimation.
End of explanation |
15,370 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CSAL4243
Step1: Plot data
Step2: Train model
Step3: Predict output using trained model
Step4: Plot results
Step5: Do it yourself
Step6: Predict labels using model and print it | Python Code:
import pandas as pd
from sklearn import linear_model
import matplotlib.pyplot as plt
# read data in pandas frame
dataframe = pd.read_csv('datasets/house_dataset1.csv')
# assign x and y
x_feature = dataframe[['Size']]
y_labels = dataframe[['Price']]
# check data by printing first few rows
dataframe.head()
Explanation: CSAL4243: Introduction to Machine Learning
Muhammad Mudassir Khan (mudasssir.khan@ucp.edu.pk)
Lecture 2: Linear Regression
Overview
What is Machine Learning?
Definition
The three different types of machine learning
Learning from labled data with supervised learning
Regression for predicting continuous outcomes
Classification for predicting class labels
Unsupervised Learning
Reinforcement Learning
Machine Learning pipeline
Goal of Machine Learning algorithm
# Linear Regression with one variable
Model Representation
Cost Function
Simple case when $\theta_0$ = 0
When both $\theta_0$ and $\theta_1$ can vary
So what is the price of the house?
Read data
Plot data
Train Model
Predict output using trained model
Plot results
Resources
Credits
<br>
<br>
What is Machine Learning? <a name="what-is-ml"></a>
Machine Learning is making computers/machines learn from data
Learning improves over time with more data
Definition
Mitchell (1997) defines Machine Learning as “A computer
program is said to learn from experience E with respect to some class of tasks T
and performance measure P , if its performance at tasks in T , as measured by P ,
improves with experience E .”
Example: playing checkers.
T = the task of playing checkers.
E = the experience of playing many games of checkers
P = the probability that the program will win the next game.
<br>
<br>
The three different types of machine learning
<img style="float: left;" src="images/01_01.png", width=500>
<br>
<br>
Supervised Learning
<img style="float: left;" src="images/01_02.png", width=500>
<br>
<br>
Regression for predicting continuous outcomes
<img style="float: left;" src="images/01_04.png", width=300> <img style="float: right;" src="images/01_11.png", width=500>
Classification for predicting class labels
<img style="float: left;" src="images/01_03.png", width=300> <img style="float: right;" src="images/01_12.png", width=500>
<br>
<br>
Unsupervised Learning
<img style="float: left;" src="images/01_06.png" width=300>
<br>
<br>
Reinforcement Learning
<img style="float: left;" src="images/01_05.png", width=300>
<br>
<br>
Machine Learning pipeline
<img style="float: left;" src="images/model.png">
x is called input variables or input features.
y is called output or target variable. Also sometimes known as label.
h is called hypothesis or model.
pair (x<sup>(i)</sup>,y<sup>(i)</sup>) is called a sample or training example
dataset of all training examples is called training set.
m is the number of samples in a dataset.
n is the number of features in a dataset excluding label.
<img style="float: left;" src="images/02_02.png", width=400>
<img style="float: right;" src="images/02_03.png", width=400>
Question ?
What is x<sup>(2)</sup> and y<sup>(2)</sup>?
<br>
<br>
Goal of Machine Learning algorithm
How well the algorithm will perform on unseen data.
Also called generalization.
<br>
<br>
Linear Regression with one variable
Model Representation
Model is represented by h<sub>$\theta$</sub>(x) or simply h(x)
For Linear regression with one input variable h(x) = $\theta$<sub>0</sub> + $\theta$<sub>1</sub>x
<img style="float: left;" src="images/02_01.png">
<img style="float: left;" src="images/02_05.png">
$\theta$<sub>0</sub> and $\theta$<sub>1</sub> are called weights or parameters.
Need to find $\theta$<sub>0</sub> and $\theta$<sub>1</sub> that maximize the performance of the model.
Question
<img style="float: left;" src="images/02_15.jpg", width=600>
<br>
<br>
<br>
Cost Function
<img style="float: left;" src="images/02_14.png", width=700>
Let $\hat{y}$ = h(x) = $\theta$<sub>0</sub> + $\theta$<sub>1</sub>x
Error in single sample (x,y) = $\hat{y}$ - y = h(x) - y
Cumulative error of all m samples = $\sum_{i=1}^{m} (h(x^i) - y^i)^2$
Finally mean error or cost function = J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$
<br>
<br>
Simple case when $\theta_0$ = 0
<img style="float: center;" src="images/02_06.png", width=700>
<br>
Question
<img style="float: center;" src="images/02_15.png", width=700>
<br>
<img style="float: center;" src="images/02_07.png", width=700>
<img style="float: center;" src="images/02_08.png", width=700>
<img style="float: center;" src="images/02_09.png", width=700>
<br>
When both $\theta_0$ and $\theta_1$ can vary
<br>
<img style="float: center;" src="images/02_10.png", width=700>
<img style="float: center;" src="images/02_11.png", width=700>
<img style="float: center;" src="images/02_12.png", width=700>
<img style="float: center;" src="images/02_13.png", width=700>
<br>
<br>
So what is the price of the house?
Read data
End of explanation
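As a quick illustration of the cost function introduced above, here is a minimal vectorized sketch of J(θ). It is not part of the original lecture notebook; the names compute_cost, theta0, theta1, x and y are placeholders chosen for this example, and it is the vectorized counterpart of the loop used later in the "Do it yourself" section.
# Minimal sketch (assumed example, not from the original notebook):
# vectorized cost J(theta) for h(x) = theta0 + theta1*x, with x and y as 1-D numpy arrays.
import numpy as np

def compute_cost(theta0, theta1, x, y):
    m = len(y)                        # number of training samples
    hx = theta0 + theta1 * x          # model predictions for all samples at once
    return np.sum((hx - y) ** 2) / (2 * m)   # J(theta) = 1/(2m) * sum of squared errors

# Example call with toy data:
# compute_cost(0.0, 0.5, np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.5, 3.5]))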
#visualize results
plt.scatter(x_feature, y_labels)
plt.show()
y_labels.shape
Explanation: Plot data
End of explanation
#train model on data
body_reg = linear_model.LinearRegression()
body_reg.fit(x_feature, y_labels)
print ('theta0 = ',body_reg.intercept_)
print ('theta1 = ',body_reg.coef_)
Explanation: Train model
End of explanation
hx = body_reg.predict(x_feature)
Explanation: Predict output using trained model
End of explanation
plt.scatter(x_feature, y_labels)
plt.plot(x_feature, hx)
plt.show()
Explanation: Plot results
End of explanation
theta0 = 0
theta1 = 0
inc = 1.0
#loop over all values of theta1 from 0 to 1000 with an increment of inc and find cost.
# The one with minimum cost is the answer.
m = x_feature.shape[0]
n = x_feature.shape[1]
# optimal values to be determined
minCost = 100000000000000
optimal_theta = 0
while theta1 < 1000:
cost = 0;
for indx in range(m):
hx = theta1*x_feature.values[indx,0] + theta0
cost += pow((hx - y_labels.values[indx,0]),2)
cost = cost/(2*m)
# print(theta1)
# print(cost)
if cost < minCost:
minCost = cost
optimal_theta = theta1
theta1 += inc
print ('theta0 = ', theta0)
print ('theta1 = ',optimal_theta)
Explanation: Do it yourself
End of explanation
hx = optimal_theta*x_feature
plt.scatter(x_feature, y_labels)
plt.plot(x_feature, hx)
plt.show()
Explanation: Predict labels using model and print it
End of explanation |
15,371 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CrowdTruth for Binary Choice Tasks
Step1: Declaring a pre-processing configuration
The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class
Step2: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Person Identification task
Step3: Pre-processing the input data
After declaring the configuration of our input file, we are ready to pre-process the crowd data
Step4: Computing the CrowdTruth metrics
The pre-processed data can then be used to calculate the CrowdTruth metrics
Step5: results is a dict object that contains the quality metrics for the video fragments, annotations and crowd workers.
The video fragment metrics are stored in results["units"]
Step6: The uqs column in results["units"] contains the video fragment quality scores, capturing the overall workers agreement over each video fragment. Here we plot its histogram
Step7: The unit_annotation_score column in results["units"] contains the video fragment-annotation scores, capturing the likelihood that an annotation is expressed in a video fragment. For each video fragment, we store a dictionary mapping each annotation to its video fragment-annotation score.
Step8: The worker metrics are stored in results["workers"]
Step9: The wqs column in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.
Step10: The annotation metrics are stored in results["annotations"]. The aqs column contains the annotation quality scores, capturing the overall worker agreement over one relation. | Python Code:
import pandas as pd
test_data = pd.read_csv("../data/person-video-binary-choice.csv")
test_data.head()
Explanation: CrowdTruth for Binary Choice Tasks: Person Identification in Video
In this tutorial, we will apply CrowdTruth metrics to a binary choice crowdsourcing task for Person Identification in video fragments. The workers were asked to watch a short video fragment of about 3-5 seconds and then decide whether there is any person that appears in the video fragment. The task was executed on FigureEight. For more crowdsourcing annotation task examples, click here.
To replicate this experiment, the code used to design and implement this crowdsourcing annotation template is available here: template, css, javascript.
This is a screenshot of the task as it appeared to workers:
A sample dataset for this task is available in this file, containing raw output from the crowd on FigureEight. Download the file and place it in a folder named data that has the same root as this notebook. Now you can check your data:
End of explanation
import crowdtruth
from crowdtruth.configuration import DefaultConfig
Explanation: Declaring a pre-processing configuration
The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class:
End of explanation
class TestConfig(DefaultConfig):
inputColumns = ["videolocation", "subtitles", "imagetags", "subtitletags"]
outputColumns = ["selected_answer"]
# processing of a closed task
open_ended_task = False
annotation_vector = ["yes", "no"]
def processJudgments(self, judgments):
# pre-process output to match the values in annotation_vector
for col in self.outputColumns:
# transform to lowercase
judgments[col] = judgments[col].apply(lambda x: str(x).lower())
return judgments
Explanation: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Person Identification task:
inputColumns: list of input columns from the .csv file with the input data
outputColumns: list of output columns from the .csv file with the answers from the workers
open_ended_task: boolean variable defining whether the task is open-ended (i.e. the possible crowd annotations are not known beforehand, like in the case of free text input); in the task that we are processing, workers pick the answers from a pre-defined list, therefore the task is not open ended, and this variable is set to False
annotation_vector: list of possible crowd answers, mandatory to declare when open_ended_task is False; for our task, this is a list containing true and false values
processJudgments: method that defines processing of the raw crowd data; for this task, we process the crowd answers to correspond to the values in annotation_vector
The complete configuration class is declared below:
End of explanation
data, config = crowdtruth.load(
file = "../data/person-video-binary-choice.csv",
config = TestConfig()
)
data['judgments'].head()
Explanation: Pre-processing the input data
After declaring the configuration of our input file, we are ready to pre-process the crowd data:
End of explanation
results = crowdtruth.run(data, config)
Explanation: Computing the CrowdTruth metrics
The pre-processed data can then be used to calculate the CrowdTruth metrics:
End of explanation
results["units"].head()
Explanation: results is a dict object that contains the quality metrics for the video fragments, annotations and crowd workers.
The video fragment metrics are stored in results["units"]:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(results["units"]["uqs"])
plt.xlabel("Video Fragment Quality Score")
plt.ylabel("Video Fragment")
Explanation: The uqs column in results["units"] contains the video fragment quality scores, capturing the overall workers agreement over each video fragment. Here we plot its histogram:
End of explanation
results["units"]["unit_annotation_score"].head()
Explanation: The unit_annotation_score column in results["units"] contains the video fragment-annotation scores, capturing the likelihood that an annotation is expressed in a video fragment. For each video fragment, we store a dictionary mapping each annotation to its video fragment-annotation score.
End of explanation
results["workers"].head()
Explanation: The worker metrics are stored in results["workers"]:
End of explanation
plt.hist(results["workers"]["wqs"])
plt.xlabel("Worker Quality Score")
plt.ylabel("Workers")
Explanation: The wqs column in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.
End of explanation
results["annotations"]
Explanation: The annotation metrics are stored in results["annotations"]. The aqs column contains the annotation quality scores, capturing the overall worker agreement over one relation.
End of explanation |
15,372 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Diamond Quality Analysis
Frame
If you want to buy one of the best diamonds in the world, what are the different aspects you want to look at? Let's find out how a stone is turned into a precious gem.
It is the 4C's that differentiate each stone
Carat
Cut
Clarity
Colour
Step1: Acquire
Step2: Carat
The carat weight measures the mass of a diamond. One carat is defined as 200 milligrams (about 0.007 ounce avoirdupois). The point unit—equal to one one-hundredth of a carat (0.01 carat, or 2 mg)—is commonly used for diamonds of less than one carat. All else being equal, the price per carat increases with carat weight, since larger diamonds are both rarer and more desirable for use as gemstones.
Step3: Cut
Diamond cutting is the art and science of creating a gem-quality diamond out of mined rough. The cut of a diamond describes the manner in which a diamond has been shaped and polished from its beginning form as a rough stone to its final gem proportions. The cut of a diamond describes the quality of workmanship and the angles to which a diamond is cut. Often diamond cut is confused with "shape".
Step4: Clarity
Diamond clarity is a quality of diamonds relating to the existence and visual appearance of internal characteristics of a diamond called inclusions, and surface defects called blemishes. Inclusions may be crystals of a foreign material or another diamond crystal, or structural imperfections such as tiny cracks that can appear whitish or cloudy. The number, size, color, relative location, orientation, and visibility of inclusions can all affect the relative clarity of a diamond. A clarity grade is assigned based on the overall appearance of the stone under ten times magnification.
Step5: Colour
The finest quality as per color grading is totally colorless, which is graded as "D" color diamond across the globe, meaning it is absolutely free from any color. The next grade has a very slight trace of color, which can be observed by any expert diamond valuer/grading laboratory. However when studded in jewellery these very light colored diamonds do not show any color or it is not possible to make out color shades. These are graded as E color or F color diamonds.
Refine
To perform any kind of visual exploration, we will need numeric values for the cut categories. We need to create a new column with numeric values for the cut categories
Step6: We need to convert values in color column to numeric values.
Step7: Exercise
Create a column clarity_num with clarity as numeric value
What is the highest price of D colour diamonds?
Visual Exploration
We start with histograms. In a histogram, the number of bins is an important parameter
Step8: Plotting the price after log transformation
Transformations are done to address skewness in the distribution of a variable.
Step9: Exercise
Plot bar charts for colour_num, cut_num, clarity_num
Two variable exploration
Step10: Exercise
Plot the price against carat after transformation
Exercise
Step11: Linear Regression
Simple Linear Regression
Simple linear regression is an approach for predicting a quantitative response using a single feature (or "predictor" or "input variable"). It takes the following form
Step12: sklearn expects the dimension of the features to be a 2d array
Step13: Train the model
Step14: co-efficient - β1
Step15: intercept - β0
Step16: Interpreting Model Coefficients
How do we interpret the carat coefficient (β1)?
An increase in carat_log is associated with a 1.67581673 increase in price_log.
Note that if an increase in carat was associated with a decrease in price, β1 would be negative.
Using the Model for Prediction
y=β0+β1x
Step17: How Well Does the Model Fit the data?
The most common way to evaluate the overall fit of a linear model is by the R-squared value. R-squared is the proportion of variance explained, meaning the proportion of variance in the observed data that is explained by the model, or the reduction in error over the null model. (The null model just predicts the mean of the observed response, and thus it has an intercept and no slope.)
R-squared is between 0 and 1, and higher is better because it means that more variance is explained by the model. Here's an example of what R-squared "looks like"
Step18: Using multi variable for regression
Step19: We need to do one hot encoding on cut
Step20: Concatenate the two dataframes
Step21: To run the model, we drop one of the dummy variables | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (13,8)
Explanation: Diamond Quality Analysis
Frame
If you want to buy one of the best diamonds in the world, what are the different aspects you want to look at? Let's find out how a stone is turned into a precious gem.
It is the 4C's that differentiate each stone
Carat
Cut
Clarity
Colour
End of explanation
df = pd.read_csv("./diamonds.csv")
df.head()
Explanation: Acquire
End of explanation
df.carat.describe()
Explanation: Carat
The carat weight measures the mass of a diamond. One carat is defined as 200 milligrams (about 0.007 ounce avoirdupois). The point unit—equal to one one-hundredth of a carat (0.01 carat, or 2 mg)—is commonly used for diamonds of less than one carat. All else being equal, the price per carat increases with carat weight, since larger diamonds are both rarer and more desirable for use as gemstones.
End of explanation
df.cut.unique()
Explanation: Cut
Diamond cutting is the art and science of creating a gem-quality diamond out of mined rough. The cut of a diamond describes the manner in which a diamond has been shaped and polished from its beginning form as a rough stone to its final gem proportions. The cut of a diamond describes the quality of workmanship and the angles to which a diamond is cut. Often diamond cut is confused with "shape".
End of explanation
df.clarity.unique()
Explanation: Clarity
Diamond clarity is a quality of diamonds relating to the existence and visual appearance of internal characteristics of a diamond called inclusions, and surface defects called blemishes. Inclusions may be crystals of a foreign material or another diamond crystal, or structural imperfections such as tiny cracks that can appear whitish or cloudy. The number, size, color, relative location, orientation, and visibility of inclusions can all affect the relative clarity of a diamond. A clarity grade is assigned based on the overall appearance of the stone under ten times magnification.
End of explanation
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
df["cut_num"] = encoder.fit_transform(df.cut)
df.head()
df.color.unique()
Explanation: Colour
The finest quality as per color grading is totally colorless, which is graded as "D" color diamond across the globe, meaning it is absolutely free from any color. The next grade has a very slight trace of color, which can be observed by any expert diamond valuer/grading laboratory. However when studded in jewellery these very light colored diamonds do not show any color or it is not possible to make out color shades. These are graded as E color or F color diamonds.
Refine
To perform any kind of visual exploration, we will need numeric values for the cut categories. We need to create a new column with numeric values for the cut categories
End of explanation
encoder = LabelEncoder()
df["color_num"] = encoder.fit_transform(df.color)
df.head(20)
Explanation: We need to convert values in color column to numeric values.
End of explanation
df.price.hist(bins=100)
Explanation: Exercise
Create a column clarity_num with clarity as numeric value
What is the highest price of D colour diamonds?
Visual Exploration
We start with histograms. In a histogram, the number of bins is an important parameter
End of explanation
df["price_log"] = np.log10(df.price)
df.price_log.hist(bins=100)
Explanation: Plotting the price after log transformation
Transformations are done to address skewness in the distribution of a variable.
End of explanation
df.plot(x="carat", y="price", kind="scatter")
Explanation: Exercise
Plot bar charts for colour_num, cut_num, clarity_num
Two variable exploration
End of explanation
df[df.cut_num == 0].plot(x="carat_log", y="price_log", kind="scatter", color="red", label="Fair")
ax = df[df.cut_num == 0].plot(x="carat_log", y="price_log", kind="scatter", color="red", label="Fair")
df[df.cut_num == 1].plot(x="carat_log", y="price_log", kind="scatter", color="green", label="Good", ax=ax)
Explanation: Exercise
Plot the price against carat after transformation
Exercise:
Plot the log transforms for the other C's against price
Compare how the price changes with respect to carat and colour of a diamond
End of explanation
from sklearn.linear_model import LinearRegression
linear_model = LinearRegression()
Explanation: Linear Regression
Simple Linear Regression
Simple linear regression is an approach for predicting a quantitative response using a single feature (or "predictor" or "input variable"). It takes the following form:
y=β0+β1x
What does each term represent?
y is the response
x is the feature
β0 is the intercept
β1 is the coefficient for x
Together, β0 and β1 are called the model coefficients. To create your model, you must "learn" the values of these coefficients. And once we've learned these coefficients, we can use the model to predict Price!
Estimating ("Learning") Model Coefficients
Generally speaking, coefficients are estimated using the least squares criterion, which means we find the line (mathematically) which minimizes the sum of squared residuals (or "sum of squared errors"):
What elements are present in the diagram?
The black dots are the observed values of x and y.
The blue line is our least squares line.
The red lines are the residuals, which are the distances between the observed values and the least squares line.
How do the model coefficients relate to the least squares line?
* β0 is the intercept (the value of y when x=0)
* β1 is the slope (the change in y divided by change in x)
Here is a graphical depiction of those calculations:
Building the linear model
End of explanation
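To make the least squares idea above concrete, here is a small sketch (not part of the original notebook) that computes β1 and β0 with the closed-form formulas on hypothetical toy arrays; sklearn's LinearRegression used below estimates the same quantities.
# Minimal sketch (assumed example): closed-form least squares estimates for simple linear regression.
# beta1 = sum((x - x_mean)*(y - y_mean)) / sum((x - x_mean)**2),  beta0 = y_mean - beta1*x_mean
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])      # hypothetical feature values
y = np.array([2.1, 3.9, 6.2, 8.1])      # hypothetical responses
beta1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
beta0 = y.mean() - beta1 * x.mean()
# beta0 and beta1 are the intercept and slope that minimize the sum of squared residuals.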
X_train = df["carat_log"]
X_train.shape
X_train = X_train.reshape(X_train.shape[0],1)
X_train.shape
y_train = df["price_log"]
Explanation: sklearn expects the dimension of the features to be a 2d array
End of explanation
linear_model.fit(X_train, y_train)
Explanation: Train the model
End of explanation
linear_model.coef_
Explanation: co-efficient - β1
End of explanation
linear_model.intercept_
Explanation: intercept - β0
End of explanation
df[['carat_log','price_log']].head()
df[['carat_log','price_log']].tail()
X_test = pd.Series([df.carat_log.min(), df.carat_log.max()])
X_test = X_test.reshape(X_test.shape[0],1)
predicted = linear_model.predict(X_test)
print predicted
# first, plot the observed data
df.plot(kind='scatter', x='carat_log', y='price_log')
# then, plot the least squares line
plt.plot(X_test, predicted, c='red', linewidth=2)
Explanation: Interpreting Model Coefficients
How do we interpret the carat coefficient (β1)?
An increase in carat_log is associated with a 1.67581673 increase in price_log.
Note that if an increase in carat was associated with a decrease in price, β1 would be negative.
Using the Model for Prediction
y=β0+β1x
End of explanation
test_samples = pd.Series(predicted)
test_samples = test_samples.reshape(test_samples.shape[0],1)
true_values = pd.Series([3.440437, 2.513218])
true_values = true_values.reshape(true_values.shape[0],1)
linear_model.score(X_train, df["price_log"].reshape(df["price_log"].shape[0],1))
linear_model.coef_
linear_model.intercept_
Explanation: How Well Does the Model Fit the data?
The most common way to evaluate the overall fit of a linear model is by the R-squared value. R-squared is the proportion of variance explained, meaning the proportion of variance in the observed data that is explained by the model, or the reduction in error over the null model. (The null model just predicts the mean of the observed response, and thus it has an intercept and no slope.)
R-squared is between 0 and 1, and higher is better because it means that more variance is explained by the model. Here's an example of what R-squared "looks like":
Goodness of fit - R2 score
End of explanation
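As a sketch of what the score() call above reports, R-squared can also be computed by hand as 1 - SS_res/SS_tot. The snippet below is illustrative only (it assumes linear_model, X_train and y_train are already defined as in the cells above) and is not from the original notebook.
# Minimal sketch (assumed example): manual R-squared for the simple model.
import numpy as np

y_true = np.asarray(y_train)
y_pred = linear_model.predict(X_train).ravel()
ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
r_squared = 1 - ss_res / ss_tot                  # should agree with linear_model.score(X_train, y_train)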
df.head()
Explanation: Using multi variable for regression
End of explanation
df_encoded = pd.get_dummies(df["cut_num"])
df_encoded.columns = ['cut_num0', 'cut_num1', 'cut_num2', 'cut_num3', 'cut_num4']
df_encoded.head()
df.head()
Explanation: We need to do one hot encoding on cut
End of explanation
frames = [df, df_encoded]
df2 = pd.concat(frames, axis=1)
df2.head()
Explanation: Concatenate the two dataframes
End of explanation
X4_train = df2[["carat_log", "cut_num0", "cut_num1", "cut_num2", "cut_num3"]]
X4_train.head()
y4_train = df2["price_log"]
y4_train.head()
X4_train.shape
linear_model2 = LinearRegression()
linear_model2.fit(X4_train, y4_train)
linear_model2.coef_
pd.set_option("precision", 4)
pd.DataFrame(linear_model2.coef_)
linear_model2.intercept_
X4_test = pd.Series([-0.508638, 0, 1, 0, 0])
X4_test
X4_test.shape
linear_model2.predict(X4_test.reshape(1,5))
predicted = linear_model2.predict(X4_test)
print predicted
y_train[4]
true_values = pd.Series([y_train[4]])
X4_test
X4_test.shape
X4_test = X4_test.reshape(1,5)
X4_test
true_values
linear_model2.score(X4_test, true_values)
linear_model2.coef_
linear_model.intercept_
Explanation: To run the model, we drop one of the dummy variables
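As a side note, pandas can drop one dummy level for you; a small sketch of the same idea (drop_first=True is standard pandas, and the resulting column names will differ slightly from the manual renaming above):
df_encoded_alt = pd.get_dummies(df["cut_num"], prefix="cut_num", drop_first=True)
df_encoded_alt.head()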
End of explanation |
15,373 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nested Statements and Scope
Now that we have gone over writing our own functions, it's important to understand how Python deals with the variable names you assign. When you create a variable name in Python the name is stored in a namespace. Variable names also have a scope, and the scope determines the visibility of that variable name to other parts of your code.
Let's start with a quick thought experiment; imagine the following code
Step1: What do you imagine the output of printer() is? 25 or 50? What is the output of print x? 25 or 50?
Step2: Interesting! But how does Python know which x you're referring to in your code? This is where the idea of scope comes in. Python has a set of rules it follows to decide what variables (such as x in this case) you are referencing in your code. Let's break down the rules
Step3: Enclosing function locals
This occurs when we have a function inside a function (nested functions)
Step4: Note how Sammy was used, because the hello() function was enclosed inside of the greet function!
Global
Luckily in Jupyter a quick way to test for global variables is to see if another cell recognizes the variable!
Step5: Built-in
These are the built-in function names in Python (don't overwrite these!)
Step6: Local Variables
When you declare variables inside a function definition, they are not related in any way to other variables with the same names used outside the function - i.e. variable names are local to the function. This is called the scope of the variable. All variables have the scope of the block they are declared in starting from the point of definition of the name.
Example
Step7: The first time that we print the value of the name x with the first line in the function’s body, Python uses the value of the parameter declared in the main block, above the function definition.
Next, we assign the value 2 to x. The name x is local to our function. So, when we change the value of x in the function, the x defined in the main block remains unaffected.
With the last print statement, we display the value of x as defined in the main block, thereby confirming that it is actually unaffected by the local assignment within the previously called function.
The global statement
If you want to assign a value to a name defined at the top level of the program (i.e. not inside any kind of scope such as functions or classes), then you have to tell Python that the name is not local, but it is global. We do this using the global statement. It is impossible to assign a value to a variable defined outside a function without the global statement.
You can use the values of such variables defined outside the function (assuming there is no variable with the same name within the function). However, this is not encouraged and should be avoided since it becomes unclear to the reader of the program as to where that variable’s definition is. Using the global statement makes it amply clear that the variable is defined in an outermost block.
Example | Python Code:
x = 25
def printer():
x = 50
return x
print x
print printer()
Explanation: Nested Statements and Scope
Now that we have gone over writing our own functions, it's important to understand how Python deals with the variable names you assign. When you create a variable name in Python the name is stored in a namespace. Variable names also have a scope, and the scope determines the visibility of that variable name to other parts of your code.
Let's start with a quick thought experiment; imagine the following code:
End of explanation
print x
print printer()
Explanation: What do you imagine the output of printer() is? 25 or 50? What is the output of print x? 25 or 50?
End of explanation
# x is local here:
f = lambda x:x**2
Explanation: Interesting! But how does Python know which x you're referring to in your code? This is where the idea of scope comes in. Python has a set of rules it follows to decide what variables (such as x in this case) you are referencing in your code. Let's break down the rules:
This idea of scope in your code is very important to understand in order to properly assign and call variable names.
In simple terms, the idea of scope can be described by 3 general rules:
Name assignments will create or change local names by default.
Name references search (at most) four scopes, these are:
local
enclosing functions
global
built-in
Names declared in global and nonlocal statements map assigned names to enclosing module and function scopes.
The statement in #2 above can be defined by the LEGB rule.
LEGB Rule.
L: Local — Names assigned in any way within a function (def or lambda), and not declared global in that function.
E: Enclosing function locals — Name in the local scope of any and all enclosing functions (def or lambda), from inner to outer.
G: Global (module) — Names assigned at the top-level of a module file, or declared global in a def within the file.
B: Built-in (Python) — Names preassigned in the built-in names module : open,range,SyntaxError,...
Quick examples of LEGB
Local
End of explanation
name = 'This is a global name'
def greet():
# Enclosing function
name = 'Sammy'
def hello():
print 'Hello '+name
hello()
greet()
Explanation: Enclosing function locals
This occurs when we have a function inside a function (nested functions)
End of explanation
print name
Explanation: Note how Sammy was used, because the hello() function was enclosed inside of the greet function!
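A small extra sketch (not one of the original examples) showing the flip side: assigning to name inside the inner function creates a new local name rather than changing the enclosing one.
def greet2():
    name = 'Sammy'
    def hello2():
        name = 'Inner'  # creates a new local name; the enclosing name is untouched
    hello2()
    print 'After hello2(), name is still ' + name
greet2()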
Global
Luckily in Jupyter a quick way to test for global variables is to see if another cell recognizes the variable!
End of explanation
len
Explanation: Built-in
These are the built-in function names in Python (don't overwrite these!)
End of explanation
x = 50
def func(x):
print 'x is', x
x = 2
print 'Changed local x to', x
func(x)
print 'x is still', x
Explanation: Local Variables
When you declare variables inside a function definition, they are not related in any way to other variables with the same names used outside the function - i.e. variable names are local to the function. This is called the scope of the variable. All variables have the scope of the block they are declared in starting from the point of definition of the name.
Example:
End of explanation
x = 50
def func():
global x
print 'This function is now using the global x!'
print 'Because of global x is: ', x
x = 2
print 'Ran func(), changed global x to', x
print 'Before calling func(), x is: ', x
func()
print 'Value of x (outside of func()) is: ', x
Explanation: The first time that we print the value of the name x with the first line in the function’s body, Python uses the value of the parameter declared in the main block, above the function definition.
Next, we assign the value 2 to x. The name x is local to our function. So, when we change the value of x in the function, the x defined in the main block remains unaffected.
With the last print statement, we display the value of x as defined in the main block, thereby confirming that it is actually unaffected by the local assignment within the previously called function.
The global statement
If you want to assign a value to a name defined at the top level of the program (i.e. not inside any kind of scope such as functions or classes), then you have to tell Python that the name is not local, but it is global. We do this using the global statement. It is impossible to assign a value to a variable defined outside a function without the global statement.
You can use the values of such variables defined outside the function (assuming there is no variable with the same name within the function). However, this is not encouraged and should be avoided since it becomes unclear to the reader of the program as to where that variable’s definition is. Using the global statement makes it amply clear that the variable is defined in an outermost block.
Example:
End of explanation |
15,374 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting World Series Winners
Fall 2016
Jack Limongelli (jal839@stern.nyu.edu)
Introduction
Baseball is America's pastime. It began in 1846 when the Cartwright Knickerbockers lost to the New York Baseball Club in Hoboken, New Jersey. Fast forward eighty-one years, the 1927 Yankees capture the hearts of millions as they win the team's second World Series title and cement themselves as the single greatest team in the history of baseball. Figures like Babe Ruth and Lou Gehrig become popular culture icons, removing baseball from its wealthy-aristocratic origins and thrusting it to the masses. The game has evolved greatly since that historic year. Top tier players are now paid up to $30 million a year in the hope that they will help capture the Commissioner's Trophy. That being said, in 112 total World Series, six teams have managed to win 62 percent of championships and one team, the New York Yankees, have won 24 percent. The following question arises
Step1: I imported pandas in order to read the Excel spreadsheets that contain my data. Matplotlib will be used to create graphs, charts, and any other figure that helps describe my data.
Data
I collected my data from the following sources
Step2: Create Lists for Graphs
Because we are only concerned with World Series winners, we need to remove the data from each table that does not to belong to the winner of that season.
Step3: Side note for following data
Because the variation of results in the next three data sets is much greater than the preceding ranking, I divided all points by 10, creating "boxes" of values that fall between 2 numbers (6 is 6 and 7 for example). I continued this process for the remaining data sets, changing little but the ranges.
Step4: Analyzing the Data
Runs Scored
Of all World Series Winners, 63% have been ranked number 1 or number 2 in runs scored. That being said, the line graph suggests that teams ranked higher have had increasingly better odds of winning as the years have passed. One possible explanation for that claim is the large expansion that occurred in the 1960's. More teams simply meant there were more non-1 and non-2 rankings
Runs Scored Against
The results for runs scored against are very similar | Python Code:
# Packages
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Predicting World Series Winners
Fall 2016
Jack Limongelli (jal839@stern.nyu.edu)
Introduction
Baseball is America's pastime. It began in 1846 when the Cartwright Knickerbockers lost to the New York Baseball Club in Hoboken, New Jersey. Fast forward eighty-one years, the 1927 Yankees capture the hearts of millions as they win the team's second World Series title and cement themselves as the single greatest team in the history of baseball. Figures like Babe Ruth and Lou Gehrig become popular culture icons, removing baseball from its wealthy-aristocratic origins and thrusting it to the masses. The game has evolved greatly since that historic year. Top tier players are now paid up to $30 million a year in the hope that they will help capture the Commissioner's Trophy. That being said, in 112 total World Series, six teams have managed to win 62 percent of championships and one team, the New York Yankees, have won 24 percent. The following question arises: Is there a formula for winning? This project attempts to answer that question.
Winning
There are two sides of baseball: hitting and pitching. Obviously, a team can't win games without scoring runs, but the oldest saying is that "defense wins championships." I will consider a teams ranking, relative to the other teams, in runs scored and runs allowed. Extending from those two categories, I will look at the team's regular season record as a percentage of total games won in order to account for the changing regular season length throughout history. Because these three categories are inhernently fundamental to the winning formula, I will also include the payroll of each team and the number of years since the team's previous World Series win. While the latter may not be as infomative historically bad teams, it may offer insight into the six teams mentioned above.
End of explanation
#import wins
wins = '/Users/Jack/Desktop/Final/Wins.csv'
WinsTable = pd.read_csv(wins)
WinsTable
#import Runs Against
RunsAgainst = '/Users/Jack/Desktop/Final/Runs Against.csv'
RunsAgainstTable = pd.read_csv(RunsAgainst)
RunsAgainstTable
#import Runs
Runs = '/Users/Jack/Desktop/Final/Runs.csv'
RunsTable = pd.read_csv(Runs)
RunsTable
#import Payroll
Payroll = '/Users/Jack/Desktop/Final/Payroll.csv'
PayrollTable = pd.read_csv(Payroll)
PayrollTable
# Payroll data is in % of league average
#import Last WS
LastWS = '/Users/Jack/Desktop/Final/Last WS.csv'
LastWSTable = pd.read_csv(LastWS)
LastWSTable
Explanation: I imported pandas in order to read the Excel spreadsheets that contain my data. Matplotlib will be used to create graphs, charts, and any other figure that helps describe my data.
Data
I collected my data from the following sources: http://www.baseball-reference.com, http://www.thebaseballcube.com/extras/payrolls/, http://www.espn.com/mlb/worldseries/history/winners. The first championship officially called the World Series was held in 1903 so that is my starting year. Unfortunately, data regarding payroll only began to be tracked in 1988. However, it may still be useful considering the game today is more heavily based on signing free-agents rather than developing young talent as it was in the past. Because many teams have been added throughout the years, many teams will have either "0" or "NaN" for certain years.
End of explanation
# Create new dataframe with just the Year and Winner columns from the original
RunsTableWinner = RunsTable[['Year','Winner']]
# Convert data frame to a list of just the names of the teams that won each world series
RunsTableWinner2 = (RunsTableWinner.loc[0:113, 'Winner'])
RunsTableWinner3 = list(RunsTableWinner2)
# Locate and store the value corresponding to each winner for a given year.
# A loop makes the process much easier given the 113 year.
RunsTableWinner4 = []
for y in range(0,114):
RunsTableWinner4.append(RunsTable.loc[y,RunsTableWinner3[y]])
# When the loop data was stored in RunsTableWinner4, the string was repeated several times.
# To erase the excess data, just the first 115 entries were stored in RunsTableWinner5.
RunsTableWinner5 = RunsTableWinner4[0:114]
# Erase years when the value was 0: no World Series occurred.
# This occurred in 1904 and 1994 due to business disagreements between the American League and National League and
# a players strike, respectively.
RunsTableWinner6 = [x for x in RunsTableWinner5 if x != 0]
print(RunsTableWinner6)
# The following cells are the exact same process but for the other data sets.
# All explanations and code are the exact same. Only the names of variables will change.
RunsAgainstTableWinner = RunsAgainstTable[['Year','Winner']]
RunsAgainstTableWinner2 = (RunsAgainstTableWinner.loc[0:113, 'Winner'])
RunsAgainstTableWinner3 = list(RunsAgainstTableWinner2)
RunsAgainstTableWinner4 = []
for y in range(0,114):
RunsAgainstTableWinner4.append(RunsAgainstTable.loc[y,RunsAgainstTableWinner3[y]])
RunsAgainstTableWinner5 = RunsAgainstTableWinner4[0:114]
RunsAgainstTableWinner6 = [x for x in RunsAgainstTableWinner5 if x != 0]
print(RunsAgainstTableWinner6)
WinsTableWinner = WinsTable[['Year','Winner']]
WinsTableWinner2 = (WinsTableWinner.loc[0:113, 'Winner'])
WinsTableWinner3 = list(WinsTableWinner2)
WinsTableWinner4 = []
for y in range(0,114):
WinsTableWinner4.append(WinsTable.loc[y,WinsTableWinner3[y]])
WinsTableWinner5 = WinsTableWinner4[0:114]
WinsTableWinner6 = [x for x in WinsTableWinner5 if x != 0]
print(WinsTableWinner6)
PayrollTableWinner = PayrollTable[['Year','Winner']]
PayrollTableWinner2 = (PayrollTableWinner.loc[0:113, 'Winner'])
PayrollTableWinner3 = list(PayrollTableWinner2)
# The range is smaller because the only goes back to 1988
PayrollTableWinner4 = []
for y in range(0,28):
PayrollTableWinner4.append(PayrollTable.loc[y,PayrollTableWinner3[y]])
PayrollTableWinner5 = PayrollTableWinner4[0:28]
PayrollTableWinner6 = [x for x in PayrollTableWinner5 if x != 0]
print(PayrollTableWinner6)
LastWSTableWinner = LastWSTable[['Year','Winner']]
LastWSTableWinner2 = (LastWSTableWinner.loc[0:113, 'Winner'])
LastWSTableWinner3 = list(LastWSTableWinner2)
LastWSTableWinner4 = []
for y in range(0,114):
LastWSTableWinner4.append(LastWSTable.loc[y,LastWSTableWinner3[y]])
LastWSTableWinner5 = LastWSTableWinner4[0:114]
# To remove the 1904 and 1994 values with no winner I could not just delete zero values because there are many years
# where the same team won consecutively: the second year would have a value of zero years since the team's last win.
# I created 3 individual lists, which excluded 1904 and 1994 and then combined them into one list
a = LastWSTableWinner5[0:1]
b = LastWSTableWinner5[2:91]
c = LastWSTableWinner5[92:114]
LastWSTableWinner6 = a + b + c
# m will be used as the x values
m = range(0,112)
# sets size of the figure
plt.figure(figsize=(15,5))
# plotting of the figure
plt.plot(m, RunsTableWinner6)
# x axis range
plt.xlim(0,113)
# x axis label
plt.xlabel("Year's Since 1903")
# y axis range
plt.ylim(0,15)
# y axis label
plt.ylabel('Runs Scored Ranking')
# plot title
plt.title('Runs Scored Ranking of World Series Winner Over Time')
# n will be used for x values
n = list(range(1,16))
# creates list of frequencis of each value in range(1,16)
RunsCount = []
for l in range(1,16):
RunsCount.append(RunsTableWinner6.count(l))
# sets size of chart
plt.figure(figsize=(15,5))
# plots graph
plt.bar(n, RunsCount, align='center')
# x axis range
plt.xlim(0,15)
# x axis label
plt.xlabel('Ranking')
# y axis range
plt.ylim(0,50)
# y axis label
plt.ylabel('Frequency')
# plot title
plt.title('Frequency of Runs Scored Ranking by World Series Winners')
# The following cells are the exact same process but for the other data sets.
# All explanations and code are the exact same. Only the names of variables will change.
m = range(0,112)
plt.figure(figsize=(15,5))
plt.plot(m, RunsAgainstTableWinner6)
plt.xlim(0,113)
plt.xlabel("Year's Since 1903")
plt.ylim(0,15)
plt.ylabel('Runs Against Ranking')
plt.title('Runs Against Ranking of World Series Winner Over Time')
n = list(range(1,16))
RunsAgainstCount = []
for l in range(1,16):
RunsAgainstCount.append(RunsAgainstTableWinner6.count(l))
plt.figure(figsize=(15,5))
plt.bar(n, RunsAgainstCount, align='center')
plt.xlim(0,15)
plt.xlabel('Ranking')
plt.ylim(0,50)
plt.ylabel('Frequency')
plt.title('Frequency of Runs Against Ranking by World Series Winners')
Explanation: Create Lists for Graphs
Because we are only concerned with World Series winners, we need to remove the data from each table that does not to belong to the winner of that season.
End of explanation
m = range(0,112)
plt.figure(figsize=(15,5))
plt.plot(m, WinsTableWinner6)
plt.xlim(0,113)
plt.xlabel("Year's Since 1903")
plt.ylim(40,80)
plt.ylabel('Percent of Games Won')
plt.title('Win Percentage of World Series Winner Over Time')
# Process described above
q = range(3,9)
WinsCountDivide = list((int(x/10) for x in WinsTableWinner6))
WinsCount = []
for f in range(3,9):
WinsCount.append(WinsCountDivide.count(f))
plt.figure(figsize=(15,5))
plt.bar(q, WinsCount, align='center')
plt.xlim(3,8)
plt.xlabel('Win Percentage, measuered in 10% points')
plt.ylim(0,80)
plt.ylabel('Frequency')
plt.title('Frequency of Win Percentage by World Series Winners')
m = range(0,112)
plt.figure(figsize=(15,5))
plt.plot(m, LastWSTableWinner6)
plt.xlim(0,113)
plt.xlabel("Year's Since 1903")
plt.ylim(0,108)
plt.ylabel('Years Since Last World Series Win')
plt.title('Years Since Last World Series Win of World Series Winner Over Time')
t = range(0,12)
LastWSCountDivide = list((int(x/10) for x in LastWSTableWinner6))
LastWSCount = []
for f in range(0,12):
LastWSCount.append(LastWSCountDivide.count(f))
plt.figure(figsize=(15,5))
plt.bar(t, LastWSCount, align='center')
plt.xlim(-1,11)
plt.xlabel('Years Since Last World Series Win, measuered in 10 years')
plt.ylim(0,70)
plt.ylabel('Frequency')
plt.title('Frequency of Years Since Last World Series Win by World Series Winners')
m = range(0,27)
plt.figure(figsize=(15,5))
plt.plot(m, PayrollTableWinner6)
plt.xlim(0,27)
plt.xlabel("Year's Since 1988")
plt.ylim(50,230)
plt.ylabel('Percentage of League Average Payroll')
plt.title('Percentage of League Average Payroll of World Series Winner Over Time')
t = range(6,23)
PayrollCountDivide = list((int(x/10) for x in PayrollTableWinner6))
PayrollCount = []
for f in range(6,23):
PayrollCount.append(PayrollCountDivide.count(f))
plt.figure(figsize=(15,5))
plt.bar(t, PayrollCount, align='center')
plt.xlim(5,23)
plt.xlabel('Payroll Percentage of League Average measuered in 10 % points')
plt.ylim(0,5)
plt.ylabel('Frequency')
plt.title('Frequency of Payroll Percentage of League Average by World Series Winners')
Explanation: Side note for following data
Because the variation of results in the next three data sets is much greater than the preceding ranking, I divided all points by 10, creating "boxes" of values that fall between 2 numbers (6 is 6 and 7 for example). I continued this process for the remaining data sets, changing little but the ranges.
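The same 10-wide boxes could also be built with pandas itself; a hedged sketch (pd.cut is standard pandas, and the bin edges here are just an example):
# hypothetical alternative to the manual divide-by-10 binning
boxes = pd.cut(WinsTableWinner6, bins=list(range(0, 110, 10)))
pd.value_counts(boxes, sort=False)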
End of explanation
# plotting both lines together
m = range(0,112)
plt.figure(figsize=(20,5))
plt.plot(m, RunsTableWinner6, 'r') # r changes the color
plt.plot(m, RunsAgainstTableWinner6)
plt.xlim(0,113)
plt.xlabel("Year's Since 1988")
plt.ylim(0,14)
plt.ylabel('Ranking')
plt.title('Runs Scored and Runs Allowed Ranking of World Series Winner Over Time')
# Runs Scored is Red, Runs Scored Against is Blue
Explanation: Analyzing the Data
Runs Scored
Of all World Series Winners, 63% have been ranked number 1 or number 2 in runs scored. That being said, the line graph suggests that teams ranked higher have had increasingly better odds of winning as the years have passed. One possible explanation for that claim is the large expansion that occurred in the 1960's. More teams simply meant there were more non-1 and non-2 rankings
Runs Scored Against
The results for runs scored against are very similar: 68% of winners have been ranked number 1 or 2, with more variation in recent years. I thought it would be interesting to plot both graphs on the same set of axes.
End of explanation |
15,375 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Summary Statistics - Exercises
In these exercises you'll use a real life medical dataset to learn how to obtain basic statistics from the data. This dataset comes from Gluegrant, an American project that aims to find which genes are more important for the recovery of severely injured patients! It was slightly edited to remove some complexities, but if you wish to check it out in its full glory, it's available on the website and I can show it to you!
Have fun being a biostaticist for 1 hour!
Step2: Exercise 1 - Lets get a quick look at the groups
Ok, first lets get a quick look at who is in each of the groups. In medical studies it's important that the control and patient groups aren't too different from each other, so that we can draw relevant results.
Separate the patients and control into 2 dataframes
Since we are going to perform multiple statistics on the patient and control groups, we should create a variable for each one of the groups, so that we mantain our code readable!
Remember
Step3: Find out the Age means for each of the groups
Remember
Step4: Find the Median of each group
As seen on the presentation, the mean can affected by outliers on the data, lets check that out with the median.
Remember
Step5: Results - Mean / Median
Is there a significant difference of the mean and median?
(Optional)
Step6: Find the quantiles
Let's use the quantiles to obtain the dispersion of the groups. Get the 0, 0.25, 0.5, 0.75 and 1 quantiles.
Remember
Step7: Results - Interval Statistics
(Options)
Step8: Number of male control patients
Step9: Results - Percentage of the sexes
(Optional)
Step10: Gene 2
Step11: I will just ask for one more gene, since the process is entirely the same!
Gene 6
Step12: Results - Genes 1, 2 and 6
Of the 3 genes, which ones do you believe are involved in the process of recovery?
Help
Step13: What if we want the a measure of difference for each gene? | Python Code:
import pandas as pd
import numpy as np
from IPython.display import display, HTML
CSS = """
.output {
    flex-direction: row;
}
"""
patient_data = pd.read_csv("../data/Exercises_Summary_Statistics_Data.csv")
patient_data.head()
Explanation: Summary Statistics - Exercises
In these exercises you'll use a real life medical dataset to learn how to obtain basic statistics from the data. This dataset comes from Gluegrant, an American project that aims to find which genes are more important for the recovery of severely injured patients! It was slightly edited to remove some complexities, but if you wish to check it out in its full glory, it's available on the website and I can show it to you!
Have fun being a biostatistician for 1 hour! :)
Objectives
In this exercise the objective is for you to learn how to use Pandas functions to obtain simple statistics from datasets.
Dataset information
The dataset is a medical dataset with 184 patients, distributed into 2 test groups, where each group is divided in 2: patients and control.
The dataset is composed of clinical values:
* Patient.id
* Age
* Sex
* Group (to what group they belong)
* Results (outcome of the patient)
and of the gene expression (higher = more expressed):
* Gene1: MMP9
* Gene2: S100A12
* Gene3: MCEMP1
* Gene4: ACSL1
* Gene5: SLC7A2
* Gene6: CDC14B
Remarks for people without bio background
Don't worry if you are not from a biological background, consider that these genes are simply numeric values related to the patient. We will not delve into the biological meaning of any of the genes, we'll only try to find if there are differences between the gene values for the different groups!
If you are still not comfortable using this dataset, imagine this situation instead:
You can consider that this dataset comes from a online shopping service like Amazon. Imagine that they were conducting an A/B test, where a small part of their website was changed, like the related items suggestions. You have 2 groups, the "control group" that is the group that is experiencing the original website (without modifications) and Group 1 that is using the website with the new suggestions.
Consider also that the genes are products or product categories where the customers buy a certain amount of products. Your objective now is to find if there is a significant difference between the control group and Group 1.
Start
Ok, introductions aside, please have fun being a biostatistician for 45 minutes! :P If you have any doubts, please call me or any of the other professors!
Import Data
End of explanation
patients = # Subset patient_data to include only patients
control = # Subset patient_data to include only control
Explanation: Exercise 1 - Let's get a quick look at the groups
Ok, first lets get a quick look at who is in each of the groups. In medical studies it's important that the control and patient groups aren't too different from each other, so that we can draw relevant results.
Separate the patients and control into 2 dataframes
Since we are going to perform multiple statistics on the patient and control groups, we should create a variable for each one of the groups, so that we keep our code readable!
Remember: If you want to subset, the command is: Name_of_dataframe[Name_of_dataframe.column == "Value"]
End of explanation
patient_mean = # Calculate the mean of the patients
control_mean = # Calculate the mean of the controls
print("The patient mean age is:", patient_mean, "and the control mean age is:", control_mean, "\t")
Explanation: Find out the Age means for each of the groups
Remember: To find the mean of a dataframe column, just use Name_of_dataframe.column.mean()
End of explanation
patient_median = # Calculate the median of the patients
control_median = # Calculate the median of the controls
print("The patient median age is:", patient_median, "and the control median age is:", control_median, "\t")
Explanation: Find the Median of each group
As seen in the presentation, the mean can be affected by outliers in the data; let's check that out with the median.
Remember: To find the median of a dataframe column, just use Name_of_dataframe.column.median()
End of explanation
patient_std = # Standard Deviation of the patients
control_std = # Standard Deviation of the controls
print("The patient std is:", patient_std, "and the control std is:", control_std, "\t")
Explanation: Results - Mean / Median
Is there a significant difference of the mean and median?
(Optional): Is there a significant difference between the age of the Patients and Control? Consider that this dataset is composed mainly of people injured using powertools or other type of machinery, therefore, it's composed mainly of people in working age 20-ish to 60-ish.
Find the Standard deviation of each group
Let's see if there is a large deviation from the mean in each of the groups.
Remember: The standard deviation is taken as Name_of_dataframe.column.std()
End of explanation
patient_quantiles = # Patient quantiles
control_quantiles = # Control quantiles
print("Patients:\f")
display(pd.DataFrame(patient_quantiles))
print("Control:\f")
display(pd.DataFrame(control_quantiles))
HTML('<style>{}</style>'.format(CSS))
Explanation: Find the quantiles
Let's use the quantiles to obtain the dispersion of the groups. Get the 0, 0.25, 0.5, 0.75 and 1 quantiles.
Remember: The quantiles are obtained using the command Name_of_dataframe.column.quantile(q=[percentages])
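For instance, on a throwaway Series (just to show the call shape):
pd.Series([1, 2, 3, 4, 5]).quantile(q=[0, 0.25, 0.5, 0.75, 1])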
End of explanation
num_male_patients = # Get the number of male patients
num_female_patients = # Get the number of female patients
print("The number of male patients is:", num_male_patients, \
"\nThe number of female patients is:", num_female_patients, \
"\nAnd the percentage of males is:", num_male_patients / (num_male_patients + num_female_patients), "\t")
Explanation: Results - Interval Statistics
(Optional): Do the dispersion statistics show a significant difference in the dispersion of the data?
Find out how many patients are male and how many are female
Next, let's try to find out the number of each of the sexes and the percentage of males in each of the groups.
Remember: To get a frequency table, use Name_of_dataframe.column.value_counts(). One way to get the number of a certain group is Name_of_dataframe.column.value_counts()["name_of_group"]
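For instance, on a throwaway Series:
pd.Series(["M", "F", "M", "M"]).value_counts()["M"]  # -> 3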
Number of male patients
End of explanation
num_male_control = # Get the number of male control
num_female_control = # Get the number of female control
print("The number of male control patients is:", num_male_control, \
"\nThe number of female control patients is:", num_female_control, \
"\nAnd the percentage of males is:", num_male_control / (num_male_control + num_female_control), "\t")
Explanation: Number of male control patients
End of explanation
gene1_patients = patients.Gene1
gene1_control = control.Gene1
# Mean
mean_gene1_patients = # Gene1 mean for patients
mean_gene1_control = # Gene1 mean for control
# Median
median_gene1_patients = # Gene1 median for patients
median_gene1_control = # Gene1 median for control
# Std
std_gene1_patients = # Gene1 std for patents
std_gene1_control = # Gene1 std for control
print("Patients: Mean =", mean_gene1_patients, "Median =", median_gene1_patients, "Std =", std_gene1_patients, "\t")
print("Control: Mean =", mean_gene1_control, "Median =", median_gene1_control, "Std =", std_gene1_control, "\t")
Explanation: Results - Percentage of the sexes
(Optional): Is there a significant difference between the percentage of male patients and male control patients?
Exercise 2 - Let the Biostatistics begin
I have selected 6 genes from a total of ~55000. The objective here is for you to try to find genes that are different from the patient group and control group using the tools that you learned on exercise 1.
Gene 1
End of explanation
gene2_patients = patients.Gene2
gene2_control = control.Gene2
# Mean
mean_gene2_patients = # Gene2 mean for patients
mean_gene2_control = # Gene2 mean for control
# Median
median_gene2_patients = # Gene2 median for patients
median_gene2_control = # Gene2 median for control
# Std
std_gene2_patients = # Gene2 std for patents
std_gene2_control = # Gene2 std for control
print("Patients: Mean =", mean_gene2_patients, "Median =", median_gene2_patients, "Std =", std_gene2_patients, "\t")
print("Control: Mean =", mean_gene2_control, "Median =", median_gene2_control, "Std =", std_gene2_control, "\t")
Explanation: Gene 2
End of explanation
gene6_patients = patients.Gene6
gene6_control = control.Gene6
# Mean
mean_gene6_patients = # Gene6 mean for patients
mean_gene6_control = # Gene6 mean for control
# Median
median_gene6_patients = # Gene6 median for patients
median_gene6_control = # Gene6 median for control
# Std
std_gene6_patients = # Gene6 std for patents
std_gene6_control = # Gene6 std for control
print("Patients: Mean =", mean_gene6_patients, "Median =", median_gene6_patients, "Std =", std_gene6_patients, "\t")
print("Control: Mean =", mean_gene6_control, "Median =", median_gene6_control, "Std =", std_gene6_control, "\t")
Explanation: I will just ask for one more gene, since the process is entirely the same!
Gene 6
End of explanation
gene_names = ["Gene1", "Gene2", "Gene3", "Gene4", "Gene5", "Gene6"]
display(# Get the summary of the gene columns for PATIENTS)
display(# Get the summary of the gene columns for CONTROL)
Explanation: Results - Genes 1, 2 and 6
Of the 3 genes, which ones do you believe are involved in the process of recovery?
Help: Recall that we have 2 groups, a group of patients that is recovering from a severe accident and a control group that are fine. You should look at the statistics for the 3 genes (mean, median and standard deviation [this last one is skippable]) and try to find differences!
Can we do this without so much code?
Can we obtain the previous statistics for the 6 genes without all the effort?
Remember: Have you checked out the .describe() method?
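For instance, on a throwaway frame:
pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]}).describe()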
End of explanation
display(# Mean of the PATIENT genes / # Mean of the CONTROL genes)
Explanation: What if we want a measure of difference for each gene?
End of explanation |
15,376 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 4
Step1: Gradient estimators provide an interface to estimate gradients of some loss with respect to the parameters of some meta-learned system.
GradientEstimator are not specific to learned optimizers, and can be applied to any unrolled system defined by a TruncatedStep (see previous colab).
learned_optimization supports a handful of estimators each with different strengths and weaknesses. Understanding which estimators are right for which situations is an open research question. After providing some introductions to the GradientEstimator class, we provide a quick tour of the different estimators implemented here.
The GradientEstimator base class signature is below.
Step2: A gradient estimator must have an instance of a TaskFamily -- or the task that is being used to estimate gradients with, an init_worker_state function -- which initializes the current state of the gradient estimator, and a compute_gradient_estimate function which takes state and computes a bunch of outputs (GradientEstimatorOut) which contain the computed gradients with respect to the learned optimizer, meta-loss values, and various other information about the unroll. Additionally a mapping which contains various metrics is returned.
Both of these methods take in a WorkerWeights instance. This particular piece of data represents the learnable weights needed to compute a gradients including the weights of the learned optimizer, as well as potentially non-learnable running statistics such as those computed with batch norm. In every case this contains the weights of the meta-learned algorithm (e.g. an optimizer) and is called theta. This can also contain other info though. If the learned optimizer has batchnorm, for example, it could also contain running averages.
In the following examples, we will show gradient estimation on learned optimizers using the VectorizedLOptTruncatedStep.
Step3: FullES
The FullES estimator is one of the simplest, and most reliable estimators but can be slow in practice as it does not make use of truncations. Instead, it uses antithetic sampling to estimate a gradient via ES of an entire optimization (hence the full in the name).
First we define a meta-objective, $f(\theta)$, which could be the loss at the end of training, or average loss. Next, we compute a gradient estimate via ES gradient estimation
Step4: Because we are working with full length unrolls, this gradient estimator has no state -- there is nothing to keep track of truncation to truncation.
Step5: Gradients can be computed with the compute_gradient_estimate method.
Step6: TruncatedPES
Truncated Persistent Evolutionary Strategies (PES) is a unbiased truncation method based on ES. It was proposed in Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies and has been a promising tool for training learned optimizers.
Step7: Now let's look at what this state contains.
Step8: First, this contains 2 instances of SingleState -- one for the positive perturbation, and one for the negative perturbation. Each one of these contains all the necessary state required to keep track of the training run. This means the opt_state, details from the truncation, the task parameters (sample from the task family), the inner_step, and a bool to determine if done or not.
We can compute one gradient estimate as follows.
Step9: This out object contains various outputs from the gradient estimator including gradients with respect to the learned optimizer, as well as the next state of the training models.
Step10: One could simply use these gradients to meta-train, and then use the unroll_states as the next state passed into the compute gradient estimate. For example
Step11: TruncatedGrad
TruncatedGrad performs truncated backprop through time. This is great for short unrolls, but can run into memory issues, and/or exploding gradients for longer unrolls. | Python Code:
import numpy as np
import jax.numpy as jnp
import jax
import functools
from matplotlib import pylab as plt
from typing import Optional, Tuple, Mapping
from learned_optimization.outer_trainers import full_es
from learned_optimization.outer_trainers import truncated_pes
from learned_optimization.outer_trainers import truncated_grad
from learned_optimization.outer_trainers import gradient_learner
from learned_optimization.outer_trainers import truncation_schedule
from learned_optimization.outer_trainers import common
from learned_optimization.outer_trainers import lopt_truncated_step
from learned_optimization.outer_trainers import truncated_step as truncated_step_mod
from learned_optimization.outer_trainers.gradient_learner import WorkerWeights, GradientEstimatorState, GradientEstimatorOut
from learned_optimization.outer_trainers import common
from learned_optimization.tasks import quadratics
from learned_optimization.tasks.fixed import image_mlp
from learned_optimization.tasks import base as tasks_base
from learned_optimization.learned_optimizers import base as lopt_base
from learned_optimization.learned_optimizers import mlp_lopt
from learned_optimization.optimizers import base as opt_base
from learned_optimization import optimizers
from learned_optimization import training
from learned_optimization import eval_training
import haiku as hk
import tqdm
Explanation: Part 4: GradientEstimators
End of explanation
PRNGKey = jnp.ndarray
class GradientEstimator:
truncated_step: truncated_step_mod.TruncatedStep
def init_worker_state(self, worker_weights: WorkerWeights,
key: PRNGKey) -> GradientEstimatorState:
raise NotImplementedError()
def compute_gradient_estimate(
self, worker_weights: WorkerWeights, key: PRNGKey,
state: GradientEstimatorState, with_summary: Optional[bool]
) -> Tuple[GradientEstimatorOut, Mapping[str, jnp.ndarray]]:
raise NotImplementedError()
Explanation: Gradient estimators provide an interface to estimate gradients of some loss with respect to the parameters of some meta-learned system.
GradientEstimators are not specific to learned optimizers, and can be applied to any unrolled system defined by a TruncatedStep (see previous colab).
learned_optimization supports a handful of estimators each with different strengths and weaknesses. Understanding which estimators are right for which situations is an open research question. After providing some introductions to the GradientEstimator class, we provide a quick tour of the different estimators implemented here.
The GradientEstimator base class signature is below.
End of explanation
task_family = quadratics.FixedDimQuadraticFamily(10)
lopt = lopt_base.LearnableAdam()
# With FullES, there are no truncations, so we set trunc_sched to never ending.
trunc_sched = truncation_schedule.NeverEndingTruncationSchedule()
truncated_step = lopt_truncated_step.VectorizedLOptTruncatedStep(
task_family,
lopt,
trunc_sched,
num_tasks=3,
)
Explanation: A gradient estimator must have an instance of a TaskFamily -- or the task that is being used to estimate gradients with, an init_worker_state function -- which initializes the current state of the gradient estimator, and a compute_gradient_estimate function which takes state and computes a bunch of outputs (GradientEstimatorOut) which contain the computed gradients with respect to the learned optimizer, meta-loss values, and various other information about the unroll. Additionally a mapping which contains various metrics is returned.
Both of these methods take in a WorkerWeights instance. This particular piece of data represents the learnable weights needed to compute a gradients including the weights of the learned optimizer, as well as potentially non-learnable running statistics such as those computed with batch norm. In every case this contains the weights of the meta-learned algorithm (e.g. an optimizer) and is called theta. This can also contain other info though. If the learned optimizer has batchnorm, for example, it could also contain running averages.
In the following examples, we will show gradient estimation on learned optimizers using the VectorizedLOptTruncatedStep.
End of explanation
es_trunc_sched = truncation_schedule.ConstantTruncationSchedule(10)
gradient_estimator = full_es.FullES(
truncated_step, truncation_schedule=es_trunc_sched)
key = jax.random.PRNGKey(0)
theta = truncated_step.outer_init(key)
worker_weights = gradient_learner.WorkerWeights(
theta=theta,
theta_model_state=None,
outer_state=gradient_learner.OuterState(0))
Explanation: FullES
The FullES estimator is one of the simplest, and most reliable estimators but can be slow in practice as it does not make use of truncations. Instead, it uses antithetic sampling to estimate a gradient via ES of an entire optimization (hence the full in the name).
First we define a meta-objective, $f(\theta)$, which could be the loss at the end of training, or average loss. Next, we compute a gradient estimate via ES gradient estimation:
$\nabla_\theta f \approx \dfrac{\epsilon}{2\sigma^2} (f(\theta + \epsilon) - f(\theta - \epsilon))$
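A tiny self-contained sketch of that antithetic estimate on a toy quadratic (plain numpy, independent of the library; sigma is chosen arbitrarily here):
import numpy as np
def toy_meta_loss(theta):
    return float((theta ** 2).sum())
theta_toy = np.array([1.0, -2.0])
sigma = 0.1
eps = np.random.normal(0.0, sigma, size=theta_toy.shape)
grad_est = eps / (2 * sigma ** 2) * (toy_meta_loss(theta_toy + eps) - toy_meta_loss(theta_toy - eps))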
We can instantiate one of these as follows:
End of explanation
gradient_estimator_state = gradient_estimator.init_worker_state(
worker_weights, key=key)
gradient_estimator_state
Explanation: Because we are working with full length unrolls, this gradient estimator has no state -- there is nothing to keep track of truncation to truncation.
End of explanation
out, metrics = gradient_estimator.compute_gradient_estimate(
worker_weights, key=key, state=gradient_estimator_state, with_summary=False)
out.grad
Explanation: Gradients can be computed with the compute_gradient_estimate method.
End of explanation
trunc_sched = truncation_schedule.ConstantTruncationSchedule(10)
truncated_step = lopt_truncated_step.VectorizedLOptTruncatedStep(
task_family,
lopt,
trunc_sched,
num_tasks=3,
random_initial_iteration_offset=10)
gradient_estimator = truncated_pes.TruncatedPES(
truncated_step=truncated_step, trunc_length=10)
key = jax.random.PRNGKey(1)
theta = truncated_step.outer_init(key)
worker_weights = gradient_learner.WorkerWeights(
theta=theta,
theta_model_state=None,
outer_state=gradient_learner.OuterState(0))
gradient_estimator_state = gradient_estimator.init_worker_state(
worker_weights, key=key)
Explanation: TruncatedPES
Truncated Persistent Evolution Strategies (PES) is an unbiased truncation method based on ES. It was proposed in Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies and has been a promising tool for training learned optimizers.
End of explanation
jax.tree_map(lambda x: x.shape, gradient_estimator_state)
Explanation: Now let's look at what this state contains.
End of explanation
out, metrics = gradient_estimator.compute_gradient_estimate(
worker_weights, key=key, state=gradient_estimator_state, with_summary=False)
Explanation: First, this contains 2 instances of SingleState -- one for the positive perturbation, and one for the negative perturbation. Each one of these contains all the necessary state required to keep track of the training run. This means the opt_state, details from the truncation, the task parameters (sample from the task family), the inner_step, and a bool to determine if done or not.
We can compute one gradient estimate as follows.
End of explanation
out.grad
jax.tree_map(lambda x: x.shape, out.unroll_state)
Explanation: This out object contains various outputs from the gradient estimator including gradients with respect to the learned optimizer, as well as the next state of the training models.
End of explanation
print("Progress on inner problem before", out.unroll_state.pos_state.inner_step)
out, metrics = gradient_estimator.compute_gradient_estimate(
worker_weights, key=key, state=out.unroll_state, with_summary=False)
print("Progress on inner problem after", out.unroll_state.pos_state.inner_step)
Explanation: One could simply use these gradients to meta-train, and then use the unroll_states as the next state passed into the compute gradient estimate. For example:
End of explanation
truncated_step = lopt_truncated_step.VectorizedLOptTruncatedStep(
task_family,
lopt,
trunc_sched,
num_tasks=3,
random_initial_iteration_offset=10)
gradient_estimator = truncated_grad.TruncatedGrad(
truncated_step=truncated_step, unroll_length=5, steps_per_jit=5)
key = jax.random.PRNGKey(1)
theta = truncated_step.outer_init(key)
worker_weights = gradient_learner.WorkerWeights(
theta=theta,
theta_model_state=None,
outer_state=gradient_learner.OuterState(0))
gradient_estimator_state = gradient_estimator.init_worker_state(
worker_weights, key=key)
jax.tree_map(lambda x: x.shape, gradient_estimator_state)
out, metrics = gradient_estimator.compute_gradient_estimate(
worker_weights, key=key, state=gradient_estimator_state, with_summary=False)
out.grad
Explanation: TruncatedGrad
TruncatedGrad performs truncated backprop through time. This is great for short unrolls, but can run into memory issues, and/or exploding gradients for longer unrolls.
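Whichever estimator is used, the resulting out.grad can feed a standard outer optimizer; a hedged sketch with optax (assuming optax is installed and that out.grad has the same pytree structure as theta):
import optax
outer_opt = optax.adam(1e-3)
outer_opt_state = outer_opt.init(theta)
updates, outer_opt_state = outer_opt.update(out.grad, outer_opt_state)
theta = optax.apply_updates(theta, updates)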
End of explanation |
15,377 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparison of two dataset
Here we compare two rainfall datasets with each other. The first is a satellite observation dataset, the so-called HOAPS climatology, and the second comes from a CMIP5 model. Both datasets are already regridded to the same spatial grid (T63)
Step1: Now we compare the two dataset by different means.
Step2: From this we see that one dataset covers the entire earth, while the second one only the free ocean. We would therefore like to apply a consistent mask to both of them.
Step3: Let's have a look on the temporal mean differences first
Step4: Scatterplot
An easy scatterplot is generated using the ScatterPlot class. Here we need to provide a reference Dataset as as independent variable. All other datasets are then refrend to that reference.
Step5: Timeseries | Python Code:
# read in the data
from pycmbs.data import Data
h_file = 'hoaps-g.t63.m01.rain.1987-2008_monmean.nc'
m_file = 'pr_Amon_MPI-ESM-LR_amip_r1i1p1_197901-200812_2000-01-01_2007-09-30_T63_monmean.nc'
hoaps = Data(h_file, 'rain', read=True)
model = Data(m_file, 'pr', read=True, scale_factor=86400.) # note the scale factor to convert directly to [mm/d]
model.unit = '$mm d^{-1}$'
Explanation: Comparison of two dataset
Here we compare two rainfall datasets with each other. The first is a satellite observation dataset, the so-called HOAPS climatology, and the second comes from a CMIP5 model. Both datasets are already regridded to the same spatial grid (T63)
End of explanation
%matplotlib inline
# initial plotting
from pycmbs.mapping import map_plot
vmin=0.
vmax=15.
f = map_plot(hoaps,vmin=vmin,vmax=vmax)
f = map_plot(model,vmin=vmin,vmax=vmax)
Explanation: Now we compare the two datasets by different means.
End of explanation
msk = hoaps.get_valid_mask(frac=0.1) # at least 10% of valid data for all timesteps
model._apply_mask(msk)
hoaps._apply_mask(msk) # apply mask also to HOAPS dataset to be entirely consistent
f = map_plot(model,vmin=vmin,vmax=vmax)
Explanation: From this we see that one dataset covers the entire earth, while the second one covers only the free ocean. We would therefore like to apply a consistent mask to both of them.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(7,10))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
model_m = model.timmean(return_object=True)
hoaps_m = hoaps.timmean(return_object=True)
xx = map_plot(model_m.sub(hoaps_m,copy=True),ax=ax1,cmap_data='RdBu_r',vmin=-5.,vmax=5., use_basemap=True, title='Absolute difference')
xx = map_plot(model_m.sub(hoaps_m,copy=True).div(hoaps_m,copy=True),ax=ax2,cmap_data='RdBu_r',vmin=-1.,vmax=1., use_basemap=True, title='Relative difference')
fig.savefig('precipitation_difference.pdf', bbox_inches='tight')
fig.savefig('precipitation_difference.png', bbox_inches='tight', dpi=200)
Explanation: Let's have a look at the temporal mean differences first
End of explanation
from pycmbs.plots import ScatterPlot
S = ScatterPlot(hoaps_m)
S.plot(model_m,fldmean=False)
S.ax.grid()
S.ax.set_xlim(0.,15.)
S.ax.set_ylim(S.ax.get_xlim())
S.ax.set_title('Mean global precipitation')
S.ax.set_ylabel('model')
S.ax.set_xlabel('HOAPS')
S.ax.figure.savefig('scatterplot.png')
Explanation: Scatterplot
An easy scatterplot is generated using the ScatterPlot class. Here we need to provide a reference Dataset as the independent variable. All other datasets are then referenced against that reference.
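If a single summary number is wanted alongside the scatterplot, the spatial correlation of the two time-mean fields can be computed with plain numpy; a sketch with hypothetical masked arrays field_model and field_hoaps standing in for the underlying values of model_m and hoaps_m:
import numpy as np
a = np.ma.compressed(field_model)  # hypothetical masked array of the model mean field
b = np.ma.compressed(field_hoaps)  # hypothetical masked array of the HOAPS mean field
r = np.corrcoef(a, b)[0, 1]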
End of explanation
from pycmbs.plots import LinePlot
f = plt.figure(figsize=(15,3))
ax = f.add_subplot(111)
L = LinePlot(ax=ax,regress=False)
L.plot(hoaps)
ax.grid()
f.savefig('time_series.png')
Explanation: Timeseries
End of explanation |
15,378 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
HoloViews is designed to be both highly customizable, allowing you to control how your visualizations appear, but also to enforce a strong separation between your data (with any semantically associated metadata, like type, dimension names, and description) and all options related purely to visualization. This separation allows HoloViews objects to be generated easily by external programs, without giving them a dependency on any plotting or windowing libraries. It also helps make it completely clear which parts of your code deal with the actual data, and which are just about displaying it nicely, which becomes very important for complex visualizations that become more complicated than your data itself.
To achieve this separation, HoloViews stores visualization options independently from your data, and applies the options only when rendering the data to a file on disk, a GUI window, or an IPython notebook cell.
This tutorial gives an overview of the different types of options available, how to find out more about them, and how to set them in both regular Python and using the IPython magic interface that is shown elsewhere in the tutorials.
Example objects
First, we'll create some HoloViews data objects ready to visualize
Step1: Rendering and saving objects from Python <a id='python-saving'></a>
To illustrate how to do plotting independently of IPython, we'll generate and save a plot directly to disk. First, let's create a renderer object that will render our files to SVG (for static figures) or GIF (for animations)
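One way such a renderer can be built with a recent HoloViews release (a hedged sketch; the exact call used in the original notebook may differ):
import holoviews as hv
renderer = hv.renderer('matplotlib').instance(fig='svg', holomap='gif')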
Step2: We could instead have used the default Store.renderer, but that would have been PNG format. Using this renderer, we can save any HoloViews object as SVG or GIF
Step3: That's it! The renderer builds the figure in matplotlib, renders it to SVG, and saves that to "example_I.svg" on disk. Everything up to this point would have worked the same in IPython or in regular Python, even with no display available. But since we're in IPython Notebook at the moment, we can check whether the exporting worked, by loading the file back into the notebook
Step4: You can use this workflow for generating HoloViews visualizations directly from Python, perhaps as a part of a set of scripts that you run automatically, e.g. to put your results up on a web server as soon as data is generated. But so far, this plot just uses all the default options, with no customization. How can we change how the plot will appear when we render it?
HoloViews visualization options
HoloViews provides three categories of visualization options that can be set by the user. In this section we will first describe the different kinds of options, then later sections show you how to list the supported options of each type for a given HoloViews object or class, and how to change them in Python or IPython.
style options
Step5: This information can be useful, but we have explicitly suppressed information regarding the visualization parameters -- these all report metadata about your data, not about anything to do with plotting directly. That's because the normal HoloViews components have nothing to do with plotting; they are just simple containers for your data and a small amount of metadata.
Instead, the plotting implementation and its associated parameters are kept in completely separate Python classes and objects. To find out about visualizing a HoloViews component like an Image, you can simply use the help command holoviews.help(object-or-class) that looks up the code that plots that particular type of component, and then reports the style and plot options available for it.
For our image example, holoviews.help first finds that image is of type Image, then looks in its database to find that Image visualization is handled by the RasterPlot class (which users otherwise rarely need to access directly). holoviews.help then shows information about what objects are available to customize (either the object itself, or the items inside a container), followed by a brief list of style options supported by a RasterPlot, and a list of plot options (which are all the parameters of a RasterPlot). As this list of plot options is very long by default, here is an example that uses the pattern argument to limit the results to the options referencing the string 'bounds'
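A sketch of what such a call might look like (output omitted here):
# the pattern argument accepts a regular expression, as noted below
holoviews.help(image, pattern='bounds')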
Step6: The pattern option is particularly useful in conjunction with recursive=True which helps when searching for information across the different levels of a composite object. Note that the pattern argument supports Python's regular expression syntax and may also be used together with the visualization=False option.
Supported style options
As you can see, HoloViews lists the currently allowed style options, but provides no further documentation because these settings are implemented by matplotlib and described at the matplotlib site. Note that matplotlib actually accepts a huge range of additional options, but they are not listed as being allowed because those options are not normally meaningful for this plot type. But if you know of a specific matplotlib option not on the list and really want to use it, you can add it manually to the list of supported options using Store.add_style_opts(holoviews-component-class, ['matplotlib-option ...']). For instance, if you want to use the filternorm parameter with this image object, you would run Store.add_style_opts(Image, ['filternorm']). This will add the new option to the corresponding plotting class RasterPlot, ready for use just like any other style option
Step7: Changing plot options at the class level
Any parameter in HoloViews can be set on an object or on the class of the object, so any of the above plot options can be set like
Step8: Here .set_param() allows you to set multiple parameters conveniently, but it works the same as the single-parameter .colorbar example above it. Setting these values at the class level affects all previously created and to-be-created plotting objects of this type, unless specifically overridden via Store as described below.
Note that if you look at the source code for a particular plotting class, you will only see some of the parameters it supports. The rest, such as show_frame above, are defined in a superclass of the given object. The Reference Manual shows the complete list of parameters available for any given class (those labeled param in the manual), but it can be an overwhelming list since it includes all superclasses, all the metadata about each parameter, etc. The holoviews.help command with visualization=True not only provides a much more concise listing, but will also provide style options not available in the Reference Manual, by using the database to determine which plotting class is associated with this object.
Because setting these parameters at the class level does not provide much control over individual plots, HoloViews provides a much more flexible system using the OptionTree mechanisms described below, which can override these class defaults according to the more specific HoloViews object type, group, and label attributes.
The rest of the sections show how to change any of the above options, once you have found the right one using the suitable call to holoviews.help.
Controlling options from Python
Once you know the name of the option you want to change, and the value you want to change it to, there are a number of ways to customize your plot.
For the Python output to SVG example above, you can specify the options for a given type using keywords supplying a dictionary for any of the above option categories. You can see that the colormap changes when we supply that style option and render a new SVG
Step9: As before, the SVG call is simply to display it here in the notebook; the actual image is saved on disk and then loaded back in here for display.
You can see that the image now has a colorbar, because we set colorbar=True on the RasterPlot class, that it has become blue, because we set the matplotlib cmap style option in the renderer.save call, and that the y axis has been disabled, because we set the plot option yaxis to None (which is normally 'left' by default, as you can see in the default value for RasterPlot's parameter yaxis above). Hopefully you can see that once you know the option value you want to use, it can be provided easily.
You can also create a whole set of options separately, perhaps holding a large collection of preferred values, and apply it whenever you wish to save
Step10: Here you can see that the y axis has returned, because our previous setting to turn it off was just for the call to renderer.save. But we still have a colorbar, because that parameter was set at the class level, for all future plots of this type. Note that this form of option setting, while more verbose, accepts the full {type}[.{group}[.{label}]] syntax, like 'Image.Function.Sine' or 'Image.Function', while the shorter keyword approach above only supports the class, like 'Image'.
Note that for the options dictionary, the option nesting is inverted compared to the keyword approach
Step11: Here we could save the object to SVG just as before, but in this case we can skip a step and simply view it directly in the notebook
Step12: To customize options of individual components in composite objects like Overlays or Layouts you can either specify the options on each individual component or specify which object to customize using the {type}[.{group}[.{label}]] syntax.
Step13: Both IPython notebook and renderer.save() use the same mechanisms for keeping track of the options, so they will give the same results. Specifically, what happens when you "bind" a set of options to an object is that there is an integer ID stored in the object (green_sine in this case), and a corresponding entry with that ID is stored in a database of options called an OptionTree (kept in holoviews.core.options.Store). The object itself is otherwise unchanged, but then if that object is later used in another container, etc. it will retain its ID and therefore its customization. Any customization stored in an OptionTree will override any class attribute defaults set like RasterGridPlot.border=5 above. This approach lets HoloViews keep track of any customizations you want to make, without ever affecting your actual data objects.
If the same object is later customized again to create a new customized object, the old customizations will be copied, and then the new customizations applied. The new customizations will thus override the old, while retaining any previous customizations not specified in the new step.
In this way, it is possible to build complex objects with arbitrary customization, step by step. As mentioned above, it is also possible to customize objects already combined into a complex container, just by specifying an option for a suitable key (e.g. 'Image.Function.Sine' above). This flexible system should allow for any level of customization that is needed.
Finally, there is one more way to apply options that is a mix of the above approaches -- temporarily assign a new ID to the object and apply a set of customizations during a specific portion of the code. To illustrate this, we'll create a new Image object called 'Cosine'
Step14: Here the result is in red as it was generated in the context of a 'Reds' colormap but if we display cosine again outside the scope of the with statement, it retains the default settings
Step15: Note that if we want to use this context manager to set new options on the existing green_sine object, you must specify that the options apply to a specific Image by stating the applicable group and label
Step16: Now the result inside the context is purple but elswhere green_sine remains green. If the group and label had not been specified above, the specific customization applied earlier (setting the green colormap) would take precedence over the general settings of Image. For this reason, it is important to know the appropriate precedence of new customizations, or else you can just always specify the object group and label to make sure the new settings override the old ones.
Controlling options in IPython using %%opts and %opts
The above sections describe how to set all of the options using regular Python. Similar functionality is provided in IPython, but with a more convenient syntax based on an IPython magic command
Step17: The %%opts magic works like the pure-Python option for associating options with an object, except that it works on the item in the IPython cell, and it affects the item directly rather than making a copy or applying only in scope. Specifically, it assigns a new ID number to the object returned from this cell, and makes a new OptionTree containing the options for that ID number.
If the same layout object is used later in the notebook, even within a complicated container object, it will retain the options set on it.
The options accepted are just the same as for the Python version, but specified more succinctly
Step18: The color of the curve has been changed to red and the fontsizes of the x-axis label and all the tick labels have been modified. The fontsize is an important plot option, and you can find more information about the available options in the fontsize documentation above.
The %%opts magic is designed to allow incremental customization, which explains why the curve in the cell above has retained the increased thickness specified earlier. To reset all the customizations that have been applied to an object, you can create a fresh, uncustomized copy as follows
Step19: The %opts "line" magic (with one %) works just the same as the %%opts "cell" magic, but it changes the global default options for all future cells, allowing you to choose a new default colormap, line width, etc.
Apart from its brevity, a big benefit of using the IPython magic syntax %%opts or %opts is that it is fully tab-completable. Each of the options that is currently available will be listed if you press <TAB> when you are ready to write it, which makes it much easier to find the right parameter. Of course, you will still need to consult the full holoviews.help documentation (described above) to see the type, allowable values, and documentation for each option, but the tab completion should at least get you started and is great for helping you remember the list of options and see which options are available.
You can even use the succinct IPython-style specification directly in your Python code if you wish, but it requires the external pyparsing library (which is already available if you are using matplotlib)
Step20: There is also a special IPython syntax for listing the visualization options for a plotting object in a pop-up window that is equivalent to calling holoviews.help(object) | Python Code:
import numpy as np
import holoviews as hv
hv.notebook_extension()
x,y = np.mgrid[-50:51, -50:51] * 0.1
image = hv.Image(np.sin(x**2+y**2), group="Function", label="Sine")
coords = [(0.1*i, np.sin(0.1*i)) for i in range(100)]
curve = hv.Curve(coords)
curves = {phase: hv.Curve([(0.1*i, np.sin(phase+0.1*i)) for i in range(100)])
for phase in [0, np.pi/2, np.pi, np.pi*3/2]}
waves = hv.HoloMap(curves)
layout = image + curve
Explanation: HoloViews is designed to be both highly customizable, allowing you to control how your visualizations appear, but also to enforce a strong separation between your data (with any semantically associated metadata, like type, dimension names, and description) and all options related purely to visualization. This separation allows HoloViews objects to be generated easily by external programs, without giving them a dependency on any plotting or windowing libraries. It also helps make it completely clear which parts of your code deal with the actual data, and which are just about displaying it nicely, which becomes very important for complex visualizations that become more complicated than your data itself.
To achieve this separation, HoloViews stores visualization options independently from your data, and applies the options only when rendering the data to a file on disk, a GUI window, or an IPython notebook cell.
This tutorial gives an overview of the different types of options available, how to find out more about them, and how to set them in both regular Python and using the IPython magic interface that is shown elsewhere in the tutorials.
Example objects
First, we'll create some HoloViews data objects ready to visualize:
End of explanation
renderer = hv.Store.renderers['matplotlib'].instance(fig='svg', holomap='gif')
Explanation: Rendering and saving objects from Python <a id='python-saving'></a>
To illustrate how to do plotting independently of IPython, we'll generate and save a plot directly to disk. First, let's create a renderer object that will render our files to SVG (for static figures) or GIF (for animations):
End of explanation
renderer.save(layout, 'example_I')
Explanation: We could instead have used the default Store.renderer, but that would have been PNG format. Using this renderer, we can save any HoloViews object as SVG or GIF:
End of explanation
from IPython.display import SVG
SVG(filename='example_I.svg')
Explanation: That's it! The renderer builds the figure in matplotlib, renders it to SVG, and saves that to "example_I.svg" on disk. Everything up to this point would have worked the same in IPython or in regular Python, even with no display available. But since we're in IPython Notebook at the moment, we can check whether the exporting worked, by loading the file back into the notebook:
End of explanation
hv.help(image, visualization=False)
Explanation: You can use this workflow for generating HoloViews visualizations directly from Python, perhaps as a part of a set of scripts that you run automatically, e.g. to put your results up on a web server as soon as data is generated. But so far, this plot just uses all the default options, with no customization. How can we change how the plot will appear when we render it?
HoloViews visualization options
HoloViews provides three categories of visualization options that can be set by the user. In this section we will first describe the different kinds of options, then later sections show you how to list the supported options of each type for a given HoloViews object or class, and how to change them in Python or IPython.
style options:
style options are passed directly to the underlying rendering backend that actually draws the plots, allowing you to control the details of how it behaves. The default backend is matplotlib, but there are other backends either using matplotlib's options (e.g. mpld3), or their own sets of options (e.g. bokeh ).
For whichever backend has been selected, HoloViews can tell you which options are supported, but you will need to see the plotting library's own documentation (e.g. matplotlib, bokeh) for the details of their use.
HoloViews has been designed to be easily extensible to additional backends in the future, such as Plotly, Cairo, VTK, or D3.js, and if one of those backends were selected then the supported style options would differ.
plot options:
Each of the various HoloViews plotting classes declares various Parameters that control how HoloViews builds the visualization for that type of object, such as plot sizes and labels. HoloViews uses these options internally; they are not simply passed to the underlying backend. HoloViews documents these options fully in its online help and in the Reference Manual. These options may vary for different backends in some cases, depending on the support available both in that library and in the HoloViews interface to it, but we try to keep any options that are meaningful for a variety of backends the same for all of them.
norm options:
norm options are a special type of plot option that are applied orthogonally to the above two types, to control normalization. Normalization refers to adjusting the properties of one plot relative to those of another. For instance, two images normalized together would appear with relative brightness levels, with the brightest image using the full range black to white, while the other image is scaled proportionally. Two images normalized independently would both cover the full range from black to white. Similarly, two axis ranges normalized together will expand to fit the largest range of either axis, while those normalized separately would cover different ranges.
There are currently only two norm options supported, axiswise and framewise, but they can be applied to any of the various object types in HoloViews to specify a huge range of different normalization options.
For a given category or group of HoloViews objects, if axiswise is True, normalization will be computed independently for all items in that category that have their own axes, such as different Image plots or Curve plots. If axiswise is False, all such objects are normalized together.
For a given category or group of HoloViews objects, if framewise is True, normalization of any HoloMap objects included is done independently per frame rendered -- each frame will appear as it would if it were extracted from the HoloMap and plotted separately. If framewise is False (the default), all frames in a given HoloMap are normalized together, so that you can see strength differences over the course of the animation.
As described below, these options can be controlled precisely and in any combination to make sure that HoloViews displays the data of most interest, ignoring irrelevant differences and highlighting important ones.
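For instance, the options-dictionary form used later in this tutorial could carry a 'norm' entry alongside 'plot' and 'style' (a sketch only — whether 'norm' is accepted exactly like this is an assumption based on the three option categories described above):
norm_options = {'Image': {'norm': dict(axiswise=False, framewise=False)}}
renderer.save(layout, 'example_norm', options=norm_options)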
Finding out which options are available for an object
For the norm options, no further online documentation is provided, because all of the various visualization classes support only the two options described above. But there are a variety of ways to get the list of supported style options and detailed documentation for the plot options for a given component.
First, for any Python class or object in HoloViews, you can use holoviews.help(object-or-class, visualization=False) to find out about its parameters. For instance, these parameters are available for our Image object, shown with their current value (or default value, for a class), data type, whether it can be changed by the user (if it is constant, read-only, etc.), and bounds if any:
End of explanation
hv.help(image, pattern='bounds')
Explanation: This information can be useful, but we have explicitly suppressed information regarding the visualization parameters -- these all report metadata about your data, not about anything to do with plotting directly. That's because the normal HoloViews components have nothing to do with plotting; they are just simple containers for your data and a small amount of metadata.
Instead, the plotting implementation and its associated parameters are kept in completely separate Python classes and objects. To find out about visualizing a HoloViews component like an Image, you can simply use the help command holoviews.help(object-or-class) that looks up the code that plots that particular type of component, and then reports the style and plot options available for it.
For our image example, holoviews.help first finds that image is of type Image, then looks in its database to find that Image visualization is handled by the RasterPlot class (which users otherwise rarely need to access directly). holoviews.help then shows information about what objects are available to customize (either the object itself, or the items inside a container), followed by a brief list of style options supported by a RasterPlot, and a list of plot options (which are all the parameters of a RasterPlot). As this list of plot options is very long by default, here is an example that uses the pattern argument to limit the results to the options referencing the string 'bounds':
End of explanation
hv.Store.add_style_opts(hv.Image, ['filternorm'])
# To check that it worked:
RasterPlot = renderer.plotting_class(hv.Image)
print(RasterPlot.style_opts)
Explanation: The pattern option is particularly useful in conjunction with recursive=True which helps when searching for information across the different levels of a composite object. Note that the pattern argument supports Python's regular expression syntax and may also be used together with the visualization=False option.
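For example, the whole composite layout can be searched at once (a sketch; the pattern shown is just an illustration):
hv.help(layout, pattern='color', recursive=True)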
Supported style options
As you can see, HoloViews lists the currently allowed style options, but provides no further documentation because these settings are implemented by matplotlib and described at the matplotlib site. Note that matplotlib actually accepts a huge range of additional options, but they are not listed as being allowed because those options are not normally meaningful for this plot type. But if you know of a specific matplotlib option not on the list and really want to use it, you can add it manually to the list of supported options using Store.add_style_opts(holoviews-component-class, ['matplotlib-option ...']). For instance, if you want to use the filternorm parameter with this image object, you would run Store.add_style_opts(Image, ['filternorm']). This will add the new option to the corresponding plotting class RasterPlot, ready for use just like any other style option:
End of explanation
RasterPlot.colorbar=True
RasterPlot.set_param(show_title=False,show_frame=True)
Explanation: Changing plot options at the class level
Any parameter in HoloViews can be set on an object or on the class of the object, so any of the above plot options can be set like:
End of explanation
renderer.save(layout, 'example_II', style=dict(Image={'cmap':'Blues'}),
plot= dict(Image={'yaxis':None}))
SVG(filename='example_II.svg')
Explanation: Here .set_param() allows you to set multiple parameters conveniently, but it works the same as the single-parameter .colorbar example above it. Setting these values at the class level affects all previously created and to-be-created plotting objects of this type, unless specifically overridden via Store as described below.
Note that if you look at the source code for a particular plotting class, you will only see some of the parameters it supports. The rest, such as show_frame above, are defined in a superclass of the given object. The Reference Manual shows the complete list of parameters available for any given class (those labeled param in the manual), but it can be an overwhelming list since it includes all superclasses, all the metadata about each parameter, etc. The holoviews.help command with visualization=True not only provides a much more concise listing, but will also provide style options not available in the Reference Manual, by using the database to determine which plotting class is associated with this object.
Because setting these parameters at the class level does not provide much control over individual plots, HoloViews provides a much more flexible system using the OptionTree mechanisms described below, which can override these class defaults according to the more specific HoloViews object type, group, and label attributes.
The rest of the sections show how to change any of the above options, once you have found the right one using the suitable call to holoviews.help.
Controlling options from Python
Once you know the name of the option you want to change, and the value you want to change it to, there are a number of ways to customize your plot.
For the Python output to SVG example above, you can specify the options for a given type using keywords supplying a dictionary for any of the above option categories. You can see that the colormap changes when we supply that style option and render a new SVG:
End of explanation
options={'Image.Function.Sine': {'plot':dict(fig_size=50), 'style':dict(cmap='jet')}}
renderer.save(layout, 'example_III',options=options)
SVG(filename='example_III.svg')
Explanation: As before, the SVG call is simply to display it here in the notebook; the actual image is saved on disk and then loaded back in here for display.
You can see that the image now has a colorbar, because we set colorbar=True on the RasterPlot class, that it has become blue, because we set the matplotlib cmap style option in the renderer.save call, and that the y axis has been disabled, because we set the plot option yaxis to None (which is normally 'left' by default, as you can see in the default value for RasterPlot's parameter yaxis above). Hopefully you can see that once you know the option value you want to use, it can be provided easily.
You can also create a whole set of options separately, perhaps holding a large collection of preferred values, and apply it whenever you wish to save:
End of explanation
green_sine = image(style={'cmap':'Greens'})
Explanation: Here you can see that the y axis has returned, because our previous setting to turn it off was just for the call to renderer.save. But we still have a colorbar, because that parameter was set at the class level, for all future plots of this type. Note that this form of option setting, while more verbose, accepts the full {type}[.{group}[.{label}]] syntax, like 'Image.Function.Sine' or 'Image.Function', while the shorter keyword approach above only supports the class, like 'Image'.
Note that for the options dictionary, the option nesting is inverted compared to the keyword approach: the outermost dictionary is by key (Image, or Image.Function.Sines), with the option categories underneath. You can see that with this mechanism, we can specify the options even for subobjects of a container, as long as we can specify them with an appropriate key.
There's also another way to customize options in Python that lets you build up customizations incrementally. To do this, you can associate a particular set of options persistently with a particular HoloViews object, even if that object is later combined with other objects into a container. Here a new copy of the object is created, with the given set of options (using either the keyword or options= format above) bound to it:
End of explanation
green_sine
Explanation: Here we could save the object to SVG just as before, but in this case we can skip a step and simply view it directly in the notebook:
End of explanation
(image + curve)(style={'Image.Function.Sine': dict(cmap='Reds'), 'Curve': dict(color='indianred')})
Explanation: To customize options of individual components in composite objects like Overlays or Layouts you can either specify the options on each individual component or specify which object to customize using the {type}[.{group}[.{label}]] syntax.
End of explanation
cosine = hv.Image(np.cos(x**2+y**2), group="Function", label="Cosine")
with hv.StoreOptions.options(cosine, options={'Image':{'style':{'cmap':'Reds'}}}):
data, info = renderer(cosine)
print(info)
SVG(data)
Explanation: Both IPython notebook and renderer.save() use the same mechanisms for keeping track of the options, so they will give the same results. Specifically, what happens when you "bind" a set of options to an object is that there is an integer ID stored in the object (green_sine in this case), and a corresponding entry with that ID is stored in a database of options called an OptionTree (kept in holoviews.core.options.Store). The object itself is otherwise unchanged, but then if that object is later used in another container, etc. it will retain its ID and therefore its customization. Any customization stored in an OptionTree will override any class attribute defaults set like RasterGridPlot.border=5 above. This approach lets HoloViews keep track of any customizations you want to make, without ever affecting your actual data objects.
If the same object is later customized again to create a new customized object, the old customizations will be copied, and then the new customizations applied. The new customizations will thus override the old, while retaining any previous customizations not specified in the new step.
In this way, it is possible to build complex objects with arbitrary customization, step by step. As mentioned above, it is also possible to customize objects already combined into a complex container, just by specifying an option for a suitable key (e.g. 'Image.Function.Sine' above). This flexible system should allow for any level of customization that is needed.
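As a quick illustration (a sketch that assumes the integer ID described above is exposed as an id attribute on the objects):
print(green_sine.id)   # the customized copy carries an integer key into the OptionTree
print(image.id)        # the original object is left unmodified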
Finally, there is one more way to apply options that is a mix of the above approaches -- temporarily assign a new ID to the object and apply a set of customizations during a specific portion of the code. To illustrate this, we'll create a new Image object called 'Cosine':
End of explanation
cosine
Explanation: Here the result is in red as it was generated in the context of a 'Reds' colormap but if we display cosine again outside the scope of the with statement, it retains the default settings:
End of explanation
with hv.StoreOptions.options(green_sine, options={'Image.Function.Sine':{'style':{'cmap':'Purples'}}}):
data, info = renderer(green_sine)
print(info)
SVG(data)
Explanation: Note that if we want to use this context manager to set new options on the existing green_sine object, you must specify that the options apply to a specific Image by stating the applicable group and label:
End of explanation
%%opts Curve style(linewidth=8) Image style(interpolation='bilinear') plot[yaxis=None] norm{+framewise}
layout
Explanation: Now the result inside the context is purple, but elsewhere green_sine remains green. If the group and label had not been specified above, the specific customization applied earlier (setting the green colormap) would take precedence over the general settings of Image. For this reason, it is important to know the appropriate precedence of new customizations, or else you can just always specify the object group and label to make sure the new settings override the old ones.
Controlling options in IPython using %%opts and %opts
The above sections describe how to set all of the options using regular Python. Similar functionality is provided in IPython, but with a more convenient syntax based on an IPython magic command:
End of explanation
%%opts Curve (color='r') [fontsize={'xlabel':15, 'ticks':8}]
layout
Explanation: The %%opts magic works like the pure-Python option for associating options with an object, except that it works on the item in the IPython cell, and it affects the item directly rather than making a copy or applying only in scope. Specifically, it assigns a new ID number to the object returned from this cell, and makes a new OptionTree containing the options for that ID number.
If the same layout object is used later in the notebook, even within a complicated container object, it will retain the options set on it.
The options accepted are just the same as for the Python version, but specified more succinctly:
%%opts target-specification style(styleoption=val ...) plot[plotoption=val ...] norm{+normoption -normoption...}
Here key lets you specify the object type (e.g. Image), and optionally its group (e.g. Image.Function) or even both group and label (e.g. Image.Function.Sine), if you want to control options very precisely. There is also an even further abbreviated syntax, because the special bracket types alone are enough to indicate which category of option is specified:
%%opts target-specification (styleoption=val ...) [plotoption=val ...] {+normoption -normoption ...}
Here parentheses indicate style options, square brackets indicate plot options, and curly brackets indicate norm options (with +axiswise and +framewise indicating True for those values, and -axiswise and -framewise indicating False). Additional target-specifications and associated options of each type for that target-specification can be supplied at the end of this line. This ultra-concise syntax is used throughout the other tutorials, because it helps minimize the code needed to specify the plotting options, and helps make it very clear that these options are handled separately from the actual data.
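For instance, all three bracket types can be combined for a single target (an illustrative sketch; the particular options are arbitrary choices):
%%opts Curve [show_grid=True] (linestyle='--') {+framewise}
waves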
Here we demonstrate the concise syntax by customizing the style and plot options of the Curve in the layout:
End of explanation
layout()
Explanation: The color of the curve has been changed to red and the fontsizes of the x-axis label and all the tick labels have been modified. The fontsize is an important plot option, and you can find more information about the available options in the fontsize documentation above.
The %%opts magic is designed to allow incremental customization, which explains why the curve in the cell above has retained the increased thickness specified earlier. To reset all the customizations that have been applied to an object, you can create a fresh, uncustomized copy as follows:
End of explanation
from holoviews.ipython.parser import OptsSpec
renderer.save(image + waves, 'example_V',
options=OptsSpec.parse("Image (cmap='gray')"))
Explanation: The %opts "line" magic (with one %) works just the same as the %%opts "cell" magic, but it changes the global default options for all future cells, allowing you to choose a new default colormap, line width, etc.
Apart from its brevity, a big benefit of using the IPython magic syntax %%opts or %opts is that it is fully tab-completable. Each of the options that is currently available will be listed if you press <TAB> when you are ready to write it, which makes it much easier to find the right parameter. Of course, you will still need to consult the full holoviews.help documentation (described above) to see the type, allowable values, and documentation for each option, but the tab completion should at least get you started and is great for helping you remember the list of options and see which options are available.
You can even use the succinct IPython-style specification directly in your Python code if you wish, but it requires the external pyparsing library (which is already available if you are using matplotlib):
End of explanation
%%output info=True
curve
Explanation: There is also a special IPython syntax for listing the visualization options for a plotting object in a pop-up window that is equivalent to calling holoviews.help(object):
End of explanation |
15,379 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fundamentals
The velocity of an object can be calculated using \ref{eq
Step1: Analysis
This chapter deals with the possibilities and tricks of error analysis. Normally this chapter would be kept separate, but the goal of this experiment is to become more familiar with error analysis.
Step2: Speed of sound
The speed of sound is to be determined from the mean transit time over a known distance.
Measured values
<center>
<t>Länge der Messstrecke</t>
Step3: <np />
Iron content
The iron content of an alloy is to be determined. For this purpose several samples were prepared and their content was determined. The true value is to be approximated by averaging.
Measured values
Step4: {{v2_data}}
Simple mean
The simple mean and its error are obtained analogously to exercise 1.
<center>
$\overline{x}$ = {{'{0
Step5: <np />
Spring constant
Measured values
Step6: {{v3_data}}
Calculation with a pocket calculator
Linear regression
With the calculator
The slope of the regression line, and thus the spring constant $k$, is obtained as follows
Step7: The fit yields the following relevant values for the experiment
Step8: Offset, amplitude, frequency and phase of a pendulum
For a pendulum, the displacement in the y-direction is known at various times $t_i$.
Using the method of least squares, the offset, amplitude, frequency and phase of the pendulum can be determined.
The pendulum function that is approximated by the fit is written as follows
Step9: <np />
Low-pass filter
A sinusoidal AC voltage with an amplitude of $U_E = 4 V_{pp}$ and a variable frequency
was applied to the input of an RC low-pass filter. The output voltage $U_A$
and the phase shift $\varphi$ were measured as a function of the frequency $f$.
The resistance $R$ is specified as $R=500\Omega$.
Step10: Measured values
<center>
$U_e$ = {{v5_Ue}}$V_{pp}$ => ±{{v5_Ue / 2}}$\hat{V}$
$R$ = {{v5_R}}$\Omega$
</center>
{{v5_data}}
Berechnung von C
Die Kapazität C kann durch zwei verschiedene Funktionen bestimmt werden | Python Code:
# Preparations
import math
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit
import seaborn as sns
from IPython.display import Latex
import warnings
from PrettyTable import PrettyTable
warnings.filterwarnings("ignore", module="matplotlib")
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
plt.rcParams['savefig.dpi'] = 75
# plt.rcParams['figure.autolayout'] = False
# plt.rcParams['figure.figsize'] = 10, 6
plt.rcParams['axes.labelsize'] = 18
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['font.size'] = 16
plt.rcParams['lines.linewidth'] = 2.0
plt.rcParams['lines.markersize'] = 8
plt.rcParams['legend.fontsize'] = 14
plt.rcParams['text.usetex'] = True
plt.rcParams['text.latex.unicode'] = True
plt.rcParams['font.family'] = "STIX"
plt.rcParams['text.latex.preamble'] = "\\usepackage{subdepth}, \\usepackage{type1cm}"
sns.set(color_codes=True)
def average(data):
return 1 / len(data) * sum(data)
def error(data, average_of_data):
s = sum([(x - average_of_data)**2 for x in data])
return math.sqrt(s / (len(data) * (len(data) - 1)))
def std_deviation(error_of_average, length_of_dataset):
return error_of_average * math.sqrt(length_of_dataset)
def average_with_weights(data, weights):
d = data
w = weights
return (d * w**-2).sum() / (w**-2).sum()
def error_with_weights(weights):
w = weights
return 1 / math.sqrt((w**-2).sum())
def wavg(group, avg_name, weight_name):
d = group[avg_name]
w = group[weight_name]
return (d * w**-2).sum() / (w**-2).sum()
def werr(group, weight_name):
return 1 / math.sqrt((group[weight_name]**-2).sum())
Explanation: Fundamentals
The velocity of an object can be calculated using \ref{eq:velocity}.
\begin{equation}
v = \frac{s}{t}
\label{eq:velocity}
\end{equation}
Procedure
The measured values for the individual experiments were provided by the lecturer. They can all be found in the appendix.
The tools used to carry out the experiments include a pocket calculator, Excel and Python with various libraries (Jupyter, Scipy, Pandas, Matplotlib, Seaborn).
<np />
End of explanation
# Evaluate Data
# Read Data
v1_df = pd.read_csv('data/laufzeiten.csv')
v1_s = 2.561
v1_s_u = 0.003
v1_theta = 23
# Calculate mean etc.
v1_mean = v1_df.mean()['t']
v1_sem = v1_df.sem()['t']
v1_std = v1_df.std()['t']
# Calculate velocity & error
v1_v = v1_s / v1_mean
v1_v_u = math.sqrt((v1_s_u / v1_mean)**2 + (-v1_s * v1_sem / v1_mean**2)**2)
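# Optional cross-check (added; not part of the original analysis): the third-party
# `uncertainties` package performs the same first-order error propagation and should
# reproduce v1_v and v1_v_u, e.g.:
# from uncertainties import ufloat
# print(ufloat(v1_s, v1_s_u) / ufloat(v1_mean, v1_sem))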
v1_error_percent_v = v1_v_u / v1_v * 100
v1_error_percent_t = v1_sem / v1_mean * 100
v1_data = PrettyTable(list(zip(v1_df['measurement'], v1_df['t'])), entries_per_column=10, extra_header=['Messung [1]', 'Laufzeit [s]'])
Explanation: Analysis
This chapter deals with the possibilities and tricks of error analysis. Normally this chapter would be kept separate, but the goal of this experiment is to become more familiar with error analysis.
End of explanation
# Plot all values
ax = v1_df.plot(kind='scatter', x='measurement', y='t', label='gemessene Laufzeit')
plt.xlabel('Messung [1]')
plt.xlim([0, len(v1_df['t']) + 1])
plt.ylim([0.0065, 0.0085])
plt.ylabel('t [ms]')
plt.axhline(y=v1_mean, axes=ax, color='red', label='Mittelwert')
plt.axhline(y=v1_mean+v1_sem, axes=ax, color='green', label='Mittelwert ± Fehler')
plt.axhline(y=v1_mean-v1_sem, axes=ax, color='green')
plt.axhline(y=v1_mean+v1_std, axes=ax, color='purple', label='Mittelwert ± Standardabweichung')
plt.axhline(y=v1_mean-v1_std, axes=ax, color='purple')
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
plt.show()
#plt.savefig('laufzeiten.pgf')
Explanation: Speed of sound
The speed of sound is to be determined from the mean transit time over a known distance.
Measured values
<center>
<t>Länge der Messstrecke</t>: s = {{v1_s}}±{{v1_s_u}}m
<t>Raumtemparatur</t>: $\theta$ = {{v1_theta}}$^{\circ}$C
</center>
{{v1_data}}
<np />
Mean transit time and its uncertainty according to Wikipedia
Wikipedia gives a formula \ref{eq:vschall} for calculating the speed of sound at a given temperature; it can also be found in Horst Kuchling's Taschenbuch der Physik.
\begin{equation}
c_{luft} = (331.3 + 0.606 \cdot \theta) \frac{m}{s} = (331.3 + 0.606 \cdot 23) \frac{m}{s} = 345.24\frac{m}{s}
\label{eq:vschall}
\end{equation}
Mittlere Laufzeit und ihre Unsicherheit
<center>
<t>Mittlere Laufzeit</t>: $\overline{t}$ = $\frac{1}{20} \sum_{i=1}^{20}{t_i}$ = {{'{0:.2f}'.format(v1_mean*1e3)}}ms
<t>Fehler der mittleren Laufzeit</t>: $s_{\overline{i}}$ = $\sqrt{\frac{\sum_{1}^{20}{(t_i-\overline{t})^2}}{20 \cdot 19}}$ = {{'{0:.6f}'.format(v1_sem)}}ms
<t>Standardabweichung</t>: s = $\sqrt{\frac{\sum_1^{20}{(t_i-\overline{t})^2}}{19}}$ = {{'{0:.5f}'.format(v1_std)}}ms
</center>
Mithilfe des zuvor ermittelten Mittelwertes kann die Mittlere Schallgeschwindigkeit als:
<center>
$\overline{c}$ = {{('{0:.2f}'.format(v1_v))}}$\frac{m}{s}$
</center>
festgestellt werden.
The uncertainty of the mean speed of sound can be calculated with the Gaussian law of error propagation, as shown in (1).
<center>
$R(x, y)$ = $c(s, t)$ = $\frac{s}{t}$
$S_{\overline{R}}$ = $\sqrt{(\frac{\partial R}{\partial x}|{\overline{R}}\cdot s{\overline{x}})^2 + (\frac{\partial R}{\partial y}|{\overline{R}}\cdot s{\overline{y}})^2}$
$S_{\overline{R}}$ = $\sqrt{(\frac{1}{\overline{t}}s_{\overline{s}})^2+(-\frac{\overline{s}}{\overline{t}^2}s_{\overline{t}})^2}$
$s_{\overline{s}}$ = {{'{0:.2f}'.format(v1_v_u)}}$\frac{m}{s}$
<t>Relativer Fehler der Zeit</t>: {{'{0:.2f}'.format(v1_error_percent_t)}}%
<t>Relativer Fehler der Geschwidigkeit</t>: {{'{0:.2f}'.format(v1_error_percent_v)}}%
</center>
End of explanation
# Evaluate Data
# Read Data
v2_df = pd.read_csv('data/eisengehalt.csv')
# Calculate mean etc.
v2_mean = v2_df.mean()['content']
v2_sem = v2_df.sem()['content']
v2_weightedmean = wavg(v2_df, 'content', 'error')
v2_weightedsem = werr(v2_df, 'error')
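# wavg/werr (defined in the preparation cell) weight each probe by 1/error**2, so the
# more precise measurements pull the weighted mean more strongly.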
v2_data = PrettyTable(list(zip(v2_df['measurement'], v2_df['content'], v2_df['error'])), entries_per_column=len(v2_df['error']), extra_header=['Messung [1]', 'Gehalt [%]', 'Absoluter Fehler [%]'])
Explanation: <np />
Iron content
The iron content of an alloy is to be determined. For this purpose several samples were prepared and their content was determined. The true value is to be approximated by averaging.
Measured values
End of explanation
# Plot all data
ax = v2_df.plot(yerr=v2_df['error'], kind='scatter', x='measurement', y='content', label='Eisengehalt in Legierung')
plt.axhline(y=v2_mean, axes=ax, color='purple', label='Ungewichteter Mittelwert')
plt.axhline(y=v2_weightedmean, axes=ax, color='red', label='Gewichteter Mittelwert')
plt.xlabel('Messung [1]')
plt.xlim([0, len(v2_df['content']) + 1])
plt.ylabel('Eisengehalt [%]')
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
plt.show()
#plt.savefig('eisengehalt.pgf')
Explanation: {{v2_data}}
Simple mean
The simple mean and its error are obtained analogously to exercise 1.
<center>
$\overline{x}$ = {{'{0:.2f}'.format(v2_mean)}}%
$s_{\overline{x}}$ = {{'{0:.2f}'.format(v2_sem)}}%
</center>
Weighted mean
The weighted mean and its error are determined as
<center>
$\overline{x}$ = $\frac{ \sum_{i=1}^n{ g_{\overline{x_i}} \cdot x_i } }{ \sum_{i=1}^n{ g_{\overline{x_i}} }{ } }$ = {{'{0:.2f}'.format(v2_weightedmean)}}%
$s_{\overline{x}}$ = $\frac{1}{\sqrt{\sum_{i=1}^n{g_{\overline{x_i}}}}}$ = {{'{0:.2f}'.format(v2_weightedsem)}}%
</center>
bestummen.
End of explanation
# Evaluate Data
# Read Data
v3_df = pd.read_csv('data/federkonstante.csv')
# Find best values with a fit
def spring(z, k, F0):
return k * z + F0
v3_values, v3_covar = curve_fit(spring, v3_df['z'], v3_df['F'])
v3_F_fit = [spring(z, v3_values[0], v3_values[1]) for z in v3_df['z']]
v3_slope, v3_intercept, v3_r_value, v3_p_value, v3_std_err = stats.linregress(v3_df['z'], v3_df['F'])
v3_data = PrettyTable(list(zip(v3_df['F'], v3_df['z'])), entries_per_column=len(v3_df['F']), extra_header=['Kraft [N]', 'Auslenkung [m]'])
Explanation: <np />
Federkonstante
Messwerte
End of explanation
# Plot all data
ax = v3_df.plot(kind='scatter', x='z', y='F', label='gemessene Zugkraft')
plt.plot(v3_df['z'], v3_F_fit, axes=ax, label='Zugkraft mit Value Fitting')
plt.xlabel('Auslenkung [m]')
plt.ylabel('Zugkraft [F]')
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
plt.show()
Explanation: {{v3_data}}
Calculation with a pocket calculator
Linear regression
With the calculator
The slope of the regression line, and thus the spring constant $k$, is obtained as follows:
<center>
$k$ = $\frac{\sum_{i=1}^{10}(x_i-\overline{x})(y_i-\overline{y})}{\sum_{i=1}^{10}(x_i-\overline{x})^2}$ = {{'{0:.2f}'.format(v3_values[0])}}$\frac{N}{m}$
</center>
The corresponding intercept, and thus the resting force $F_0$, is calculated from:
<center>
$F_0$ = $\overline{y} - k \cdot \overline{x}$ = {{'{0:.2f}'.format(v3_values[1])}}N
</center>
The empirical correlation is:
<center>
$r_{xy}$ = $\frac{\sum_{1}^{10}(x_i - \overline{x}) \cdot (y_i - \overline{y})}{\sqrt{\sum_{1}^{10}(x_i-\overline{x})^2 \cdot \sum_{1}^{10}(y_i-\overline{y})^2}}$ = {{'{0:.4f}'.format(v3_r_value)}}
</center>
with the corresponding coefficient of determination:
<center>
$R^{2}$ = $r_{xy}^2$ = {{'{0:.4f}'.format(v3_r_value**2)}}
</center>
With scipy
End of explanation
# Evaluate Data
np.seterr(all='ignore')
# Read Data
v4_df = pd.read_csv('data/pendel.csv')
def pendulum(t, A, l, f, d, y0):
return A * np.exp(-l*t)*np.sin(2*math.pi*f*t-d)+y0
v4_values, v4_covar = curve_fit(pendulum, v4_df['t'], v4_df['y'])
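# Note (added): for oscillatory models curve_fit can settle in a poor local optimum when
# no starting guess is given; a hedged sketch with assumed values for (A, l, f, d, y0):
# curve_fit(pendulum, v4_df['t'], v4_df['y'], p0=[1.0, 0.1, 0.5, 0.0, 0.0])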
v4_y_fit = [pendulum(t, v4_values[0], v4_values[1], v4_values[2], v4_values[3], v4_values[4]) for t in v4_df['t']]
v4_valerr = np.sqrt(np.diag(v4_covar))
v4_data = PrettyTable(list(zip(v4_df['t'], v4_df['y'])), entries_per_column=23, extra_header=['Zeit [s]', 'Auslenkung [m]'])
Explanation: The fit yields the following relevant values for the experiment:
<center>
$k$ = {{'{0:.2f}'.format(v3_values[0])}}$\frac{N}{m}$
$F_0$ = {{'{0:.2f}'.format(v3_values[1])}}N
$r_{xy}$ = {{'{0:.4f}'.format(v3_r_value)}}
</center>
End of explanation
# Plot all data
ax = v4_df.plot(kind='scatter', x='t', y='y', label='gemessene Auslenkung')
plt.plot(v4_df['t'], v4_y_fit, axes=ax, label='Auslenkung mit Value Fitting')
plt.xlabel('Zeit [s]')
plt.ylabel('Auslenkung [m]')
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
plt.show()
# Read data
# Calculate parameters of fit
import functools
v5_df = pd.read_csv('data/tiefpass.csv')
v5_Ue = 4
v5_R = 500
def Ua(Ue, R, f, C):
return Ue / 2 / np.sqrt(1+(2*math.pi*f*C*R)**2)
def phi(R, f, C):
return np.arctan(-2*math.pi*f*R*C)/2/math.pi*360
v5_values_Ua, v5_covar_Ua = curve_fit(functools.partial(Ua, v5_Ue, v5_R), v5_df['f'], v5_df['Ua'].div(2))
v5_f_fit = np.linspace(10, v5_df['f'][len(v5_df['f'])-1], 1000)
v5_Ua_fit = [Ua(v5_Ue, v5_R, f, -v5_values_Ua[0]) * 2 for f in v5_f_fit]
v5_valerr_Ua = np.sqrt(np.diag(v5_covar_Ua))
v5_values_phi, v5_covar_phi = curve_fit(functools.partial(phi, v5_R), v5_df['f'], v5_df['phi'])
v5_phi_fit = [phi(v5_R, f, v5_values_phi[0]) for f in v5_f_fit]
v5_valerr_phi = np.sqrt(np.diag(v5_covar_phi))
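# For reference (added): an RC low-pass has cutoff frequency f_c = 1/(2*pi*R*C); with the
# capacitance fitted from the amplitude curve above (its sign is handled as in the plots below):
v5_fc = 1 / (2 * math.pi * v5_R * abs(v5_values_Ua[0]))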
v5_data = PrettyTable(list(zip(v5_df['f'], v5_df['Ua'], v5_df['phi'])), entries_per_column=len(v5_df['f']), extra_header=['Frequenz [Hz]', '$U_a$ [V]', 'Phase [\degree]'])
Explanation: Offset, amplitude, frequency and phase of a pendulum
For a pendulum, the displacement in the y-direction is known at various times $t_i$.
Using the method of least squares, the offset, amplitude, frequency and phase of the pendulum can be determined.
The pendulum function that is approximated by the fit is written as follows:
<center>
$y(t)$ = $A\cdot exp(-\Gamma\cdot t)\cdot sin(2\cdot\pi\cdot f\cdot t-\delta)+y_0$
</center>
Messwerte
{{v4_data}}
Value Fitting
Using the chi-square method (non-linear regression), the following best-fit values were determined with scipy:
<center>
$A$ = ({{'{0:.2f}'.format(v4_values[0])}} ± {{'{0:.2f}'.format(v4_valerr[0])}})m
$\Gamma$= ({{'{0:.2f}'.format(v4_values[1])}} ± {{'{0:.4f}'.format(v4_valerr[1])}})$\frac{1}{s}$
$f$ = ({{'{0:.2f}'.format(v4_values[2])}} ± {{'{0:.4f}'.format(v4_valerr[2])}})Hz
$\delta$ = ({{'{0:.2f}'.format(v4_values[3])}} ± {{'{0:.2f}'.format(v4_valerr[3])}})
$y_0$ = ({{'{0:.2f}'.format(v4_values[4])}} ± {{'{0:.2f}'.format(v4_valerr[4])}})m
</center>
End of explanation
from IPython.display import Image
Image(filename='images/rc-lowpass.png')
Explanation: <np />
Low-pass filter
A sinusoidal AC voltage with an amplitude of $U_E = 4 V_{pp}$ and a variable frequency
was applied to the input of an RC low-pass filter. The output voltage $U_A$ and the
phase shift $\varphi$ were measured as a function of the frequency $f$.
The resistance $R$ is specified as $R=500\Omega$.
End of explanation
# Plot all data
ax = v5_df.plot(kind='scatter', x='f', y='Ua', label='gemessene Spannunsgwerte', logx=True)
plt.plot(v5_f_fit, v5_Ua_fit, axes=ax, label='Spannung mit Value Fit')
plt.xlabel('Frequenz [Hz]')
plt.ylabel('Spannung [V]')
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
plt.show()
# Plot all data
ax = v5_df.plot(kind='scatter', x='f', y='phi', label='gemessene Phase', logx=True)
plt.plot(v5_f_fit, v5_phi_fit, axes=ax, label='Phase mit Value Fit')
plt.xlabel('Frequenz [Hz]')
plt.ylabel('Phase [$^{\circ}$]')
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
plt.show()
Explanation: Messwerte
<center>
$U_e$ = {{v5_Ue}}$V_{pp}$ => ±{{v5_Ue / 2}}$\hat{V}$
$R$ = {{v5_R}}$\Omega$
</center>
{{v5_data}}
Calculation of C
The capacitance C can be determined from two different functions:
<center>
$Û_a$ = $\frac{Û_e}{\sqrt{1+(2\pi fCR)^2}}$
$\phi$ = $\arctan(-\omega RC)$
</center>
From a fit to the output voltage $U_a$, the capacitance is determined as
<center>
$C$ = {{'{0:.2f}'.format(-v5_values_Ua[0] * 1e9)}}nF
$s_C$ = {{'{0:.2f}'.format(v5_valerr_Ua[0] / 1e-9)}}nF
</center>
and from a fit to the phase as
<center>
$C$ = {{'{0:.2f}'.format(v5_values_phi[0] * 1e9)}}nF
$s_C$ = {{'{0:.2f}'.format(v5_valerr_phi[0] / 1e-9)}}nF
</center>
respectively.
End of explanation |
15,380 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Watch Me Code 3
Step1: Map Pins
Step2: Choropleths
Choropleths are cartographic overlays based on boundries defined in a geo JSON file. | Python Code:
! pip install folium
import folium
import pandas as pd
import random
# we need to center the map in the middle of the US. I googled for the location.
CENTER_US = (39.8333333,-98.585522)
london = (51.5074, -0.1278)
map = folium.Map(location=CENTER_US, zoom_start=4)
map
Explanation: Watch Me Code 3: Mapping with Folium
Folium is a Python wrapper library for the OpenStreetMaps api. It allows you to place data on a map in a variety of ways.
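The same map object can also be written to a standalone HTML page for sharing outside the notebook (a small sketch; the file name is arbitrary):
map.save('us_map.html')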
End of explanation
# read in a data file of IP address to locations.
data = pd.read_csv('https://raw.githubusercontent.com/mafudge/datasets/master/clickstream/ip_lookup.csv')
data.sample(5)
from IPython.display import display
# Let's place each location on the map
for row in data.to_records():
pos = (row['ApproxLat'],row['ApproxLng'])
marker = folium.Marker(location=pos,
popup=f"{row['City']},{row['State']}"
)
map.add_child(marker)
display(map)
# Same thing with a different icon and colors. Icons come from http://fontawesome.io/icons/ but its an older version.
colors = ['red', 'blue', 'green', 'purple', 'orange', 'darkred',
'lightred', 'beige', 'darkblue', 'darkgreen', 'cadetblue',
'darkpurple', 'pink', 'lightblue', 'lightgreen',
'gray', 'black', 'lightgray']
for row in data.to_records():
pos = (row['ApproxLat'],row['ApproxLng'])
marker = folium.Marker(location=pos,
popup="%s, %s" % (row['City'],row['State']),
icon = folium.Icon(color = random.choice(colors), icon='user')
)
map.add_child(marker)
map
# There are other map tiles available. See https://folium.readthedocs.io/en/latest/quickstart.html
# Instead of Markers we use circles colors are HTML color codes http://htmlcolorcodes.com/
CENTER_US = (39.8333333,-98.585522)
map2 = folium.Map(location=CENTER_US, zoom_start=4)
for row in data.to_records():
map2.add_child(folium.CircleMarker(location=(row['ApproxLat'],row['ApproxLng']),
popup=row['City'], radius=10, color='#0000FF', fill_color='#FF3333'))
map2
Explanation: Map Pins
End of explanation
# State level geo-json overlay choropleth
CENTER_US = (39.8333333,-98.585522)
state_geojson = 'WMC3-us-states.json'
map3 = folium.Map(location=CENTER_US, zoom_start=4, tiles=' Open Street Map')
map3.choropleth(geo_data=state_geojson)
map3
states = pd.read_csv('https://raw.githubusercontent.com/jasonong/List-of-US-States/master/states.csv')
state_counts = pd.DataFrame( {'Counts' : data['State']. value_counts() } ).sort_index()
state_counts['StateCode'] = state_counts.index
state_data = states.merge(state_counts, how="left", left_on='Abbreviation', right_on='StateCode')
state_data = state_data[['Abbreviation','Counts']]
state_data = state_data.fillna(0)
state_data
CENTER_US = (39.8333333,-98.585522)
state_geojson = 'WMC3-us-states.json'
map3 = folium.Map(location=CENTER_US, zoom_start=4, tiles=' Open Street Map')
folium.Choropleth(geo_data=state_geojson,data=state_data, columns=['Abbreviation','Counts'],
key_on ='feature.id', fill_color='BuGn', legend_name='Website Visitors').add_to(map3)
map3
# Here's a more straigtforward example with unemployment data:
unemployment = pd.read_csv('https://raw.githubusercontent.com/wrobstory/vincent/master/examples/data/US_Unemployment_Oct2012.csv')
state_geojson = 'WMC3-us-states.json'
map4 = folium.Map(location=CENTER_US, zoom_start=4, tiles=' Open Street Map')
folium.Choropleth(geo_data=state_geojson,data=unemployment,
columns=['State','Unemployment'], key_on ='feature.id', fill_color='YlGn',
legend_name='2012 US Unemployment Rate %').add_to(map4)
map4
Explanation: Choropleths
Choropleths are cartographic overlays based on boundaries defined in a GeoJSON file.
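The key_on='feature.id' argument used in the code above ties each dataframe row to the matching feature in that file; the available ids can be inspected directly (a sketch, assuming the same file as above):
import json
with open(state_geojson) as f:
    print([feature['id'] for feature in json.load(f)['features']][:5])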
End of explanation |
15,381 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Supplemental Information
Step1: Data import
Length distribution of homozygosity tracts
Step2: Fluctuation assay
Luria-Delbrück fluctuation assay.
Step3: Figure 5 - Loss of heterozygosity | Python Code:
# Load external dependencies
from setup import *
# Load internal dependencies
import config,plot,utils
%load_ext autoreload
%autoreload 2
%matplotlib inline
Explanation: Supplemental Information:
"Clonal heterogeneity influences the fate of new adaptive mutations"
Ignacio Vázquez-García, Francisco Salinas, Jing Li, Andrej Fischer, Benjamin Barré, Johan Hallin, Anders Bergström, Elisa Alonso-Pérez, Jonas Warringer, Ville Mustonen, Gianni Liti
Figure 5
This IPython notebook is provided for reproduction of Figure 5 of the paper. It can be viewed by copying its URL to nbviewer and it can be run by opening it in binder.
End of explanation
# Load data
loh_length_df = pd.read_csv(dir_data+'seq/loh/homozygosity_length.csv')
loh_length_df = loh_length_df.set_index("50kb_bin_center")
loh_length_df = loh_length_df.reindex(columns=['HU','RM','YPD'])
loh_length_df.head()
Explanation: Data import
Length distribution of homozygosity tracts
End of explanation
# Read csv file containing the competition assay data
loh_fluctuation_df = pd.read_csv(dir_data+'fluctuation/fluctuation_assay_rates.csv')
loh_fluctuation_df = loh_fluctuation_df.sort_values('background', ascending=False)
loh_fluctuation_df = loh_fluctuation_df.groupby(['background','environment'],sort=False)[['mean_LOH_rate','lower_LOH_rate','upper_LOH_rate']].mean()
loh_fluctuation_df = loh_fluctuation_df.loc[['WA/WA','NA/NA','WA/NA']].unstack('background')
loh_fluctuation_df = loh_fluctuation_df.loc[['HU','RM','YPD']]
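# After the selection and unstack above, rows are the environments (HU, RM, YPD) and the
# columns are (statistic, background) pairs -- the layout the plotting code below expects.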
loh_fluctuation_df
Explanation: Fluctuation assay
Luria-Delbrück fluctuation assay.
End of explanation
fig = plt.figure(figsize=(4,6))
grid = gridspec.GridSpec(nrows=3, ncols=2, height_ratios=[15, 7, 5], hspace=0.7, wspace=0.3)
gs = {}
gs['length'] = gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=grid[0,0])
gs['fluctuation'] = gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=grid[0,1])
gs[('loh','WAxNA_F12_1_HU_3')] = gridspec.GridSpecFromSubplotSpec(7, 1, subplot_spec=grid[1:2,:], hspace=0)
gs[('loh','WAxNA_F12_2_RM_1')] = gridspec.GridSpecFromSubplotSpec(5, 1, subplot_spec=grid[2:3,:], hspace=0)
### Left panel ###
ax = plt.subplot(gs['length'][:])
ax.text(-0.185, 1.055, 'A', transform=ax.transAxes,
fontsize=9, fontweight='bold', va='top', ha='right')
data = loh_length_df.rename(columns=config.selection['short_label'])
kwargs = {
'color': [config.selection['color'][e] for e in loh_length_df.columns]
}
plot.loh_length(data, ax, **kwargs)
### Right panel ###
ax = plt.subplot(gs['fluctuation'][:])
ax.text(-0.2, 1.05, 'B', transform=ax.transAxes,
fontsize=9, fontweight='bold', va='top', ha='right')
data = loh_fluctuation_df['mean_LOH_rate']
kwargs = {
'yerr': loh_fluctuation_df[['lower_LOH_rate','upper_LOH_rate']].T.values,
'color': [config.background['color'][b] for b in loh_fluctuation_df['mean_LOH_rate'].columns]
}
plot.loh_fluctuation(data, ax, **kwargs)
# Axes limits
for ax in fig.get_axes():
ax.xaxis.label.set_size(6)
ax.yaxis.label.set_size(6)
ax.tick_params(axis='both', which='major', size=3, labelsize=6)
ax.tick_params(axis='both', which='minor', size=2, labelsize=4)
plot.save_figure(dir_paper+'figures/figure5/figure5')
plt.show()
Explanation: Figure 5 - Loss of heterozygosity
End of explanation |
15,382 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data from <font color = "red"> the "IMDB5000"</font> database
Step1: Scrape data from <font color = "red"> DOUBAN.COM </font>
Step2: 1. Preliminary data visualization and analysis
Step3: 1.1 Visualize the gross distribution of ratings from <font color = "red">IMDB (x-axis)</font> and <font color = "red"> Douban (y-axis)</font>
Step4: 1.2 Is it necessary to recenter(scale) <font color = "red">IMDB score </font> and <font color = "red"> Douban score</font>?
Step5: 1.3 <font color = "red">Normalize</font> IMDB and Douban rating scores
Step6: 1.4 Visualize Features | Python Code:
imdb_dat = pd.read_csv("movie_metadata.csv")
imdb_dat.info()
Explanation: Data from <font color = "red"> the "IMDB5000"</font> database
End of explanation
import requests
import re
from bs4 import BeautifulSoup
import time
import string
# return the douban movie rating that matches the movie name and year
# read in the movie name
def doubanRating(name):
movie_name = name.decode('gbk').encode('utf-8')
url_head = 'http://movie.douban.com/subject_search'
pageload = {'search_text': movie_name}
r = requests.get(url_head,params = pageload)
soup = BeautifulSoup(r.text,'html.parser')
first_hit = soup.find_all(class_= 'nbg')
try:
r2_link = first_hit[0].get('href')
# sometimes douban returns items like celebrities instead of movies
if 'subject' not in r2_link:
r2_link = first_hit[1].get('href')
r2 = requests.get(r2_link)
soup2 = BeautifulSoup(r2.text,'html.parser')
title = soup2.find(property = "v:itemreviewed")
title = title.get_text() # in unicode
# remove Chinese characters
title = ' '.join((title.split(' '))[1:])
title = filter(lambda x:x in set(string.printable),title)
flag = True
if title != name:
print "Warning: name may not match"
flag = False
year = (soup2.find(class_='year')).get_text()# in unicode
rating = (soup2.find(class_="ll rating_num")).get_text() # in unicode
num_review = (soup2.find(property="v:votes")).get_text()
return [title, year, rating,num_review,flag]
except:
print "Record not found for: "+name
return [name, None, None, None, None]
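# A minimal usage sketch (not part of the original batch run): it assumes network access and
# that Douban's page layout is unchanged, and the title below is just a hypothetical query.
sample_record = doubanRating("Titanic")
print sample_record  # expected form: [title, year, rating, num_review, flag]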
#%% 2. Store scraped data
dataset = pd.read_csv("movie_metadata.csv")
total_length = 5043
#first_query = 2500
res = pd.DataFrame(columns = ('movie_title','year','rating','num_review','flag'))
for i in xrange(1,total_length):
name = dataset['movie_title'][i].strip().strip('\xc2\xa0')
res.loc[i] = doubanRating(name)
print "slowly and finally done %d query"%i
time.sleep(10)
if (i%50==0):
res.to_csv("douban_movie_review.csv")
print "saved until record: %d"%i
Explanation: Scrape data from <font color = "red"> DOUBAN.COM </font>
End of explanation
douban_dat = pd.read_csv("douban_movie_review.csv")
douban_dat.rename(columns = {'movie_title':'d_movie_title','year':'d_year','rating':'douban_score','num_review':'dnum_review','flag':'dflag'},inplace = True)
douban_dat.info()
res_dat = pd.concat([imdb_dat,douban_dat],axis = 1)
res_dat.info()
Explanation: 1. Preliminary data visualization and analysis
End of explanation
# 1. Visualize the gross distribution of ratings from IMDB (x-axis) and Douban (y-axis)
import seaborn as sns
g = sns.jointplot(x = 'imdb_score',y = 'douban_score',data = res_dat)
g.ax_joint.set(xlim=(1, 10), ylim=(1, 10))
Explanation: 1.1 Visualize the gross distribution of ratings from <font color = "red">IMDB (x-axis)</font> and <font color = "red"> Douban (y-axis)</font>
End of explanation
# plot distribution and bar graphs(significantly different)
from scipy import stats
nbins = 15
fig,axes = plt.subplots(nrows = 1,ncols = 2, figsize = (10,8))
ax0,ax1 = axes.flatten()
ax0.hist([res_dat.douban_score,res_dat.imdb_score],nbins, histtype = 'bar',label = ["Douban","IMDB"])
ax0.set_title('The distribution of movie ratings')
ax0.set_xlabel('Rating')
ax0.set_ylabel('Count')
ax0.legend()
imdb_score = np.mean(res_dat.imdb_score)
douban_score = np.mean(res_dat.douban_score)
ax1.bar([0,1],[imdb_score, douban_score], yerr = [np.std(res_dat.imdb_score),np.std(res_dat.douban_score)],
align = 'center',color = ['green','blue'], ecolor = 'black')
ax1.set_xticks([0,1])
ax1.set_xticklabels(['IMDB','Douban'])
ax1.set_ylabel('Score')
_,p = stats.ttest_rel(res_dat['imdb_score'], res_dat['douban_score'],nan_policy = 'omit')
ax1.set_title('A comparison of ratings\n'+'t-test: p = %.4f***'%p)
#fig.tight_layout()
plt.show()
# any significant differences
Explanation: 1.2 Is it necessary to recenter(scale) <font color = "red">IMDB score </font> and <font color = "red"> Douban score</font>?
End of explanation
from sklearn import preprocessing
data = res_dat.dropna()
print " delete null values, the remaining data is",data.shape
data.loc[:,'scaled_imdb'] = preprocessing.scale(data['imdb_score'])
data.loc[:,'scaled_douban'] = preprocessing.scale(data['douban_score'])
#stats.ttest_rel(data['scaled_imdb'], data['scaled_douban'],nan_policy = 'omit')
from scipy.stats import norm, lognorm
import matplotlib.mlab as mlab
fig,axes = plt.subplots(nrows = 1,ncols = 2, figsize = (10,8))
ax0,ax1 = axes.flatten()
ax0.plot(data['scaled_imdb'],data['scaled_douban'],'ro')
ax0.set_title('Normalized Scores')
ax0.set_xlabel('Scaled IMDB score')
ax0.set_ylabel('Scaled Douban score')
data.loc[:,'rating_diff'] = data['scaled_imdb'] - data['scaled_douban']
(mu,sigma) = norm.fit(data['rating_diff'])
_,bins,_ = ax1.hist(data['rating_diff'],60,normed = 1, histtype = 'bar',alpha = 0.75)
ax1.plot(bins, mlab.normpdf(bins,mu,sigma),'r--',linewidth = 2)
ax1.set_xlabel('IMDB_score - Douban_score')
ax1.set_ylabel('percentage')
ax1.set_title('Rating difference Distribution')
fig.tight_layout()
plt.show()
Explanation: 1.3 <font color = "red">Normalize</font> IMDB and Douban rating scores
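As a quick equivalence check (a sketch, not part of the original analysis), preprocessing.scale is simply a z-score computed with the population standard deviation (ddof=0), so the following should print True up to floating-point tolerance:
import numpy as np
manual_scaled = (data['imdb_score'] - data['imdb_score'].mean()) / data['imdb_score'].std(ddof=0)
print np.allclose(manual_scaled.values, data['scaled_imdb'].values)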
End of explanation
data.describe()
data.describe(include = ['object'])
ind = data['rating_diff'].argmin()
print data.iloc[ind].movie_title
print data.iloc[ind].scaled_imdb
print data.iloc[ind].scaled_douban
print data.iloc[ind].title_year
print data.iloc[ind].movie_imdb_link
print data.iloc[ind].d_year
print data.iloc[ind].douban_score
print data.iloc[ind].imdb_score
data.columns
# 2. Predict differences in ratings
res_dat['diff_rating'] = res_dat['douban_score']-res_dat['imdb_score']
# 2.1. Convert the categorical variable Genre to dummy variables
# only extract the first genre out of the list to simplify the problem
res_dat['genre1'] = res_dat.apply(lambda row:(row['genres'].split('|'))[0],axis = 1)
#res_dat['genre1'].value_counts()
# Because there are 21 genres, here we only choose the top 7 to convert to index
top_genre = ['Comedy','Action','Drama','Adventure','Crime','Biography','Horror']
# The rest of the genre types are grouped together as 'Other'
res_dat['top_genre'] = res_dat.apply(lambda row:row['genre1'] if row['genre1'] in top_genre else 'Other',axis =1)
# Select num_user_for_reviews, director_facebook_likes, actor_1_facebook_likes,
# gross, genres, budget and dnum_review for EDA
res_subdat = res_dat[['top_genre','num_user_for_reviews','director_facebook_likes','actor_1_facebook_likes','gross','budget','dnum_review','diff_rating']]
res_subdat = pd.get_dummies(res_subdat,prefix =['top_genre'])
#res_dat = pd.get_dummies(res_dat,prefix = ['top_genre'])
res_subdat.shape
# create a subset for visualization and preliminary analysis
col2 = [u'num_user_for_reviews', u'director_facebook_likes',
u'actor_1_facebook_likes', u'gross', u'budget', u'dnum_review', u'top_genre_Action', u'top_genre_Adventure',
u'top_genre_Biography', u'top_genre_Comedy', u'top_genre_Crime',
u'top_genre_Drama', u'top_genre_Horror', u'top_genre_Other',u'diff_rating']
res_subdat = res_subdat[col2]
# a subset for plotting correlation
col_cat = [u'gross', u'budget', u'dnum_review',u'num_user_for_reviews',u'top_genre_Action', u'top_genre_Adventure',
u'top_genre_Biography', u'top_genre_Comedy', u'top_genre_Crime',
u'top_genre_Drama', u'top_genre_Horror', u'diff_rating']
res_subdat_genre = res_subdat[col_cat]
# show pair-wise correlation between differences in ratings and estimators
import matplotlib.pylab as plt
import numpy as np
corr = res_subdat_genre.corr()
sns.set(style = "white")
f,ax = plt.subplots(figsize=(11,9))
cmap = sns.diverging_palette(220,10,as_cmap=True)
mask = np.zeros_like(corr,dtype = np.bool)
sns.heatmap(corr,mask = mask,cmap = cmap, vmax=.3,square = True, linewidths = .5,
cbar_kws = {"shrink": .5},ax = ax)
# prepare training set and target set
col_train = col2[:len(col2)-1]
col_target = col2[len(col2)-1]
cl_res_subdat = res_subdat.dropna(axis=0)
cl_res_subdat.shape
# 2.2 Use Random Forest Regressor for prediction
X_cat = res_subdat.ix[:,'top_genre_Action':'top_genre_Other']
num_col = []
for i in res_dat.columns:
if res_dat[i].dtype != 'object':
num_col.append(i)
X_num = res_dat[num_col]
X = pd.concat([X_cat,X_num],axis = 1)
X = X.dropna(axis = 0)
y = X['diff_rating']
X = X.iloc[:,:-1]
X.drop(['imdb_score','douban_score'],axis = 1,inplace = True)
from sklearn.model_selection import train_test_split
# METHOD 1: BUILD randomforestregressor
X_train,X_val,y_train,y_val = train_test_split(X,y,test_size = 0.1,random_state = 42)
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators = 500)
forest = rf.fit(X_train, y_train)
score_r2 = rf.score(X_val,y_val)
# print: R-sqr
print score_r2
rf_features = sorted(zip(map(lambda x: round(x, 4), rf.feature_importances_), X.columns),reverse = True)
import matplotlib.pyplot as plt;
imps,feas = zip(*(rf_features[0:4]+rf_features[6:12]))
ypos = np.arange(len(feas))
plt.barh(ypos,imps,align = 'center',alpha = 0.5)
plt.yticks(ypos,feas)
plt.xlabel('Feature Importance')
plt.subplot(1,2,1)
plt.plot(y_train,rf.predict(X_train),'o')
plt.xlabel('Training_y')
plt.ylabel('Predict_y')
plt.xlim(-6,6)
plt.ylim(-6,6)
plt.subplot(1,2,2)
plt.plot(y_val,rf.predict(X_val),'o')
plt.xlabel('val_y')
plt.ylabel('Predict_y')
plt.xlim(-3,4)
plt.ylim(-3,4)
X.columns
# Lasso method
from sklearn.linear_model import Lasso
Lassoreg = Lasso(alpha = 1e-4,normalize = True,random_state = 42)
Lassoreg.fit(X,y)
score_r2 = Lassoreg.score(X_val,y_val)
print score_r2
Ls_features = sorted(zip(map(lambda x:round(x,4),Lassoreg.coef_),X.columns))
print Ls_features
y_val_rf = rf.predict(X_val)
y_val_Ls = Lassoreg.predict(X_val)
y_val_pred = (y_val_rf+y_val_Ls)/2
from sklearn.metrics import r2_score
print r2_score(y_val,y_val_pred)
import matplotlib.pyplot as plt;
imps,feas = zip(*(Ls_features[0:4]+Ls_features[-4:]))
ypos = np.arange(len(feas))
plt.barh(ypos,imps,align = 'center',alpha = 0.5)
plt.yticks(ypos,feas)
plt.xlabel('Feature Importance (Coefficient)')
plt.subplot(1,2,1)
plt.plot(y_train,Lassoreg.predict(X_train),'o')
plt.xlabel('Training_y')
plt.ylabel('Predict_y')
plt.xlim(-6,6)
plt.ylim(-6,6)
plt.subplot(1,2,2)
plt.plot(y_val,Lassoreg.predict(X_val),'o')
plt.xlabel('val_y')
plt.ylabel('Predict_y')
plt.xlim(-3,4)
plt.ylim(-3,4)
Explanation: 1.4 Visualize Features
End of explanation |
15,383 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building AutoEncoders on MNIST with Learninspy
<img style="display
Step1: Data loading
Step2: <h2><center>## Modeling with an AutoEncoder ##</center></h2>
Parameter selection for the model
Step3: Parameter selection for optimization
Step4: Fitting the AE
Step5: Visual results | Python Code:
from learninspy.core.model import NetworkParameters, NeuralNetwork
from learninspy.core.autoencoder import AutoEncoder, StackedAutoencoder
from learninspy.core.optimization import OptimizerParameters
from learninspy.core.stops import criterion
from learninspy.utils.data import StandardScaler, LocalLabeledDataSet, split_data, label_data, load_mnist
from learninspy.utils.evaluation import RegressionMetrics
from learninspy.utils.plots import *
from learninspy.context import sc
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Building AutoEncoders on MNIST with Learninspy
<img style="display: inline;" src="../../docs/img/Learninspy-logo_grande2.png" width="300" />
Dependencies
End of explanation
print "Cargando base de datos de entrenamiento..."
t, train, valid = load_mnist()
# Drop the first dataset
t = None
train = LocalLabeledDataSet(train)
rows, cols = train.shape
print "Dimensiones: ", rows, " x ", cols
valid = LocalLabeledDataSet(valid)
rows, cols = valid.shape
print "Dimensiones: ", rows, " x ", cols
Explanation: Data loading
End of explanation
units = [784, 100]
net_params = NetworkParameters(units_layers=units, activation=['ReLU', 'Sigmoid'],
strength_l2=1e-4, strength_l1=3e-5,
classification=False)
Explanation: <h2><center>## Modeling with an AutoEncoder ##</center></h2>
Parameter selection for the model
End of explanation
local_stops = [criterion['MaxIterations'](10),
criterion['AchieveTolerance'](0.95, key='hits')]
global_stops = [criterion['MaxIterations'](20),
criterion['AchieveTolerance'](0.95, key='hits')]
opt_params = OptimizerParameters(algorithm='Adadelta',
options={'step-rate': 1, 'decay': 0.995, 'momentum': 0.7, 'offset': 1e-8},
stops=local_stops, merge_criter='w_avg')
Explanation: Parameter selection for optimization
End of explanation
# Create a simple AutoEncoder
ae = AutoEncoder(net_params, dropout_in=0.0)
# Fit
hits_valid = ae.fit(train, valid, mini_batch=256, parallelism=4, valid_iters=10,
stops=global_stops, optimizer_params=opt_params, keep_best=False, reproducible=True)
Explanation: Fitting the AE
End of explanation
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(valid.data[i].features.reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
predict = ae.predict(valid.data[i].features).matrix
plt.imshow(predict.reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
from matplotlib.gridspec import GridSpec
n_units = 100 # (*)
n_input = 784 # (*)
plt.figure(figsize=(20, 20))
width, height = (10, 10) # (*)
gs = GridSpec(width, height)
w = ae.list_layers[0].weights.matrix # (*) np.array
# Rescale the weights
w = w - np.mean(w)
x_max = []
for i in range(n_units):
x_max.append(map(lambda j: w[i, j] / float(np.sqrt(sum([w_i_j ** 2 for w_i_j in w[i, ]]))),
range(n_input)))
for i in range(width):
for j in range(height):
x = x_max[i*(height) + j]
ax = plt.subplot(gs[i, j])
ax.imshow(np.array(x).reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
Explanation: Visual results
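A rough reconstruction-error check (a sketch only; it reuses ae.predict(...).matrix exactly as the plotting code above does) can summarise how well the first few validation digits are reproduced:
import numpy as np
recon_errors = [np.mean((np.asarray(valid.data[i].features).ravel() - np.asarray(ae.predict(valid.data[i].features).matrix).ravel()) ** 2) for i in range(10)]
print np.mean(recon_errors)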
End of explanation |
15,384 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demo for prox_elasticnet package
Below we import prox_elasticnet along with some other useful packages.
Step1: Diabetes dataset
Import the diabetes dataset which is included in sklearn.
It consists of 10 physiological variables (age, sex, weight, blood pressure) measured on 442 patients, and an indication of disease progression after one year.
Step2: Our goal is to fit a linear model using Elastic Net regularisation, which predicts the disease progression for a given patient's physiological variables.
We separate the data into training and test sets (80% train/20% test)
Step3: First we run the basic ElasticNet model with the default parameters.
Step4: The model coefficients are accessed as follows
Step5: The package also provides ElasticNetCV which chooses the regularisation parameters (alpha and l1_ratio) which yield the best mean-squared error.
Step6: We can see that there been a significant increase in the coefficient of determination ($R^2$) on the test set when compared to the previous model (although it is still rather poor). The alpha and l1_ratio values that have been selected through cross-validation are accessed as follows
Step7: The mean-squared error is in fact available for all combinations of alpha, l1_ratio and each fold of cross-validation. As an example, we plot the mean-squared error for the optimal l1_ratio = 0.8 as a function of alpha. We average over the three folds of cross validation. | Python Code:
from prox_elasticnet import ElasticNet, ElasticNetCV
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
np.random.seed(319159)
Explanation: Demo for prox_elasticnet package
Below we import prox_elasticnet along with some other useful packages.
End of explanation
from sklearn.datasets import load_diabetes
diabetes = load_diabetes()
X = diabetes.data
y = diabetes.target
Explanation: Diabetes dataset
Import the diabetes dataset which is included in sklearn.
It consists of 10 physiological variables (age, sex, weight, blood pressure) measured on 442 patients, and an indication of disease progression after one year.
End of explanation
prop_train = 0.8
n_pts = len(y)
n_train = np.floor(n_pts * prop_train).astype(int)
n_test = n_pts - n_train
ix = np.arange(n_pts)
np.random.shuffle(ix)
train_ix = ix[0:n_train]
test_ix = ix[n_train:n_pts]
X_train = X[train_ix,:]
y_train = y[train_ix]
X_test = X[test_ix,:]
y_test = y[test_ix]
Explanation: Our goal is to fit a linear model using Elastic Net regularisation, which predicts the disease progression for a given patient's physiological variables.
We separate the data into training and test sets (80% train/20% test)
End of explanation
model = ElasticNet().fit(X_train, y_train)
y_pred = model.predict(X_test)
print("The coefficient of determination for this model is: {}".format(model.score(X_test,y_test)))
Explanation: First we run the basic ElasticNet model with the default parameters.
End of explanation
model.coef_
Explanation: The model coefficients are accessed as follows:
End of explanation
model_cv = ElasticNetCV(l1_ratio = np.arange(0.1,0.9,step=0.1)).fit(X_train, y_train)
y_pred_cv = model_cv.predict(X_test)
print("The coefficient of determination for this model is: {}".format(model_cv.score(X_test,y_test)))
Explanation: The package also provides ElasticNetCV which chooses the regularisation parameters (alpha and l1_ratio) which yield the best mean-squared error.
End of explanation
model_cv.alpha_
model_cv.l1_ratio_
Explanation: We can see that there has been a significant increase in the coefficient of determination ($R^2$) on the test set when compared to the previous model (although it is still rather poor). The alpha and l1_ratio values that have been selected through cross-validation are accessed as follows:
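As a short follow-up sketch (assuming this ElasticNet accepts the same alpha and l1_ratio keyword arguments as its CV counterpart), the selected values can be used to refit a plain model:
refit = ElasticNet(alpha=model_cv.alpha_, l1_ratio=model_cv.l1_ratio_).fit(X_train, y_train)
print("R^2 of the refit model: {}".format(refit.score(X_test, y_test)))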
End of explanation
plt.title("Cross-validation for l1_ratio = 0.8")
plt.plot(model_cv.alphas_[7],model_cv.mse_path_.mean(axis=2)[7])
plt.xlabel("alpha")
plt.ylabel("MSE")
plt.show()
Explanation: The mean-squared error is in fact available for all combinations of alpha, l1_ratio and each fold of cross-validation. As an example, we plot the mean-squared error for the optimal l1_ratio = 0.8 as a function of alpha. We average over the three folds of cross validation.
End of explanation |
15,385 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interpolation Exercise 1
Step1: 2D trajectory interpolation
The file trajectory.npz contains 3 Numpy arrays that describe a 2d trajectory of a particle as a function of time
Step2: Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays
Step3: Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy.interpolate import interp1d
Explanation: Interpolation Exercise 1
End of explanation
trajectory = np.load('trajectory.npz')
x = trajectory['x']
y = trajectory['y']
t = trajectory['t']
assert isinstance(x, np.ndarray) and len(x)==40
assert isinstance(y, np.ndarray) and len(y)==40
assert isinstance(t, np.ndarray) and len(t)==40
Explanation: 2D trajectory interpolation
The file trajectory.npz contains 3 Numpy arrays that describe a 2d trajectory of a particle as a function of time:
t which has discrete values of time t[i].
x which has values of the x position at those times: x[i] = x(t[i]).
y which has values of the y position at those times: y[i] = y(t[i]).
Load those arrays into this notebook and save them as variables x, y and t:
End of explanation
newt = np.linspace(min(t),max(t),200)
sin_approx = interp1d(t, x, kind='cubic')
sin_approx2 = interp1d(t, y, kind='cubic')
newx = sin_approx(newt)
newy = sin_approx2(newt)
assert newt[0]==t.min()
assert newt[-1]==t.max()
assert len(newt)==200
assert len(newx)==200
assert len(newy)==200
Explanation: Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays:
newt which has 200 points between ${t_{min},t_{max}}$.
newx which has the interpolated values of $x(t)$ at those times.
newy which has the interpolated values of $y(t)$ at those times.
End of explanation
plt.plot(x, y, marker='o', linestyle='', label='original data')
plt.plot(newx, newy, marker='.', label='interpolated');
plt.legend(loc=4);
plt.xlabel('$x(t)$')
plt.ylabel('$y(t)$');
plt.xlim(-0.7, 0.9)
plt.ylim(-0.7,0.9)
assert True # leave this to grade the trajectory plot
Explanation: Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points:
For the interpolated points, use a solid line.
For the original points, use circles of a different color and no line.
Customize your plot to make it effective and beautiful.
End of explanation |
15,386 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Use XLA with tf.function
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Then define some necessary constants and prepare the MNIST dataset.
Step3: Finally, define the model and the optimizer. The model uses a single dense layer.
Step4: Define the training function
In the training function, you get the predicted labels using the layer defined above, and then compute the gradient of the loss and apply it with the optimizer. In order to compile the computation using XLA, place it inside tf.function with jit_compile=True.
Step5: Train and test the model
Once you have defined the training function, run it over the training dataset to fit the model.
Step6: And, finally, check the accuracy | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
# In TF 2.4 jit_compile is called experimental_compile
!pip install tf-nightly
import tensorflow as tf
tf.compat.v1.enable_eager_execution()
Explanation: Use XLA with tf.function
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/xla/tutorials/compile"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/tutorials/compile.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/tutorials/compile.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
This tutorial trains a TensorFlow model to classify the MNIST dataset, where the training function is compiled using XLA.
First, load TensorFlow and enable eager execution.
End of explanation
# Size of each input image, 28 x 28 pixels
IMAGE_SIZE = 28 * 28
# Number of distinct number labels, [0..9]
NUM_CLASSES = 10
# Number of examples in each training batch (step)
TRAIN_BATCH_SIZE = 100
# Number of training steps to run
TRAIN_STEPS = 1000
# Loads MNIST dataset.
train, test = tf.keras.datasets.mnist.load_data()
train_ds = tf.data.Dataset.from_tensor_slices(train).batch(TRAIN_BATCH_SIZE).repeat()
# Casting from raw data to the required datatypes.
def cast(images, labels):
images = tf.cast(
tf.reshape(images, [-1, IMAGE_SIZE]), tf.float32)
labels = tf.cast(labels, tf.int64)
return (images, labels)
Explanation: Then define some necessary constants and prepare the MNIST dataset.
End of explanation
layer = tf.keras.layers.Dense(NUM_CLASSES)
optimizer = tf.keras.optimizers.Adam()
Explanation: Finally, define the model and the optimizer. The model uses a single dense layer.
End of explanation
@tf.function(jit_compile=True)
def train_mnist(images, labels):
images, labels = cast(images, labels)
with tf.GradientTape() as tape:
predicted_labels = layer(images)
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=predicted_labels, labels=labels
))
layer_variables = layer.trainable_variables
grads = tape.gradient(loss, layer_variables)
optimizer.apply_gradients(zip(grads, layer_variables))
Explanation: Define the training function
In the training function, you get the predicted labels using the layer defined above, and then compute the gradient of the loss and apply it with the optimizer. In order to compile the computation using XLA, place it inside tf.function with jit_compile=True.
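As a minimal, self-contained sketch (not part of the original tutorial), the same flag compiles any TensorFlow function, not just the training step:
@tf.function(jit_compile=True)
def scaled_sum(a, b):
  # XLA fuses the multiply, add and reduction into a single compiled kernel.
  return tf.reduce_sum(a * 2.0 + b)
print(scaled_sum(tf.ones([3]), tf.ones([3])))  # tf.Tensor(9.0, shape=(), dtype=float32)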
End of explanation
for images, labels in train_ds:
if optimizer.iterations > TRAIN_STEPS:
break
train_mnist(images, labels)
Explanation: Train and test the model
Once you have defined the training function, run it over the training dataset to fit the model.
End of explanation
images, labels = cast(test[0], test[1])
predicted_labels = layer(images)
correct_prediction = tf.equal(tf.argmax(predicted_labels, 1), labels)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Prediction accuracy after training: %s" % accuracy)
Explanation: And, finally, check the accuracy:
End of explanation |
15,387 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's try to find the lag of asynchrony by looking at the cross-correlation.
Step1: Cross-correlation on the signals is a bad idea! Too many oscillations.
Instead, we should get the envelope of the signal and cross-correlate.
Step2: We will try to get the envelopes by taking absolute values and then applying a low-pass filter.
Step3: Calculate cross-correlation with FFT
Step4: On a large scale the envelopes seem aligned
Step5: Offset the original data | Python Code:
# the cross-correlation function in statsmodels does not use FFT so it is really slow
# from statsmodels.tsa.stattools import ccf
# res = ccf(ts1[1][200000:400000,1],ts2[1][200000:400000,1])
Explanation: Let's try to find the lag of asynchrony by looking at the cross-correlation.
End of explanation
# Warning - the envelope heights could be different for different cameras if one of them is buffered
# Envelope based on the hilbert transform fails:
import scipy.signal.signaltools as sigtool
env = np.abs(sigtool.hilbert(ts1[1][:,1]))
#plt.plot(env)
Explanation: Cross-correlation on the signals is a bad idea! Too many oscillations.
Instead, we should get the envelope of the signal and cross-correlate.
End of explanation
plt.plot(np.abs(ts1[1][:,1]))
plt.title('Absolute value of the signal')
# Another unsuccessful way of getting the envelope:
# hilb = sigtool.hilbert(ts1[1][:,1])
# env = (ts1[1][:,1] ** 2 + hilb ** 2) ** 0.5
# plt.plot(env)
# Creating a Butterworth filter
b, a = signal.butter(4, 7./48000, 'low')
# filtering
# output_signal = signal.filtfilt(b, a, 2*ts1[1][:,1]*ts1[1][:,1])
output_signal1 = signal.filtfilt(b, a, np.abs(ts1[1][:,1]))
output_signal2 = signal.filtfilt(b, a, np.abs(ts2[1][:,1]))
plt.plot(np.sqrt(output_signal1[700000:1080000]))
plt.plot(np.sqrt(output_signal2[700000:1080000]))
plt.title('Zooming on the displacement')
Explanation: We will try to get the envelopes by taking absolute values and then applying a low-pass filter.
End of explanation
c = signal.fftconvolve(output_signal1[800000:1080000],output_signal2[800000:1080000][::-1], mode='full')
plt.plot(c[::-1])
print(c.shape[0])
print(c.argmax())
offset = c.shape[0] - c.argmax()- 280000
plt.plot(280000+offset,c[c.argmax()],'ro')
plt.title('Offset is ' + str (280000+offset))
plt.plot(np.sqrt(output_signal1[800000:1080000]))
plt.plot(np.sqrt(output_signal2[800000+offset:(1080000+offset)]))
plt.title('Aligned envelopes')
Explanation: Calculate cross-correlation with FFT:
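A tiny synthetic check of the fftconvolve-based lag recovery used above (the pulses below are made up for illustration; they are not the camera data):
import numpy as np
from scipy import signal
t_demo = np.linspace(-1.0, 1.0, 2000)                 # roughly 0.001 time units per sample
pulse_a = np.exp(-(t_demo ** 2) / 0.01)
pulse_b = np.exp(-((t_demo - 0.15) ** 2) / 0.01)      # same pulse delayed by ~150 samples
cc = signal.fftconvolve(pulse_a, pulse_b[::-1], mode='full')
print(len(pulse_b) - 1 - cc.argmax())                 # prints ~150, recovering the known delay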
End of explanation
plt.plot(np.sqrt(output_signal1[np.abs(offset):]))
plt.plot(np.sqrt(output_signal2[:]))
Explanation: On a large scale the envelopes seem aligned:
End of explanation
plt.plot(ts1[1][700000:1200000,1])
plt.plot(ts2[1][700000+offset:1200000+offset,1])
plt.title('Aligned signals')
Explanation: Offset the original data:
End of explanation |
15,388 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Exercise
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
One we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise
Step11: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise
Step13: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
Step17: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch =
n_batches =
# Keep only enough characters to make full batches
arr =
# Reshape into n_seqs rows
arr =
for n in range(0, arr.shape[1], n_steps):
# The features
x =
# The targets, shifted by one
y =
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself.
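As a toy illustration of the target-shifting formula above (this is not the exercise solution, just the idea applied to a tiny array):
toy = np.arange(20).reshape((2, -1))                      # 2 sequences of 10 encoded characters
toy_x = toy[:, 0:5]                                       # one window of 5 steps
toy_y = np.zeros_like(toy_x)
toy_y[:, :-1], toy_y[:, -1] = toy_x[:, 1:], toy_x[:, 0]   # targets are the inputs shifted by one
print(toy_x)
print(toy_y)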
End of explanation
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs =
targets =
# Keep probability placeholder for drop out layers
keep_prob =
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise: Create the input placeholders in the function below.
End of explanation
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm =
# Add dropout to the cell outputs
drop =
# Stack up multiple LSTM layers, for deep learning
cell =
initial_state =
return cell, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Exercise: Below, implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output =
# Reshape seq_output to a 2D tensor with lstm_size columns
x =
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w =
softmax_b =
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits =
# Use softmax to get the probabilities for predicted characters
out =
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise: Implement the output layer in the function below.
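To make the shape bookkeeping concrete, here is a numpy-only sketch with illustrative sizes (the numbers are placeholders, not values taken from this notebook):
N, M, L, C = 10, 50, 128, 83
fake_lstm_output = np.zeros((N, M, L))      # stand-in for the concatenated LSTM outputs
flat = fake_lstm_output.reshape((-1, L))    # (N*M, L): one row per step of every sequence
fake_softmax_w = np.zeros((L, C))
fake_logits = flat.dot(fake_softmax_w)      # (N*M, C)
print(flat.shape, fake_logits.shape)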
End of explanation
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
y_one_hot =
y_reshaped =
# Softmax cross entropy loss
loss =
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise: Implement the loss calculation in the function below.
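A numpy-only sketch of the one-hot and reshape bookkeeping with toy sizes (illustrative only):
C = 5
toy_targets = np.array([[0, 2, 4], [1, 3, 0]])   # shape (N, M) = (2, 3)
toy_one_hot = np.eye(C)[toy_targets]             # shape (2, 3, 5)
print(toy_one_hot.reshape((-1, C)).shape)        # (6, 5), i.e. (N*M, C), matching the logits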
End of explanation
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optimizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
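For intuition, here is a small numeric sketch of what clipping by global norm does to a single gradient (numpy only):
demo_grad = np.array([3.0, 4.0])                                     # global norm 5
clip_norm = 1.0
clipped = demo_grad * clip_norm / max(np.linalg.norm(demo_grad), clip_norm)
print(clipped)                                                       # [0.6 0.8] -- direction kept, norm capped at 1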
End of explanation
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob =
# Build the LSTM cell
cell, self.initial_state =
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot =
# Run each sequence step through the RNN with tf.nn.dynamic_rnn
outputs, state =
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits =
# Loss and optimizer (with gradient clipping)
self.loss =
self.optimizer =
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.
End of explanation
batch_size = 10 # Sequences per batch
num_steps = 50 # Number of sequence steps per batch
lstm_size = 128 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.01 # Learning rate
keep_prob = 0.5 # Dropout keep probability
Explanation: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training and validation losses are about equal then your model is underfitting. Increase the size of your model (either the number of layers or the raw number of neurons per layer).
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy for obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
15,389 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 01
Step1: Exercise 02
Step2: Exercise 03
Step3: Exercise 04
Step4: Exercise 05
Step5: Exercise 06
Step6: Exercise 07
Step7: Exercício 08 | Python Code:
# Word counter
import codecs
from collections import defaultdict
def ContaPalavras(texto):
    # One possible implementation: count how often each word appears in the file
    contagem = defaultdict(int)
    with codecs.open(texto, 'r', encoding='utf-8') as arquivo:
        for palavra in arquivo.read().split():
            contagem[palavra] += 1
    return contagem
for palavra, valor in ContaPalavras('exemplo.txt').items():
    print(palavra, valor)
Explanation: Exercise 01: Create a function ContaPalavras that takes the name of a text file as input and returns the frequency of each word it contains.
End of explanation
# convert a date in the format 01-MAI-2000 to 01-05-2000
def ConverteData(data):
    pass  # see the sketch after the explanation below for one possible implementation
print(ConverteData('01-MAI-2000'))
Explanation: Exercise 02: Create a function ConverteData() that receives a string in the format DAY-MONTH-YEAR and returns a string in the format DAY-MONTH_NUMBER-YEAR. Example:
'01-MAI-2000' => '01-05-2000'.
You can split the string into a list of strings as follows:
data = '01-MAI-2000'
lista = data.split('-')
print lista # ['01','MAI','2000']
And you can join it back together using join:
lista = ['01','05', '2000']
data = '-'.join(lista)
print data # '01-05-2000'
End of explanation
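One possible implementation of ConverteData, sketched here for reference. It assumes the input uses Portuguese three-letter month abbreviations (only MAI appears in the example above, so the full table is an assumption):
# Sketch: convert '01-MAI-2000' -> '01-05-2000' using split/join as described above
MESES = {'JAN': '01', 'FEV': '02', 'MAR': '03', 'ABR': '04', 'MAI': '05', 'JUN': '06',
         'JUL': '07', 'AGO': '08', 'SET': '09', 'OUT': '10', 'NOV': '11', 'DEZ': '12'}
def ConverteData(data):
    dia, mes, ano = data.split('-')
    return '-'.join([dia, MESES[mes.upper()], ano])
print(ConverteData('01-MAI-2000'))  # 01-05-2000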
# build a dictionary where each key is a number from 2 to 12
# and each value is the list of two-dice combinations that produce that key
Dados = {}
for d1 in range(1, 7):
    for d2 in range(1, 7):
        Dados.setdefault(d1 + d2, []).append((d1, d2))
for chave, valor in Dados.items():
    print(chave, valor)
Explanation: Exercise 03: Create a dictionary called Dados whose keys are the numbers from 2 to 12 and whose values are lists containing every combination of two dice values that adds up to that key.
End of explanation
# build a small dictionary between Portuguese and English and use it to translate simple sentences
import codecs
def Traduz(texto):
    # One possible implementation: word-for-word lookup; unknown words are kept unchanged
    dicionario = {'casa': 'house', 'bola': 'ball'}  # placeholder entries -- fill in with the words from exercise 01
    with codecs.open(texto, 'r', encoding='utf-8') as arquivo:
        return ' '.join(dicionario.get(p, p) for p in arquivo.read().split())
print(Traduz('exemplo.txt'))
Explanation: Exercise 04: Create a dictionary where the keys are Portuguese words and the values are their English translations. Use all the words from the text of exercise 01.
Create a function Traduz() that receives the name of the text file as a parameter and returns a string with the translation.
End of explanation
# Caesar cipher
import string
def ConstroiDic(n):
    # Cyclic substitution map over string.ascii_letters, shifted by n positions
    letras = string.ascii_letters
    return {c: letras[(i + n) % len(letras)] for i, c in enumerate(letras)}
def Codifica(frase, n):
    mapa = ConstroiDic(n)
    return ''.join(mapa.get(c, c) for c in frase)
l = Codifica('Vou tirar dez na proxima prova', 5)
print(l)
print(Codifica(l, -5))
Explanation: Exercise 05: The Caesar cipher is a simple way to encrypt a text. The procedure is simple:
given a number $n$
build a substitution map in which each letter will be replaced by the n-th letter after it in the alphabet. E.g.:
n = 1
A -> B
B -> C
...
n = 2
A -> C
B -> D
...
Encoding is done by replacing each letter of the sentence with its counterpart in the map.
To decode a sentence, simply build a map using $-n$ instead of $n$.
Create a function ConstroiDic() that receives a value n as input and builds a substitution map. Use the constant string.ascii_letters to get all the letters of the alphabet.
Note that the map is cyclic, i.e., for n=1 the letter Z has to be replaced by the letter A. This can be done using the '%' operator.
Create a function Codifica() that receives as parameters a string containing a sentence and a value for n; this function must build the dictionary and return the encoded sentence.
To decode the text, simply call the Codifica() function passing -n as the parameter.
End of explanation
# periodic table
Explanation: Exercise 06: Write a function that reads the periodic table from a file (you will build this file) and stores it in a dictionary.
End of explanation
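A minimal sketch of what such a function could look like, assuming a hypothetical file 'tabela.txt' with one "symbol,name,atomic_number" entry per line (the file format is up to you):
def LeTabelaPeriodica(nome_arquivo):
    tabela = {}
    with open(nome_arquivo, 'r', encoding='utf-8') as arquivo:
        for linha in arquivo:
            simbolo, nome, numero = linha.strip().split(',')
            tabela[simbolo] = {'nome': nome, 'numero': int(numero)}
    return tabela
# tabela = LeTabelaPeriodica('tabela.txt')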
from IPython.display import YouTubeVideo
YouTubeVideo('BZzNBNoae-Y', 640,480)
# velha a fiar
Explanation: Exercise 07: Watch the video below and create a list with the characters from the song's lyrics.
Then, using two for loops, iterate over that list and write out the song's lyrics.
End of explanation
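A sketch of the two nested for loops for a cumulative song. The character list below is only a placeholder -- the real characters have to be taken from the video's lyrics:
personagens = ['personagem1', 'personagem2', 'personagem3']  # placeholder names
for i in range(len(personagens)):
    print('Veio', personagens[i])
    # repeat the chain of characters built up so far, newest to oldest
    for j in range(i, 0, -1):
        print('   ', personagens[j], '->', personagens[j - 1])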
# decimal - Roman numeral - decimal
def DecRoman(x):
    pass  # see the sketch after the explanation below
def RomanDec(r):
    pass  # see the sketch after the explanation below
r = DecRoman(1345)
x = RomanDec(r)
print(r, x)
Explanation: Exercise 08: Write a function that converts a decimal number to Roman numerals. To do this, build a dictionary in which the keys are the decimal values and the values are the Roman equivalents.
The algorithm works as follows:
For each decimal value in the dictionary, from largest to smallest
While that value can still be subtracted from x
subtract the value from x and concatenate the Roman equivalent onto a string
Exercise 09: Write a function that converts a Roman numeral to a decimal number. To do this, build a dictionary with the inverse of the one from the previous exercise. The algorithm goes like this:
For i from 0 up to the length of the Roman numeral string
build the string formed by letter i and letter i+1, if i is less than the string length - 1
build the string formed by letter i-1 and letter i, if i is greater than 0
if the first string is in the dictionary, add its value to x
otherwise, if the second string is NOT in the dictionary, add the value of letter i to x
End of explanation |
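Hedged sketches of both conversions, following the algorithms described above (the value/numeral table is the standard one):
VALORES = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'), (100, 'C'), (90, 'XC'),
           (50, 'L'), (40, 'XL'), (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]
def DecRoman(x):
    romano = ''
    for valor, letra in VALORES:          # from largest to smallest
        while x >= valor:                 # while the value can still be subtracted
            x -= valor
            romano += letra
    return romano
def RomanDec(r):
    mapa = dict((letra, valor) for valor, letra in VALORES)
    total, i = 0, 0
    while i < len(r):
        if i + 1 < len(r) and r[i:i+2] in mapa:   # two-letter pair (e.g. 'CM') found first
            total += mapa[r[i:i+2]]
            i += 2
        else:
            total += mapa[r[i]]
            i += 1
    return total
print(DecRoman(1345), RomanDec(DecRoman(1345)))   # MCCCXLV 1345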
15,390 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load yesterday's data
Step1: Add class data
Continue scraping the web to add primary role
Step2: Visualizing high-dimensional data
Step3: t-distributed Stochastic Neighbor Embedding (TSNE)
t-SNE is a tool to visualize high-dimensional data. It converts similarities between data points to joint probabilities and tries to minimize the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. t-SNE has a cost function that is not convex, i.e. with different initializations we can get different results.
More information provided by Laurens van der Maaten (https
Step4: Principal component analysis (PCA)
PCA is another dimensionality reduction algorithm.
Principal component analysis (PCA) is a technique used to emphasize variation and bring out strong patterns in a dataset. It's often used to make data easy to explore and visualize (Victor Powell) | Python Code:
# Load data
dat = pd.read_csv("lol_base_stats.tsv", sep="\t")
dat.head()
Explanation: Load yesterday's data
End of explanation
from bs4 import BeautifulSoup
import requests
primary_role = []
for url in dat.href:
html_data = requests.get(url).text
soup = BeautifulSoup(html_data, "html5lib")
role = soup.find('div', attrs={'class' : 'champion_info'}).table.a.text
primary_role.append(role)
dat["primary_role"] = primary_role
dat.head()
# Save data
dat.to_csv("lol_base_stats-roles.tsv", index=False, sep="\t")
# Define colors
my_colors = ['b', 'r', 'm', 'g', 'k', 'y']
my_colors_key = {
'Controller' : 'b',
'Fighter' : 'r',
'Mage' : 'm',
'Marksman' : 'g',
'Slayer' : 'k',
'Tank' : 'y'
}
plt.rcParams["figure.figsize"] = [10,4]
# How many champions of each type?
dat.groupby(["primary_role"]).count()["Champions"].plot.bar(color=my_colors)
plt.ylabel("count")
plt.xlabel("Primary role (according to wikia's Base champion stats)")
Explanation: Add class data
Continue scraping the web to add primary role
End of explanation
# Use only complete cases
datc = pd.DataFrame.dropna(dat)
datc = datc.iloc[:, 1:-2]
Explanation: Visualizing high-dimensional data
End of explanation
# Plot t-SNE at different perplexities
plt.rcParams["figure.figsize"] = [15,15]
nrows = 4
ncols = 4
fig, ax = plt.subplots(nrows, ncols)
perplexity = list(range(50, 4, -3))
for i in range(nrows):
for j in range(ncols):
p = perplexity.pop()
# Run TSNE
model = TSNE(n_components=2, perplexity=p, random_state=0)
X = model.fit_transform(datc)
xlabel = "TNSE1"
ylabel = "TNSE2"
for k in my_colors_key.keys():
X_subset = X[dat.dropna()["primary_role"] == k,]
x = X_subset[:,0]
y = X_subset[:,1]
ax[i,j].scatter(x, y, color=my_colors_key[k])
ax[i,j].title.set_text("perplexity = {}".format(p))
ax[i,j].set(xlabel=xlabel, ylabel=ylabel)
Explanation: t-distributed Stochastic Neighbor Embedding (TSNE)
t-SNE is a tool to visualize high-dimensional data. It converts similarities between data points to joint probabilities and tries to minimize the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. t-SNE has a cost function that is not convex, i.e. with different initializations we can get different results.
More information provided by Laurens van der Maaten (https://lvdmaaten.github.io/tsne/)
End of explanation
plt.rcParams["figure.figsize"] = [6,6]
fig, ax = plt.subplots(1, 1)
# Run PCA
pca = PCA(n_components=2)
pca.fit(datc)
X = pca.transform(datc)
xlabel = "PC1"
ylabel = "PC2"
for k in my_colors_key.keys():
X_subset = X[dat.dropna()["primary_role"] == k,]
x = X_subset[:,0]
y = X_subset[:,1]
ax.scatter(x, y, color=my_colors_key[k])
ax.set(xlabel=xlabel, ylabel=ylabel)
Explanation: Principal component analysis (PCA)
PCA is another dimensionality reduction algorithm.
Principal component analysis (PCA) is a technique used to emphasize variation and bring out strong patterns in a dataset. It's often used to make data easy to explore and visualize (Victor Powell)
End of explanation |
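To quantify how much of the variation the two components capture, the fitted PCA object exposes the explained variance ratio (a small follow-up using the pca object fitted above):
print(pca.explained_variance_ratio_)        # fraction of variance captured by PC1 and PC2
print(pca.explained_variance_ratio_.sum())  # total variance explained by the 2D projection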
15,391 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
Step2: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
Step3: Forward pass
Step4: Forward pass
Step5: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check
Step6: Train the network
To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
Step8: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
Step9: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
Step10: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
Step11: Tune your hyperparameters
What's wrong?. Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, numer of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should be aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment
Step12: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you extra bonus point for every 1% of accuracy above 52%. | Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
Explanation: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
End of explanation
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
Explanation: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
End of explanation
scores = net.loss(X)
print('Your scores:')
print(scores)
print()
print('correct scores:')
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print(correct_scores)
print()
# The difference should be very small. We get < 1e-7
print('Difference between your scores and correct scores:')
print(np.sum(np.abs(scores - correct_scores)))
Explanation: Forward pass: compute scores
Open the file cs231n/classifiers/neural_net.py and look at the method TwoLayerNet.loss. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters.
Implement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs.
End of explanation
loss, _ = net.loss(X, y, reg=0.05)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print('Difference between your loss and correct loss:')
print(np.sum(np.abs(loss - correct_loss)))
Explanation: Forward pass: compute loss
In the same function, implement the second part that computes the data and regularizaion loss.
End of explanation
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.05)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.05)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
Explanation: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
End of explanation
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=5e-6,
num_iters=100, verbose=False)
print('Final training loss: ', stats['loss_history'][-1])
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
Explanation: Train the network
To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
End of explanation
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
Explanation: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
End of explanation
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=100,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.25, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print('Validation accuracy: ', val_acc)
Explanation: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
End of explanation
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.tight_layout()
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.tight_layout()
plt.title('Classification accuracy history')
plt.legend()
plt.xlabel('Epoch')
plt.ylabel('Clasification accuracy')
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
Explanation: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
End of explanation
best_net = None # store the best model into this
best_acc = -1
best_stats = None
from cs231n.pca import PCA
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
hidden_sizes = [150]
learning_rates = [1e-3]
batch_sizes = [200]#[150]
regularizations = [0.25]#[0.5]
input_sizes = [32 * 32 * 3]
for in_size in input_sizes:
# X_train_pca = PCA(X_train, in_size)
# X_val_pca = PCA(X_val, in_size)
# The PCA step above is commented out, so fall back to the raw features to keep the sweep below runnable
X_train_pca, X_val_pca = X_train, X_val
for h_size in hidden_sizes:
for lr in learning_rates:
for batch in batch_sizes:
for reg in regularizations:
print('>>>>> input_size=%d, hidden_size=%d, lr=%.5f, batch_size=%3d, reg=%.2f'
% (in_size, h_size, lr, batch, reg))
net = TwoLayerNet(in_size, h_size, num_classes)
stats = net.train(X_train_pca, y_train, X_val_pca, y_val,
num_iters=10000, batch_size=batch,
learning_rate=lr, learning_rate_decay=0.95,
reg=reg, verbose=False, dropout=False)
# Predict on the validation set
val_acc = (net.predict(X_val_pca) == y_val).mean()
print('Validation accuracy: ', val_acc)
if val_acc>best_acc:
best_acc = val_acc
best_net = net
best_stats = stats
plt.plot(best_stats['loss_history'])
plt.tight_layout()
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net)
Explanation: Tune your hyperparameters
What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment: Your goal in this exercise is to get as good of a result on CIFAR-10 as you can, with a fully-connected Neural Network. For every 1% above 52% on the Test set we will award you with one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, or adding dropout, or adding features to the solver, etc.).
End of explanation
test_acc = (best_net.predict(X_test) == y_test).mean()
print('Test accuracy: ', test_acc)
Explanation: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you extra bonus point for every 1% of accuracy above 52%.
End of explanation |
15,392 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Dirichlet process mixture model is incredibly flexible in terms of the family of parametric component distributions {fθ | fθ∈Θ}{fθ | fθ∈Θ}. We illustrate this flexibility below by using Poisson component distributions to estimate the density of sunspots per year.
Step1: Generating data
Step2: Model specification
Our initial beliefs about the parameters are quite informative (sd=1) and a bit off the true values.
We will use the model
Step3: For the sunspot model, the posterior distribution of αα is concentrated between 0.6 and 1.2, indicating that we should expect more components to contribute non-negligible amounts to the mixture than for the Old Faithful waiting time model.
Step4: Indeed, we see that between ten and fifteen mixture components have appreciable posterior expected weight.
Step5: We now calculate and plot the fitted density estimate.
Step6: Again, we can decompose the posterior expected density into weighted mixture densities. | Python Code:
# pymc3.distributions.DensityDist?
import matplotlib.pyplot as plt
import matplotlib as mpl
from pymc3 import Model, Normal, Slice
from pymc3 import sample
from pymc3 import traceplot
from pymc3.distributions import Interpolated
from theano import as_op
import theano.tensor as tt
import numpy as np
from scipy import stats
from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
import scipy as sp
import seaborn as sns
from statsmodels.datasets import get_rdataset
from theano import tensor as tt
%matplotlib inline
%load_ext version_information
%version_information pymc3, statsmodels, pandas
Explanation: The Dirichlet process mixture model is incredibly flexible in terms of the family of parametric component distributions $\{f_\theta \mid \theta \in \Theta\}$. We illustrate this flexibility below by using Poisson component distributions to estimate the density of sunspots per year.
End of explanation
sunspot_df = get_rdataset('sunspot.year', cache=True).data
sunspot_df.head()
sunspot_df.plot(x='time')
Explanation: Generating data
End of explanation
SEED = 8675309 # from random.org
np.random.seed(SEED)
K = 50
N = sunspot_df.shape[0]
def stick_breaking(beta):
portion_remaining = tt.concatenate([[1], tt.extra_ops.cumprod(1 - beta)[:-1]])
return beta * portion_remaining
with pm.Model() as model:
alpha = pm.Gamma('alpha', 1., 1.)
beta = pm.Beta('beta', 1, alpha, shape=K)
w = pm.Deterministic('w', stick_breaking(beta))
mu = pm.Uniform('mu', 0., 300., shape=K)
obs = pm.Mixture('obs', w, pm.Poisson.dist(mu), observed=sunspot_df['sunspot.year'])
with model:
step = pm.Metropolis()
trace = pm.sample(10000, step=step, tune=90000, random_seed=SEED, njobs=6)
Explanation: Model specification
Our initial beliefs about the parameters are quite informative (sd=1) and a bit off the true values.
We will use the model:
$$\alpha \sim \textrm{Gamma}(1, 1)$$
$$\beta_1, \ldots, \beta_K \sim \textrm{Beta}(1, \alpha)$$
$$w_i = \beta_i \prod_{j=1}^{i-1} (1 - \beta_j)$$
$$\lambda_1, \ldots, \lambda_K \sim U(0, 300)$$
$$x \mid w, \lambda \sim \sum_{i=1}^{K} w_i \, \textrm{Poisson}(\lambda_i)$$
End of explanation
pm.traceplot(trace, varnames=['alpha']);
Explanation: For the sunspot model, the posterior distribution of α is concentrated between 0.6 and 1.2, indicating that we should expect more components to contribute non-negligible amounts to the mixture than for the Old Faithful waiting time model.
End of explanation
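To put numbers on that statement, a posterior summary of α can be printed as well (a minimal sketch; depending on the PyMC3 version the keyword is varnames or var_names):
# Summarise the posterior of the concentration parameter alpha
print(pm.summary(trace, varnames=['alpha']))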
fig, ax = plt.subplots(figsize=(8, 6))
plot_w = np.arange(K) + 1
ax.bar(plot_w - 0.5, trace['w'].mean(axis=0), width=1., lw=0);
ax.set_xlim(0.5, K);
ax.set_xlabel('Component');
ax.set_ylabel('Posterior expected mixture weight');
Explanation: Indeed, we see that between ten and fifteen mixture components have appreciable posterior expected weight.
End of explanation
x_plot = np.arange(250)
post_pmf_contribs = sp.stats.poisson.pmf(np.atleast_3d(x_plot),
trace['mu'][:, np.newaxis, :])
post_pmfs = (trace['w'][:, np.newaxis, :] * post_pmf_contribs).sum(axis=-1)
post_pmf_low, post_pmf_high = np.percentile(post_pmfs, [2.5, 97.5], axis=0)
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(sunspot_df['sunspot.year'].values, bins=40, normed=True, lw=0, alpha=0.75);
ax.fill_between(x_plot, post_pmf_low, post_pmf_high,
color='gray', alpha=0.45)
ax.plot(x_plot, post_pmfs[0],
c='gray', label='Posterior sample densities');
ax.plot(x_plot, post_pmfs[::200].T, c='gray');
ax.plot(x_plot, post_pmfs.mean(axis=0),
c='k', label='Posterior expected density');
ax.set_xlabel('Yearly sunspot count');
ax.set_yticklabels([]);
ax.legend(loc=1);
Explanation: We now calculate and plot the fitted density estimate.
End of explanation
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(sunspot_df['sunspot.year'].values, bins=40, normed=True, lw=0, alpha=0.75);
ax.plot(x_plot, post_pmfs.mean(axis=0),
c='k', label='Posterior expected density');
ax.plot(x_plot, (trace['w'][:, np.newaxis, :] * post_pmf_contribs).mean(axis=0)[:, 0],
'--', c='k', label='Posterior expected\nmixture components\n(weighted)');
ax.plot(x_plot, (trace['w'][:, np.newaxis, :] * post_pmf_contribs).mean(axis=0),
'--', c='k');
ax.set_xlabel('Yearly sunspot count');
ax.set_yticklabels([]);
ax.legend(loc=1);
Explanation: Again, we can decompose the posterior expected density into weighted mixture densities.
End of explanation |
15,393 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib
Matplotlib is a plotting library. In this section we give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.
Step1: NOTE
Step2: Subplots
You can plot different things in the same figure using the subplot function. Here is an example
Step3: Reading csv file and plotting the data.
Step4: Simple plot
Step5: Plotting with default settings
Matplotlib comes with a set of default settings that allow customizing all kinds of properties. You can control the defaults of almost every property in matplotlib
Step6: Instantiating defaults
Step7: We can also change the attributes of the graph
Step8: Setting limits
Current limits of the figure are a bit too tight and we want to make some space in order to clearly see all data points
Step9: Setting ticks
Current ticks are not ideal because they do not show the interesting values (+/-π,+/-π/2) for sine and cosine. We’ll change them such that they show only these values.
Step10: Adding a legend
Step11: Figures, Subplots, Axes and Ticks
Figures
A “figure” in matplotlib means the whole window in the user interface. Within this figure there can be “subplots”.
A figure is the windows in the GUI that has “Figure #” as title. Figures are numbered starting from 1 as opposed to the normal Python way starting from 0. This is clearly MATLAB-style. There are several parameters that determine what the figure looks like
Step12: Subplots
With subplot you can arrange plots in a regular grid. You need to specify the number of rows and columns and the number of the plot. Note that the gridspec command is a more powerful alternative.
Step13: Set up a subplot grid that has height 2 and width 1,
and set the first such subplot as active.
plt.subplot(1, 2, 1)
Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')
Set the second subplot as active, and make the second plot.
plt.subplot(1, 2, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
Show the figure.
plt.show()
Step14: Axes
Axes are very similar to subplots but allow placement of plots at any location in the figure. So if we want to put a smaller plot inside a bigger one we do so with axes.
Ticks
Well formatted ticks are an important part of publishing-ready figures. Matplotlib provides a totally configurable system for ticks. There are tick locators to specify where ticks should appear and tick formatters to give ticks the appearance you want. Major and minor ticks can be located and formatted independently from each other. Per default minor ticks are not shown, i.e. there is only an empty list for them because it is as NullLocator
Tick Locators
Tick locators control the positions of the ticks. They are set as follows
Step15: Plots with fill
Step16: Scatter Plots
Step17: Bar Plots
Step18: Contour Plots | Python Code:
import numpy as np
import matplotlib.pyplot as plt
##################
%matplotlib inline
Explanation: Matplotlib
Matplotlib is a plotting library. In this section give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.
End of explanation
# Compute the x and y coordinates for points on a sine curve
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
# Plot the points using matplotlib
plt.plot(x, y)
plt.show()
a = np.array([1, 4, 5, 66, 77, 334], int)
plt.plot(a)
plt.show()
# Let's add more details to the graphs.
y_cos = np.cos(x)
y_sin = np.sin(x)
# Plot the points using matplotlib
plt.plot(x, y_sin)
plt.plot(x, y_cos)
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.title('Sine and Cosine Graph')
plt.legend(['Sine', 'Cosine'])
plt.show()
Explanation: NOTE:
Use the below code for
IPython console: %matplotlib
Jupyter notebook: %matplotlib inline
Plotting
The most important function in matplotlib is plot, which allows you to plot 2D data. Here is a simple example:
End of explanation
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)
# Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')
# Set the second subplot as active, and make the second plot.
plt.subplot(2, 1, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
# Show the figure.
plt.show()
Explanation: Subplots
You can plot different things in the same figure using the subplot function. Here is an example:
End of explanation
# # import numpy as np
# data = np.genfromtxt('Metadata_Indicator_API_IND_DS2_en_csv_v2.csv', delimiter=',',
# names=['INDICATOR_CODE', 'INDICATOR_NAME', 'SOURCE_NOTE', 'SOURCE_ORGANIZATION'])
# plt.plot(data['INDICATOR_CODE'], data['INDICATOR_NAME'], color='r', label='the data')
# plt.show()
Explanation: Reading csv file and plotting the data.
End of explanation
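Since the cell above is commented out, here is a minimal working sketch of the same idea. The file name and column names ('year', 'value') are made up for illustration; genfromtxt needs numeric columns to plot:
data = np.genfromtxt('data.csv', delimiter=',', names=True)   # hypothetical CSV with a header row
plt.plot(data['year'], data['value'], color='r', label='the data')
plt.legend()
plt.show()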
import numpy as np
X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
C, S = np.cos(X), np.sin(X)
print(C)
print(S)
Explanation: Simple plot
End of explanation
import numpy as np
import matplotlib.pyplot as plt
X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
C, S = np.cos(X), np.sin(X)
plt.plot(X, C)
plt.plot(X, S)
plt.show()
Explanation: Plotting with default settings
Matplotlib comes with a set of default settings that allow customizing all kinds of properties. You can control the defaults of almost every property in matplotlib: figure size and dpi, line width, color and style, axes, axis and grid properties, text and font properties and so on.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
# Create a figure of size 8x6 inches, 80 dots per inch
plt.figure(figsize=(8, 6), dpi=80)
# Create a new subplot from a grid of 1x1
plt.subplot(1, 1, 1)
X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
C, S = np.cos(X), np.sin(X)
# Plot cosine with a blue continuous line of width 1 (pixels)
plt.plot(X, C, color="blue", linewidth=1.0, linestyle="-")
# Plot sine with a green continuous line of width 1 (pixels)
plt.plot(X, S, color="green", linewidth=1.0, linestyle="-")
# Set x limits
plt.xlim(-4.0, 4.0)
# Set x ticks
plt.xticks(np.linspace(-4, 4, 9, endpoint=True))
# Set y limits
plt.ylim(-1.0, 1.0)
# Set y ticks
plt.yticks(np.linspace(-1, 1, 5, endpoint=True))
# Save figure using 72 dots per inch
# plt.savefig("exercise_2.png", dpi=72)
# Show result on screen
plt.show()
Explanation: Instantiating defaults
End of explanation
plt.figure(figsize=(10, 6), dpi=80)
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-.")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-")
Explanation: We can also change the attributes of the graph
End of explanation
plt.xlim(X.min() * 1.1, X.max() * 1.1)
plt.ylim(C.min() * 1.1, C.max() * 1.1)
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-")
Explanation: Setting limits
Current limits of the figure are a bit too tight and we want to make some space in order to clearly see all data points
End of explanation
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi])
plt.yticks([-1, 0, +1])
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-")
t = 2 * np.pi / 3
plt.plot([t, t], [0, np.cos(t)], color='blue', linewidth=2.5, linestyle="--")
plt.scatter([t, ], [np.cos(t), ], 50, color='blue')
plt.annotate(r'$cos(\frac{2\pi}{3})=-\frac{1}{2}$',
xy=(t, np.cos(t)), xycoords='data',
xytext=(-90, -50), textcoords='offset points', fontsize=16,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
plt.plot([t, t],[0, np.sin(t)], color='red', linewidth=2.5, linestyle="--")
plt.scatter([t, ],[np.sin(t), ], 50, color='red')
plt.annotate(r'$sin(\frac{2\pi}{3})=\frac{\sqrt{3}}{2}$',
xy=(t, np.sin(t)), xycoords='data',
xytext=(+10, +30), textcoords='offset points', fontsize=16,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-")
ax = plt.gca() # gca stands for 'get current axis'
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-.")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-")
Explanation: Setting ticks
Current ticks are not ideal because they do not show the interesting values (+/-π,+/-π/2) for sine and cosine. We’ll change them such that they show only these values.
End of explanation
ax = plt.gca() # gca stands for 'get current axis'
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-.", label="cosine")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-", label="sine")
plt.legend(loc='upper left')
Explanation: Adding a legend
End of explanation
plt.close(1)
Explanation: Figures, Subplots, Axes and Ticks
Figures
A “figure” in matplotlib means the whole window in the user interface. Within this figure there can be “subplots”.
A figure is the windows in the GUI that has “Figure #” as title. Figures are numbered starting from 1 as opposed to the normal Python way starting from 0. This is clearly MATLAB-style. There are several parameters that determine what the figure looks like:
| Argument | Default | Description |
|----------- |------------------ |--------------------------------------------- |
| num | 1 | number of figure |
| figsize | figure.figsize | figure size in inches (width, height) |
| dpi | figure.dpi | resolution in dots per inch |
| facecolor | figure.facecolor | color of the drawing background |
| edgecolor | figure.edgecolor | color of edge around the drawing background |
| frameon | True | draw figure frame or not |
The defaults can be specified in the resource file and will be used most of the time. Only the number of the figure is frequently changed.
As with other objects, you can set figure properties also setp or with the set_something methods.
When you work with the GUI you can close a figure by clicking on the x in the upper right corner. But you can close a figure programmatically by calling close. Depending on the argument it closes (1) the current figure (no argument), (2) a specific figure (figure number or figure instance as argument), or (3) all figures ("all" as argument).
End of explanation
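A small illustration of the figure parameters and of the different ways close can be called (all standard pyplot calls):
fig = plt.figure(num=2, figsize=(6, 4), dpi=80, facecolor='w', edgecolor='k')
plt.plot([0, 1, 2], [0, 1, 4])
plt.close(fig)     # close this specific figure (a figure number also works)
plt.close('all')   # or close every open figure at once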
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)
# Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')
# Set the second subplot as active, and make the second plot.
plt.subplot(2, 1, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
# Show the figure.
plt.show()
Explanation: Subplots
With subplot you can arrange plots in a regular grid. You need to specify the number of rows and columns and the number of the plot. Note that the gridspec command is a more powerful alternative.
End of explanation
# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 2, 1)
# Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')
# Set the second subplot as active, and make the second plot.
plt.subplot(2, 2, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
a = np.linspace(-1, 1, num=100)
y_arcsin = np.arcsin(a)
y_arccos = np.arccos(a)
plt.subplot(2, 2, 3)
plt.plot(a, y_arcsin)
plt.title('arcsin')
plt.subplot(2, 2, 4)
plt.plot(a, y_arccos)
plt.title('arccos')
# Show the figure.
plt.show()
Explanation: Set up a subplot grid that has height 2 and width 1,
and set the first such subplot as active.
plt.subplot(1, 2, 1)
Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')
Set the second subplot as active, and make the second plot.
plt.subplot(1, 2, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
Show the figure.
plt.show()
End of explanation
n = 256
X = np.linspace(-np.pi, np.pi, n, endpoint=True)
Y = np.sin(2 * X)
plt.plot(X, Y + 1, color='blue', alpha=1.00)
plt.plot(X, Y - 1, color='blue', alpha=1.00)
Explanation: Axes
Axes are very similar to subplots but allow placement of plots at any location in the figure. So if we want to put a smaller plot inside a bigger one we do so with axes.
Ticks
Well formatted ticks are an important part of publishing-ready figures. Matplotlib provides a totally configurable system for ticks. There are tick locators to specify where ticks should appear and tick formatters to give ticks the appearance you want. Major and minor ticks can be located and formatted independently from each other. Per default minor ticks are not shown, i.e. there is only an empty list for them because it is as NullLocator
Tick Locators
Tick locators control the positions of the ticks. They are set as follows:
ax = plt.gca()
ax.xaxis.set_major_locator(eval(locator))
There are several locators for different kind of requirements:
NullLocator()
MultipleLocator()
FixedLocator()
IndexLocator()
LinearLocator()
LogLocator()
AutoLocator()
All of these locators derive from the base class matplotlib.ticker.Locator. You can make your own locator deriving from it. Handling dates as ticks can be especially tricky. Therefore, matplotlib provides special locators in matplotlib.dates.
Plots
End of explanation
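For example, MultipleLocator places ticks at every multiple of a given base (a small self-contained illustration of the locator API listed above):
from matplotlib.ticker import MultipleLocator
x = np.linspace(0, 10, 200)
plt.plot(x, np.sin(x))
ax = plt.gca()
ax.xaxis.set_major_locator(MultipleLocator(2))     # a major tick every 2 units
ax.xaxis.set_minor_locator(MultipleLocator(0.5))   # a minor tick every 0.5 units
plt.show()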
x = np.arange(0.0, 2, 0.01)
y1 = np.sin(2*np.pi*x)
y2 = 1.2*np.sin(4*np.pi*x)
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True)
ax1.fill_between(x, 0, y1)
ax1.set_ylabel('between y1 and 0')
ax2.fill_between(x, y1, 1)
ax2.set_ylabel('between y1 and 1')
ax3.fill_between(x, y1, y2)
ax3.set_ylabel('between y1 and y2')
ax3.set_xlabel('x')
plt.show()
Explanation: Plots with fill
End of explanation
n = 1024
X = np.random.normal(0,1,n)
Y = np.random.normal(0,1,n)
plt.scatter(X,Y)
# with colors
import numpy as np
import matplotlib.pyplot as plt
n = 1024
X = np.random.normal(0, 1, n)
Y = np.random.normal(0, 1, n)
T = np.arctan2(Y, X)
print(T)
plt.axes([0.025, 0.025, 0.95, 0.95])
plt.scatter(X, Y, s=75, c=T, alpha=.5)
plt.xlim(-1.5, 1.5)
plt.xticks(())
plt.ylim(-1.5, 1.5)
plt.yticks(())
plt.show()
Explanation: Scatter Plots
End of explanation
import numpy as np
import matplotlib.pyplot as plt
n = 12
X = np.arange(n)
Y1 = (1 - X / float(n)) * np.random.uniform(0.5, 0.7, n)
Y2 = (1 - X / float(n)) * np.random.uniform(0.5, 1.0, n)
plt.axes([0.025, 0.025, 0.95, 0.95])
plt.bar(X, +Y1, facecolor='#9999ff', edgecolor='white')
plt.bar(X, -Y2, facecolor='#ff9999', edgecolor='white')
for x, y in zip(X, Y1):
plt.text(x + 0.4, y + 0.05, '%.2f' % y, ha='center', va= 'bottom')
for x, y in zip(X, Y2):
plt.text(x + 0.4, -y - 0.05, '%.2f' % y, ha='center', va= 'top')
plt.xlim(-.5, n)
plt.xticks(())
plt.ylim(-1.25, 1.25)
plt.yticks(())
plt.show()
Explanation: Bar Plots
End of explanation
import numpy as np
import matplotlib.pyplot as plt
def f(x,y):
return (1 - x / 2 + x**5 + y**3) * np.exp(-x**2 -y**2)
n = 256
x = np.linspace(-3, 3, n)
y = np.linspace(-3, 3, n)
X,Y = np.meshgrid(x, y)
plt.axes([0.025, 0.025, 0.95, 0.95])
plt.contourf(X, Y, f(X, Y), 8, alpha=.75, cmap=plt.cm.hot)
C = plt.contour(X, Y, f(X, Y), 8, colors='black', linewidth=.5)
plt.clabel(C, inline=1, fontsize=10)
plt.xticks(())
plt.yticks(())
plt.show()
Explanation: Contour Plots
End of explanation |
15,394 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Efficient serving
Step2: We also need to install scann
Step3: Set up all the necessary imports.
Step4: And load the data
Step5: Before we can build a model, we need to set up the user and movie vocabularies
Step6: We'll also set up the training and test sets
Step7: Model definition
Just as in the basic retrieval tutorial, we build a simple two-tower model.
Step8: Fitting and evaluation
A TFRS model is just a Keras model. We can compile it
Step9: Estimate it
Step10: And evaluate it.
Step11: Approximate prediction
The most straightforward way of retrieving top candidates in response to a query is to do it via brute force
Step12: Once created and populated with candidates (via the index method), we can call it to get predictions out
Step13: On a small dataset of under 1000 movies, this is very fast
Step14: But what happens if we have more candidates - millions instead of thousands?
We can simulate this by indexing all of our movies multiple times
Step15: We can build a BruteForce index on this larger dataset
Step16: The recommendations are still the same
Step17: But they take much longer. With a candidate set of 1 million movies, brute force prediction becomes quite slow
Step18: As the number of candidates grows, the amount of time needed grows linearly
Step19: The recommendations are (approximately!) the same
Step20: But they are much, much faster to compute
Step21: In this case, we can retrieve the top 3 movies out of a set of ~1 million in around 2 milliseconds
Step22: We can do the same using ScaNN
Step23: ScaNN based evaluation is much, much quicker
Step24: This suggests that on this artificial dataset, there is little loss from the approximation. In general, all approximate methods exhibit speed-accuracy tradeoffs. To understand this in more depth you can check out Erik Bernhardsson's ANN benchmarks.
Deploying the approximate model
The ScaNN-based model is fully integrated into TensorFlow models, and serving it is as easy as serving any other TensorFlow model.
We can save it as a SavedModel object
Step25: and then load it and serve, getting exactly the same results back
Step26: The resulting model can be served in any Python service that has TensorFlow and ScaNN installed.
It can also be served using a customized version of TensorFlow Serving, available as a Docker container on Docker Hub. You can also build the image yourself from the Dockerfile.
Tuning ScaNN
Now let's look into tuning our ScaNN layer to get a better performance/accuracy tradeoff. In order to do this effectively, we first need to measure our baseline performance and accuracy.
From above, we already have a measurement of our model's latency for processing a single (non-batched) query (although note that a fair amount of this latency is from non-ScaNN components of the model).
Now we need to investigate ScaNN's accuracy, which we measure through recall. A recall@k of x% means that if we use brute force to retrieve the true top k neighbors, and compare those results to using ScaNN to also retrieve the top k neighbors, x% of ScaNN's results are in the true brute force results. Let's compute the recall for the current ScaNN searcher.
First, we need to generate the brute force, ground truth top-k
Step27: Our variable titles_ground_truth now contains the top-10 movie recommendations returned by brute-force retrieval. Now we can compute the same recommendations when using ScaNN
Step28: Next, we define our function that computes recall. For each query, it counts how many results are in the intersection of the brute force and the ScaNN results and divides this by the number of brute force results. The average of this quantity over all queries is our recall.
Step29: This gives us baseline recall@10 with the current ScaNN config
Step30: We can also measure the baseline latency
Step31: Let's see if we can do better!
To do this, we need a model of how ScaNN's tuning knobs affect performance. Our current model uses ScaNN's tree-AH algorithm. This algorithm partitions the database of embeddings (the "tree") and then scores the most promising of these partitions using AH, which is a highly optimized approximate distance computation routine.
The default parameters for TensorFlow Recommenders' ScaNN Keras layer set num_leaves=100 and num_leaves_to_search=10. This means our database is partitioned into 100 disjoint subsets, and the 10 most promising of these partitions are scored with AH. This means 10/100=10% of the dataset is being searched with AH.
If we have, say, num_leaves=1000 and num_leaves_to_search=100, we would also be searching 10% of the database with AH. However, in comparison to the previous setting, the 10% we would search will contain higher-quality candidates, because a higher num_leaves allows us to make finer-grained decisions about what parts of the dataset are worth searching.
It's no surprise then that with num_leaves=1000 and num_leaves_to_search=100 we get significantly higher recall
Step32: However, as a tradeoff, our latency has also increased. This is because the partitioning step has gotten more expensive; scann picks the top 10 of 100 partitions while scann2 picks the top 100 of 1000 partitions. The latter can be more expensive because it involves looking at 10 times as many partitions.
Step33: In general, tuning ScaNN search is about picking the right tradeoffs. Each individual parameter change generally won't make search both faster and more accurate; our goal is to tune the parameters to optimally trade off between these two conflicting goals.
In our case, scann2 significantly improved recall over scann at some cost in latency. Can we dial back some other knobs to cut down on latency, while preserving most of our recall advantage?
Let's try searching 70/1000=7% of the dataset with AH, and only rescoring the final 400 candidates
Step34: scann3 delivers about a 3% absolute recall gain over scann while also delivering lower latency | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
Explanation: Efficient serving
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/recommenders/examples/efficient_serving"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/efficient_serving.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/recommenders/blob/main/docs/examples/efficient_serving.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/efficient_serving.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Retrieval models are often built to surface a handful of top candidates out of millions or even hundreds of millions of candidates. To be able to react to the user's context and behaviour, they need to be able to do this on the fly, in a matter of milliseconds.
Approximate nearest neighbour search (ANN) is the technology that makes this possible. In this tutorial, we'll show how to use ScaNN - a state of the art nearest neighbour retrieval package - to seamlessly scale TFRS retrieval to millions of items.
What is ScaNN?
ScaNN is a library from Google Research that performs dense vector similarity search at large scale. Given a database of candidate embeddings, ScaNN indexes these embeddings in a manner that allows them to be rapidly searched at inference time. ScaNN uses state of the art vector compression techniques and carefully implemented algorithms to achieve the best speed-accuracy tradeoff. It can greatly outperform brute force search while sacrificing little in terms of accuracy.
Building a ScaNN-powered model
To try out ScaNN in TFRS, we'll build a simple MovieLens retrieval model, just as we did in the basic retrieval tutorial. If you have followed that tutorial, this section will be familiar and can safely be skipped.
To start, install TFRS and TensorFlow Datasets:
End of explanation
!pip install -q scann
Explanation: We also need to install scann: it's an optional dependency of TFRS, and so needs to be installed separately.
End of explanation
from typing import Dict, Text
import os
import pprint
import tempfile
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
Explanation: Set up all the necessary imports.
End of explanation
# Load the MovieLens 100K data.
ratings = tfds.load(
"movielens/100k-ratings",
split="train"
)
# Get the ratings data.
ratings = (ratings
# Retain only the fields we need.
.map(lambda x: {"user_id": x["user_id"], "movie_title": x["movie_title"]})
# Cache for efficiency.
.cache(tempfile.NamedTemporaryFile().name)
)
# Get the movies data.
movies = tfds.load("movielens/100k-movies", split="train")
movies = (movies
# Retain only the fields we need.
.map(lambda x: x["movie_title"])
# Cache for efficiency.
.cache(tempfile.NamedTemporaryFile().name))
Explanation: And load the data:
End of explanation
user_ids = ratings.map(lambda x: x["user_id"])
unique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000))))
unique_user_ids = np.unique(np.concatenate(list(user_ids.batch(1000))))
Explanation: Before we can build a model, we need to set up the user and movie vocabularies:
End of explanation
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
Explanation: We'll also set up the training and test sets:
End of explanation
class MovielensModel(tfrs.Model):
def __init__(self):
super().__init__()
embedding_dimension = 32
# Set up a model for representing movies.
self.movie_model = tf.keras.Sequential([
tf.keras.layers.StringLookup(
vocabulary=unique_movie_titles, mask_token=None),
# We add an additional embedding to account for unknown tokens.
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)
])
# Set up a model for representing users.
self.user_model = tf.keras.Sequential([
tf.keras.layers.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
# We add an additional embedding to account for unknown tokens.
tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)
])
# Set up a task to optimize the model and compute metrics.
self.task = tfrs.tasks.Retrieval(
metrics=tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128).cache().map(self.movie_model)
)
)
def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
# We pick out the user features and pass them into the user model.
user_embeddings = self.user_model(features["user_id"])
# And pick out the movie features and pass them into the movie model,
# getting embeddings back.
positive_movie_embeddings = self.movie_model(features["movie_title"])
# The task computes the loss and the metrics.
return self.task(user_embeddings, positive_movie_embeddings, compute_metrics=not training)
Explanation: Model definition
Just as in the basic retrieval tutorial, we build a simple two-tower model.
End of explanation
model = MovielensModel()
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))
Explanation: Fitting and evaluation
A TFRS model is just a Keras model. We can compile it:
End of explanation
model.fit(train.batch(8192), epochs=3)
Explanation: Estimate it:
End of explanation
model.evaluate(test.batch(8192), return_dict=True)
Explanation: And evaluate it.
End of explanation
brute_force = tfrs.layers.factorized_top_k.BruteForce(model.user_model)
brute_force.index_from_dataset(
movies.batch(128).map(lambda title: (title, model.movie_model(title)))
)
Explanation: Approximate prediction
The most straightforward way of retrieving top candidates in response to a query is to do it via brute force: compute user-movie scores for all possible movies, sort them, and pick a couple of top recommendations.
In TFRS, this is accomplished via the BruteForce layer:
End of explanation
# Get predictions for user 42.
_, titles = brute_force(np.array(["42"]), k=3)
print(f"Top recommendations: {titles[0]}")
Explanation: Once created and populated with candidates (via the index_from_dataset method), we can call it to get predictions out:
End of explanation
%timeit _, titles = brute_force(np.array(["42"]), k=3)
Explanation: On a small dataset of under 1000 movies, this is very fast:
End of explanation
# Construct a dataset of movies that's 1,000 times larger. We
# do this by adding several million dummy movie titles to the dataset.
lots_of_movies = tf.data.Dataset.concatenate(
movies.batch(4096),
movies.batch(4096).repeat(1_000).map(lambda x: tf.zeros_like(x))
)
# We also add lots of dummy embeddings by randomly perturbing
# the estimated embeddings for real movies.
lots_of_movies_embeddings = tf.data.Dataset.concatenate(
movies.batch(4096).map(model.movie_model),
movies.batch(4096).repeat(1_000)
.map(lambda x: model.movie_model(x))
.map(lambda x: x * tf.random.uniform(tf.shape(x)))
)
Explanation: But what happens if we have more candidates - millions instead of thousands?
We can simulate this by indexing all of our movies multiple times:
End of explanation
brute_force_lots = tfrs.layers.factorized_top_k.BruteForce()
brute_force_lots.index_from_dataset(
tf.data.Dataset.zip((lots_of_movies, lots_of_movies_embeddings))
)
Explanation: We can build a BruteForce index on this larger dataset:
End of explanation
_, titles = brute_force_lots(model.user_model(np.array(["42"])), k=3)
print(f"Top recommendations: {titles[0]}")
Explanation: The recommendations are still the same
End of explanation
%timeit _, titles = brute_force_lots(model.user_model(np.array(["42"])), k=3)
Explanation: But they take much longer. With a candidate set of 1 million movies, brute force prediction becomes quite slow:
End of explanation
scann = tfrs.layers.factorized_top_k.ScaNN(num_reordering_candidates=100)
scann.index_from_dataset(
tf.data.Dataset.zip((lots_of_movies, lots_of_movies_embeddings))
)
Explanation: As the number of candidates grows, the amount of time needed grows linearly: with 10 million candidates, serving top candidates would take 250 milliseconds. This is clearly too slow for a live service.
This is where approximate mechanisms come in.
Using ScaNN in TFRS is accomplished via the tfrs.layers.factorized_top_k.ScaNN layer. It follows the same interface as the other top k layers:
End of explanation
_, titles = scann(model.user_model(np.array(["42"])), k=3)
print(f"Top recommendations: {titles[0]}")
Explanation: The recommendations are (approximately!) the same
End of explanation
%timeit _, titles = scann(model.user_model(np.array(["42"])), k=3)
Explanation: But they are much, much faster to compute:
End of explanation
# Override the existing streaming candidate source.
model.task.factorized_metrics = tfrs.metrics.FactorizedTopK(
candidates=lots_of_movies_embeddings
)
# Need to recompile the model for the changes to take effect.
model.compile()
%time baseline_result = model.evaluate(test.batch(8192), return_dict=True, verbose=False)
Explanation: In this case, we can retrieve the top 3 movies out of a set of ~1 million in around 2 milliseconds: 15 times faster than by computing the best candidates via brute force. The advantage of approximate methods grows even larger for larger datasets.
Evaluating the approximation
When using approximate top K retrieval mechanisms (such as ScaNN), speed of retrieval often comes at the expense of accuracy. To understand this trade-off, it's important to measure the model's evaluation metrics when using ScaNN, and to compare them with the baseline.
Fortunately, TFRS makes this easy. We simply override the metrics on the retrieval task with metrics using ScaNN, re-compile the model, and run evaluation.
To make the comparison, let's first run baseline results. We still need to override our metrics to make sure they are using the enlarged candidate set rather than the original set of movies:
End of explanation
model.task.factorized_metrics = tfrs.metrics.FactorizedTopK(
candidates=scann
)
model.compile()
# We can use a much bigger batch size here because ScaNN evaluation
# is more memory efficient.
%time scann_result = model.evaluate(test.batch(8192), return_dict=True, verbose=False)
Explanation: We can do the same using ScaNN:
End of explanation
print(f"Brute force top-100 accuracy: {baseline_result['factorized_top_k/top_100_categorical_accuracy']:.2f}")
print(f"ScaNN top-100 accuracy: {scann_result['factorized_top_k/top_100_categorical_accuracy']:.2f}")
Explanation: ScaNN based evaluation is much, much quicker: it's over ten times faster! This advantage is going to grow even larger for bigger datasets, and so for large datasets it may be prudent to always run ScaNN-based evaluation to improve model development velocity.
But how about the results? Fortunately, in this case the results are almost the same:
End of explanation
lots_of_movies_embeddings
# We re-index the ScaNN layer to include the user embeddings in the same model.
# This way we can give the saved model raw features and get valid predictions
# back.
scann = tfrs.layers.factorized_top_k.ScaNN(model.user_model, num_reordering_candidates=1000)
scann.index_from_dataset(
tf.data.Dataset.zip((lots_of_movies, lots_of_movies_embeddings))
)
# Need to call it to set the shapes.
_ = scann(np.array(["42"]))
with tempfile.TemporaryDirectory() as tmp:
path = os.path.join(tmp, "model")
tf.saved_model.save(
scann,
path,
options=tf.saved_model.SaveOptions(namespace_whitelist=["Scann"])
)
loaded = tf.saved_model.load(path)
Explanation: This suggests that on this artificial dataset, there is little loss from the approximation. In general, all approximate methods exhibit speed-accuracy tradeoffs. To understand this in more depth you can check out Erik Bernhardsson's ANN benchmarks.
Deploying the approximate model
The ScaNN-based model is fully integrated into TensorFlow models, and serving it is as easy as serving any other TensorFlow model.
We can save it as a SavedModel object
End of explanation
_, titles = loaded(tf.constant(["42"]))
print(f"Top recommendations: {titles[0][:3]}")
Explanation: and then load it and serve, getting exactly the same results back:
End of explanation
# Process queries in groups of 1000; processing them all at once with brute force
# may lead to out-of-memory errors, because processing a batch of q queries against
# a size-n dataset takes O(nq) space with brute force.
titles_ground_truth = tf.concat([
brute_force_lots(queries, k=10)[1] for queries in
test.batch(1000).map(lambda x: model.user_model(x["user_id"]))
], axis=0)
Explanation: The resulting model can be served in any Python service that has TensorFlow and ScaNN installed.
It can also be served using a customized version of TensorFlow Serving, available as a Docker container on Docker Hub. You can also build the image yourself from the Dockerfile.
Tuning ScaNN
Now let's look into tuning our ScaNN layer to get a better performance/accuracy tradeoff. In order to do this effectively, we first need to measure our baseline performance and accuracy.
From above, we already have a measurement of our model's latency for processing a single (non-batched) query (although note that a fair amount of this latency is from non-ScaNN components of the model).
Now we need to investigate ScaNN's accuracy, which we measure through recall. A recall@k of x% means that if we use brute force to retrieve the true top k neighbors, and compare those results to using ScaNN to also retrieve the top k neighbors, x% of ScaNN's results are in the true brute force results. Let's compute the recall for the current ScaNN searcher.
First, we need to generate the brute force, ground truth top-k:
End of explanation
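To make the TensorFlow Serving remark above concrete, here is a minimal sketch of how a served copy of the index could be queried over its REST API. The model name "scann_index", the host and the port are illustrative assumptions only; they are not defined anywhere in this tutorial, and the server must be a ScaNN-enabled TensorFlow Serving build.
import json
import requests
# Hypothetical request to a TF Serving instance that has loaded the SavedModel
# under the (assumed) name "scann_index"; "42" is the user id we query for.
response = requests.post(
    "http://localhost:8501/v1/models/scann_index:predict",
    data=json.dumps({"instances": ["42"]}),
)
print(response.json())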
# Get all user_id's as a 1d tensor of strings
test_flat = np.concatenate(list(test.map(lambda x: x["user_id"]).batch(1000).as_numpy_iterator()), axis=0)
# ScaNN is much more memory efficient and has no problem processing the whole
# batch of 20000 queries at once.
_, titles = scann(test_flat, k=10)
Explanation: Our variable titles_ground_truth now contains the top-10 movie recommendations returned by brute-force retrieval. Now we can compute the same recommendations when using ScaNN:
End of explanation
def compute_recall(ground_truth, approx_results):
return np.mean([
len(np.intersect1d(truth, approx)) / len(truth)
for truth, approx in zip(ground_truth, approx_results)
])
Explanation: Next, we define our function that computes recall. For each query, it counts how many results are in the intersection of the brute force and the ScaNN results and divides this by the number of brute force results. The average of this quantity over all queries is our recall.
End of explanation
print(f"Recall: {compute_recall(titles_ground_truth, titles):.3f}")
Explanation: This gives us baseline recall@10 with the current ScaNN config:
End of explanation
%timeit -n 1000 scann(np.array(["42"]), k=10)
Explanation: We can also measure the baseline latency:
End of explanation
scann2 = tfrs.layers.factorized_top_k.ScaNN(
model.user_model,
num_leaves=1000,
num_leaves_to_search=100,
num_reordering_candidates=1000)
scann2.index_from_dataset(
tf.data.Dataset.zip((lots_of_movies, lots_of_movies_embeddings))
)
_, titles2 = scann2(test_flat, k=10)
print(f"Recall: {compute_recall(titles_ground_truth, titles2):.3f}")
Explanation: Let's see if we can do better!
To do this, we need a model of how ScaNN's tuning knobs affect performance. Our current model uses ScaNN's tree-AH algorithm. This algorithm partitions the database of embeddings (the "tree") and then scores the most promising of these partitions using AH, which is a highly optimized approximate distance computation routine.
The default parameters for TensorFlow Recommenders' ScaNN Keras layer sets num_leaves=100 and num_leaves_to_search=10. This means our database is partitioned into 100 disjoint subsets, and the 10 most promising of these partitions is scored with AH. This means 10/100=10% of the dataset is being searched with AH.
If we have, say, num_leaves=1000 and num_leaves_to_search=100, we would also be searching 10% of the database with AH. However, in comparison to the previous setting, the 10% we would search will contain higher-quality candidates, because a higher num_leaves allows us to make finer-grained decisions about what parts of the dataset are worth searching.
It's no surprise then that with num_leaves=1000 and num_leaves_to_search=100 we get significantly higher recall:
End of explanation
%timeit -n 1000 scann2(np.array(["42"]), k=10)
Explanation: However, as a tradeoff, our latency has also increased. This is because the partitioning step has gotten more expensive; scann picks the top 10 of 100 partitions while scann2 picks the top 100 of 1000 partitions. The latter can be more expensive because it involves looking at 10 times as many partitions.
End of explanation
scann3 = tfrs.layers.factorized_top_k.ScaNN(
model.user_model,
num_leaves=1000,
num_leaves_to_search=70,
num_reordering_candidates=400)
scann3.index_from_dataset(
tf.data.Dataset.zip((lots_of_movies, lots_of_movies_embeddings))
)
_, titles3 = scann3(test_flat, k=10)
print(f"Recall: {compute_recall(titles_ground_truth, titles3):.3f}")
Explanation: In general, tuning ScaNN search is about picking the right tradeoffs. Each individual parameter change generally won't make search both faster and more accurate; our goal is to tune the parameters to optimally trade off between these two conflicting goals.
In our case, scann2 significantly improved recall over scann at some cost in latency. Can we dial back some other knobs to cut down on latency, while preserving most of our recall advantage?
Let's try searching 70/1000=7% of the dataset with AH, and only rescoring the final 400 candidates:
End of explanation
%timeit -n 1000 scann3(np.array(["42"]), k=10)
Explanation: scann3 delivers about a 3% absolute recall gain over scann while also delivering lower latency:
End of explanation |
15,395 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Now fundamentally the data frame is just an abstraction, but it provides a ton of useful tools that you're going to get to see. This video is just going to go over the basic idea of the data frame as well as how to create one.
Step1: You can create DataFrames by passing in np arrays, lists of series, or dictionaries.
Step2: We’ll be covering a lot of different aspects here but as always we’re going to start with the simple stuff. A simplification of a data frame is like an excel table or sql table. You’ve got columns and rows.
In more specific pandas terms, it's a more powerful list of series. Each column is a Series of data and it just so happens these can have relationships.
You can see that if we just pass in a list of lists it treats each list as a row. Of course, if that's an issue, we can just transpose it and we'll get them as columns.
Step3: This should be familiar because it’s the same way that we transpose ndarrays in numpy.
Of course we can also specify them as explicit columns by passing in a dictionary where the keys are the column names and the values are the lists of values for each column.
Step4: Now you’ll see that if these lengths are not the same, we’ll get a ValueError so it’s worth checking to make sure your data is clean before importing or using it to create a DataFrame
Step5: We can rename the columns easily and even add a new one through a relatively simple dictionary like assignment. I'll go over some more complex methods later on.
Step6: Now just like Series, DataFrames have data types, we can get those by accessing the dtypes of the DataFrame which will give us details on the data types we've got.
Step7: Of course we can sort, either by a specific column with sort_values or by the index with sort_index.
Step8: We've seen how to query for one column and multiple columns isn't too much more difficult.
We can get upper and lower case columns
Step9: We can also just query the index as well. We went over a lot of that in the Series Section and a lot of the same applies here.
We can query by index location or by letters | Python Code:
import string
upcase = [x for x in string.ascii_uppercase]
lcase = [x for x in string.ascii_lowercase]
print(upcase[:5], lcase[:5])
Explanation: Now fundamentally the data frame is just an abstraction, but it provides a ton of useful tools that you're going to get to see. This video is just going to go over the basic idea of the data frame as well as how to create one.
End of explanation
pd.DataFrame([upcase, lcase])
Explanation: You can create DataFrames by passing in np arrays, lists of series, or dictionaries.
End of explanation
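The cell above only demonstrates the list-of-lists case. As a brief aside (assuming numpy and pandas are already imported as np and pd, as they are used elsewhere in this notebook, and with purely illustrative column labels), the NumPy-array and list-of-Series constructions mentioned above look like this:
# From a NumPy array, transposed so each letter list becomes a column
pd.DataFrame(np.array([upcase, lcase]).T, columns=['uppercase', 'lowercase']).head()
# From a list of Series
pd.DataFrame([pd.Series(upcase), pd.Series(lcase)]).T.head()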
pd.DataFrame([upcase, lcase]).T
Explanation: We’ll be covering a lot of different aspects here but as always we’re going to start with the simple stuff. A simplification of a data frame is like an excel table or sql table. You’ve got columns and rows.
In more specific pandas terms, it's a more powerful list of series. Each column is a Series of data and it just so happens these can have relationships.
You can see that if we just pass in a list of lists it treats each list as a row. Of course, if that's an issue, we can just transpose it and we'll get them as columns.
End of explanation
letters = pd.DataFrame({'lowercase':lcase, 'uppercase':upcase})
letters.head()
Explanation: This should be familiar because it’s the same way that we transpose ndarrays in numpy.
Of course we can also specify them as explicit columns by passing in a dictionary where the keys are the column names and the values are the lists of values for each column.
End of explanation
pd.DataFrame({'lowercase':lcase + [0], 'uppercase':upcase})
letters.head()
Explanation: Now you’ll see that if these lengths are not the same, we’ll get a ValueError so it’s worth checking to make sure your data is clean before importing or using it to create a DataFrame
End of explanation
letters.columns = ['LowerCase','UpperCase']
np.random.seed(25)
letters['Number'] = np.random.randint(1, 51, 26)  # random_integers is deprecated; randint's upper bound is exclusive
letters
Explanation: We can rename the columns easily and even add a new one through a relatively simple dictionary like assignment. I'll go over some more complex methods later on.
End of explanation
letters.dtypes
letters.index = lcase
letters
Explanation: Now just like Series, DataFrames have data types, we can get those by accessing the dtypes of the DataFrame which will give us details on the data types we've got.
End of explanation
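A small aside the video doesn't cover: dtypes pairs naturally with astype, which converts a column to another type when needed.
# For example, treat the integer scores as floats (illustrative only)
letters['Number'].astype(float).head()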
letters.sort_values('Number')
letters.sort_index()
Explanation: Of course we can sort, either by a specific column with sort_values or by the index with sort_index.
End of explanation
letters[['LowerCase','UpperCase']].head()
Explanation: We've seen how to query for one column and multiple columns isn't too much more difficult.
We can get upper and lower case columns
End of explanation
letters.iloc[5:10]
letters["f":"k"]
Explanation: We can also just query the index as well. We went over a lot of that in the Series Section and a lot of the same applies here.
We can query by index location or by letters
End of explanation |
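One more selection tool not shown above: .loc does label-based selection and can pick rows and columns at the same time (a brief aside, not from the video).
# Rows 'f' through 'k' (inclusive, since labels are used) and only the Number column
letters.loc['f':'k', ['Number']]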
15,396 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assess Huntington's disease progression from PET/MR images
Step1: Inflammation assessment using PBR scans
Participants information
Step2: We also set some colors for the groups.
Step3: Principal component analysis
We extract all voxels for each subject and region directly from the nifti images and the masks generated from Freesurfer.
We use principal component analysis to transform each subject into the vector space spanned by the eigenvectors of the covariance matrix. The plots show the components along each axis in this subspace for each patient. The axes are ordered according to the magnitude of the corresponding eigenvalues.
We choose a few statistical quantities to describe each region
Step4: Extract the voxel data, calculate statistical features for all regions and subjects, and put them into a pd.DataFrame.
Step5: Calculate the principal component rotated basis
Step6: Let's order the data according to the score in the plot above.
Step7: Just a couple of features classify inflammation in patients vs controls. Even a single feature classifies the subjects into at least two groups
Step8: Region of interest analysis
Step9: We can assess the difference between the distributions above using a permutation test. We compare the distributions for the control and HD for each region using the mean and the median of each subject as input.
Step10: Correlation between caudate volumes and inflammation
The volume of each region has a lot of variability, even among subjects with similar disease progression. We normalize the volumes and show the correlation between the normalized volumes and the value of the inflammation classifier.
Step11: Calculate the results of the regression for the other regions
Step12: Thalamic nuclei analysis
Get the images in MNI space, instead of subject space, so we can use the atlas, and then apply all the masks in the atlas to each of the subject's MNI images. The result is a dictionary masked_img with the masked images for each nucleus and patient.
As the license of the Morel atlas does not allow redistribution, you need to get a licensed copy of the Morel atlas before using the code below to generate the plots yourself.
Step13: Plot the histogram of uptake for each relevant nucleus and subject. For the remaining nuclei in the Morel Atlas, differences between controls and patients are not as marked as for those selected. The vertical lines are a guide to the eye and mark the value of SUVR corresponding to the 95th percentile of all voxels in the control group. | Python Code:
import itertools
import glob
import os
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import gridspec
import nibabel as nib
import numpy as np
import pandas as pd
import seaborn as sns
import hd_classifier
Explanation: Assess Huntington's disease progression from PET/MR images
End of explanation
participants_df = pd.read_csv('data/participants.tsv', delimiter='\t')
participants_df
Explanation: Inflammation assessment using PBR scans
Participants information
End of explanation
subject_colors = {'HD': 'r', 'pre-HD': 'g', 'control': 'b'}
Explanation: We also set some colors for the groups.
End of explanation
q_1 = lambda x: np.percentile(x, q=25)
q_3 = lambda x: np.percentile(x, q=75)
features = {'value' : {'min' : np.min, 'max' : np.max, 'mean' : np.mean,
'q_1' : q_1, 'median' : np.median, 'q_3' : q_3 }}
Explanation: Principal component analysis
We extract all voxels for each subject and region directly from the nifti images and the masks generated from Freesurfer.
We use principal component analysis to transform each subject into the vector space spanned by the eigenvectors of the covariance matrix. The plots show the components along each axis in this subspace for each patient. The axes are ordered according to the magnitude of the corresponding eigenvalues.
We choose a few statistical quantities to describe each region
End of explanation
path_to_tracer_data = './data/func'
items = os.listdir(path_to_tracer_data)
subjects = hd_classifier.make_subjects(items, path_to_tracer_data)
image_filter = '^r.*\.lin_T1_orientOK_skullstripped_norm_sm6mm.nii'
masked_region_df_pbr, masked_region_features_pbr = hd_classifier.extract_features(subjects, features, image_filter)
masked_region_features_pbr
Explanation: Extract the voxel data, calculate statistical features for all regions and subjects, and put them into a pd.DataFrame.
End of explanation
from sklearn.decomposition import PCA
pca = PCA(whiten=True)
pca.fit(masked_region_features_pbr.as_matrix())
rotated_subjects_pbr = pca.fit_transform(masked_region_features_pbr.as_matrix())
# Note that we flip the axis of the first feature, assigned randomly by the algorithm, so increases from
# controls to patients.
pca_results = pd.DataFrame(dict(first_feature=-rotated_subjects_pbr[:,0],
second_feature=rotated_subjects_pbr[:,1],
subject_id = masked_region_features_pbr.index.values))
pca_results.sort_values(by='first_feature', inplace=True)
pca_results.reset_index(drop=True, inplace=True)
pca_results = pca_results.merge(participants_df[['subject_id', 'group']], on='subject_id')
pca_results
# Make a list with all the name features as region_feature to label the plots. E.g. thalamus_min
pca_features = [(item[1] + ' ' + item[0]).title()
for item in itertools.product(list(masked_region_features_pbr.columns.levels[1]),
list(masked_region_features_pbr.columns.levels[2]))]
# Get the coefficient of each feature for the first 3 principal axis
transformed_features = sorted(zip(pca_features, *pca.components_[0:4]), key=lambda t: t[0])
Explanation: Calculate the principal component rotated basis
End of explanation
masked_region_df_pbr['subject_id'] = pd.Categorical(masked_region_df_pbr['subject_id'], pca_results.subject_id)
Explanation: Let's order the data according to the score in the plot above.
End of explanation
# Some values overlap in the final plot, so we add some ad-hoc jittering
jitters = np.zeros(14)
jitters[0] = -.01
jitters[1] = .02
jitters[2] = -.02
jitters[3] = .01
jitters[4] = -.02
jitters[5] = .02
jitters[6] = -.03
jitters[7] = .03
jitters[8] = -.01
jitters[9] = .01
jitters[10] = -.03
jitters[11] = .03
jitters[12] = -.01
jitters[13] = .02
joined = pca_results.join(pd.DataFrame(dict(jitters=jitters)))
groups = joined.groupby('group')
sns.set_style("ticks")
fig = plt.figure(figsize=(14, 6))
outer_grid = gridspec.GridSpec(1, 2, wspace=0.1, width_ratios=[4,4])
right_plot = gridspec.GridSpecFromSubplotSpec(2, 1,
subplot_spec=outer_grid[0], hspace=0, height_ratios=[1,3])
marker_size = 350
# plot the first and second components in a scatter plot
ax = plt.Subplot(fig, right_plot[1])
for name, items in groups:
ax.scatter(items.first_feature, items.second_feature, s=marker_size, alpha=0.4,
c=subject_colors[name], label=name)
for idx in range(len(pca_results.index)):
ax.text(pca_results.first_feature[idx], pca_results.second_feature[idx], str(idx+1),
horizontalalignment='center', verticalalignment='center')
ax.set_xlabel('first PCA axis')
ax.set_ylabel('second PCA axis')
ax.legend(labelspacing=1.4)
fig.add_subplot(ax)
sns.set_style("whitegrid")
# plot the first component in a line on top
ax_top = plt.Subplot(fig, right_plot[0])
for name, items in groups:
ax_top.scatter(items.first_feature, items.jitters, s=marker_size, alpha=0.4,
color=subject_colors[name], label=name)
ax_top.set_xticks([])
ax_top.set_yticks([0])
ax_top.set_yticklabels([])
ax_top.set_ylabel('')
ax_top.xaxis.set_label_coords(0.5, 0.88)
ax_top.set_xlabel('first PCA axis')
ax_top.set(xlim=ax.get_xlim())
for sp in ax_top.spines.values(): sp.set_visible(False)
fig.add_subplot(ax_top)
# plot the eigenvectors in the original feature space
number_of_components = 3
number_of_features = len(transformed_features)
left_plot = gridspec.GridSpecFromSubplotSpec(1, number_of_components,
subplot_spec=outer_grid[1], wspace=0.18)
for i in range(number_of_components):
ax = plt.Subplot(fig, left_plot[i])
ax.plot([item[i+1] if i != 0 else -item[i+1] for item in transformed_features], list(range(number_of_features)))
ax.set_title(round(pca.explained_variance_ratio_[i],2))
ax.set(xlim=(-.4, .4))
ax.set(ylim=(0, number_of_features-1))
ax.set_yticks(list(range(number_of_features)))
ax.set_yticklabels([item[0] for item in transformed_features] if i==2 else [])
ax.yaxis.tick_right()
ax.set_xticks([-0.4, -.2, 0, .2, 0.4])
ax.set_xticklabels([-0.4, -0.2, 0, 0.2, 0.4])
for i in range(0,4):
ax.axhline(y=5.5+6*i, ls='dashed', c='black', alpha=0.4)
fig.add_subplot(ax)
fig.savefig('results/figs/pca_analysis.pdf', format='pdf')
Explanation: Just a couple of features classify inflammation in patients vs controls. Even a single feature classifies the subjects into at least two groups
End of explanation
merged = masked_region_df_pbr.merge(pca_results[['group', 'subject_id', 'first_feature']], on='subject_id')
merged.sort_values(by='first_feature', inplace=True)
merged.reset_index(drop=True, inplace=True)
regions = ['pallidum', 'putamen', 'caudate', 'thalamus']
region_data = list(map(lambda r: merged[merged['region'] == r], regions))
def plot_roi_histograms(data, ax, ax_hist):
sns.violinplot(data=data, x="subject_id", y="value", bw=.2,
scale='count', cut=1, linewidth=1, ax=ax)
groups = data.groupby('group')
for name, group in groups:
sns.kdeplot(group['value'], vertical=True, ax=ax_hist,
label=name, color=subject_colors[name],
ls=('--' if name=='pre-HD' else '-'))
fig = plt.figure(figsize=(14, 6))
# gridspec inside gridspec
outer_grid = gridspec.GridSpec(2, 2, wspace=0.1, hspace=0.1)
for i in range(4):
inner_grid = gridspec.GridSpecFromSubplotSpec(1, 2,
subplot_spec=outer_grid[i], wspace=0.0, hspace=0.0, width_ratios=[9,2],)
ax = plt.Subplot(fig, inner_grid[0])
ax_hist = plt.Subplot(fig, inner_grid[1])
plot_roi_histograms(region_data[i], ax, ax_hist)
ax.set_title(regions[i])
ax.set(ylim=(masked_region_df_pbr['value'].min(), masked_region_df_pbr['value'].max()))
ax_hist.set(ylim=(masked_region_df_pbr['value'].min(), masked_region_df_pbr['value'].max()))
ax.set_yticks([0.6, 0.8, 1, 1.2, 1.4, 1.6])
# show only xticklabels only for the lower plots and show the patient number instead of subject_id
if i in [0,1]:
ax.set_xticks([])
else:
ax.set_xticklabels(list(range(1, len(merged)+1)))
ax_hist.set_xticks([])
ax_hist.set_yticks([])
ax.set_xlabel('')
ax.set_ylabel('')
fig.add_subplot(ax)
fig.add_subplot(ax_hist)
all_axes = fig.get_axes()
#show only the outside spines
for ax in all_axes:
for sp in ax.spines.values():
sp.set_visible(False)
plt.savefig('results/figs/regions_of_interest_pbr.pdf', format='pdf')
Explanation: Region of interest analysis
End of explanation
def bipartition_means_difference(seq, size_l, size_r):
'''
Calculate the difference between the averages of a sequence bipartition
Parameters
==========
seq: numpy array like object
size_l: int size of the left part
size_r: int size of the right part
'''
if len(seq) != size_l + size_r:
raise Exception('Not a bipartition')
np.random.shuffle(seq)
left = seq[:size_l]
right = seq[-size_r:]
return (left.mean() - right.mean())
def permutatation_test(z, y, num_samples):
'''
Calculate the p-value for a permutation test with num_samples
Parameters
==========
z: numpy array like object with one group of observations
y: numpy array like object with another group of observations
num_samples: int with the number of samples
'''
pooled = np.hstack([z,y])
delta = z.mean() - y.mean()
estimates = map(lambda x: bipartition_means_difference(pooled, z.size, y.size),
range(num_samples))
count = len(list(filter(lambda x: x > delta, estimates)))
return 1.0 - float(count)/float(num_samples)
num_samples = 100000
for stat in ['median', 'mean']:
print('Permutations test p-values using the {0} as input'.format(stat))
for region in regions:
controls = np.array(masked_region_features_pbr['value'][stat][region][:3])
hd = np.array(masked_region_features_pbr['value'][stat][region][4:])
print (region, permutatation_test(controls, hd, num_samples))
Explanation: We can assess the difference between the distributions above using a permutation test. We compare the distributions for the control and HD for each region using the mean and the median of each subject as input.
End of explanation
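As a quick sanity check of the permutation test defined above (illustrative only, not part of the original analysis), two samples drawn from the same distribution should not look significantly different, so the returned value should sit well away from the extremes, typically near 0.5.
# Two samples from the same normal distribution -> no real group difference expected
same_a = np.random.normal(loc=1.0, scale=0.1, size=30)
same_b = np.random.normal(loc=1.0, scale=0.1, size=30)
print(permutatation_test(same_a, same_b, 10000))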
region_volumes = masked_region_df_pbr.groupby(['subject_id', 'region']).agg('count').unstack()
# Pretty ugly fix: no time to do something smarter
region_volumes.columns = [' '.join(col).strip().split(' ')[1] for col in region_volumes.columns.values]
intracraneal_volume_df = participants_df[['subject_id', 'intracraneal_volume']].set_index('subject_id')
merged = region_volumes.join(intracraneal_volume_df)
merged = merged.div(merged.intracraneal_volume, axis='index')
normalized_region_volumes = hd_classifier.normalize(merged)
normalized_region_volumes.drop('intracraneal_volume', axis=1, inplace=True)
to_fit = normalized_region_volumes.join(pca_results[['first_feature', 'subject_id', 'group']].set_index('subject_id'))
to_fit
from statsmodels.stats.outliers_influence import summary_table
import statsmodels.api as sm
sns.set_style("ticks")
def fit_region_volume_vs_score(to_fit, region):
x = to_fit.first_feature
y = to_fit[region].values
X = sm.add_constant(x)
return (x, y, sm.OLS(y, X).fit())
x, y, re = fit_region_volume_vs_score(to_fit, 'caudate')
print(re.summary())
st, data, ss2 = summary_table(re, alpha=0.05)
fittedvalues = data[:,2]
predict_mean_se = data[:,3]
predict_mean_ci_low, predict_mean_ci_upp = data[:,4:6].T
predict_ci_low, predict_ci_upp = data[:,6:8].T
plt.plot(x, fittedvalues, 'b-', lw=1)
plt.plot(x, predict_ci_low, 'r--', lw=1.5)
plt.plot(x, predict_ci_upp, 'r--', lw=1.5)
plt.plot(x, predict_mean_ci_low, 'b--', lw=1)
plt.plot(x, predict_mean_ci_upp, 'b--', lw=1)
groups = to_fit.groupby('group')
for name, items in groups:
plt.scatter(items.first_feature, items.caudate, s=marker_size, alpha=0.4,
c=subject_colors[name], label=name)
for idx in range(len(to_fit)):
plt.text(x[idx], y[idx], str(idx+1),
horizontalalignment='center', verticalalignment='center')
plt.xlabel('first PCA axis')
plt.ylabel('caudate volume')
plt.legend(labelspacing=1.4)
plt.savefig('results/figs/region_volumes_vs_inflammation_scores.pdf', format='pdf')
Explanation: Correlation between caudate volumes and inflammation
The volume of each region has a lot of variability, even among subjects with similar disease progression. We normalize the volumes and show the correlation between the normalized volumes and the value of the inflammation classifier.
End of explanation
for region in ['pallidum', 'putamen', 'thalamus']:
_, _, re = fit_region_volume_vs_score(to_fit, region)
print('Correlation between ' + region + ' volume and first PC')
print(re.summary())
print('+' * 91)
Explanation: Calculate the results of the regression for the other regions
End of explanation
from functools import reduce
from nilearn.image import resample_to_img, new_img_like
from nilearn.masking import apply_mask
def merge_nuclei_masks(nuclei, nuclei_masks, imgs):
masks_to_join = []
for k in nuclei:
resampled = resample_to_img(nuclei_masks[k], imgs[0])
masks_to_join.append(resampled.get_data() > 0.5)
joint_mask_data_as_bool = reduce(np.logical_or, masks_to_join)
joint_mask_as_bool = new_img_like(imgs[0], joint_mask_data_as_bool)
masked_nuclei = apply_mask(imgs, joint_mask_as_bool)
return masked_nuclei
def find_threshold(imgs, q):
from math import floor
parts = []
for img in imgs:
threshold_index = floor(q * len(img))
parts.append(np.partition(img, threshold_index)[threshold_index:])
merged = np.concatenate(parts)
merged_threshold = floor (1- len(merged) / len(parts))
return np.partition(merged, merged_threshold)[merged_threshold]
def make_masked_nuclei_imgs(imgs, path_to_morel_atlas, excludes):
if not os.path.isdir(path_to_morel_atlas):
raise FileNotFoundError
left_volumes = glob.glob(os.path.join(path_to_morel_atlas, 'left-vols-1mm/*.nii.gz'))
right_volumes = glob.glob(os.path.join(path_to_morel_atlas,'right-vols-1mm/*.nii.gz'))
def parse_nuclei_name(vol):
return os.path.dirname(vol).split('/')[-1].split('-')[0] + '_' + os.path.basename(vol).split('.')[0]
nuclei_mask_dict = { parse_nuclei_name(vol) : vol
for vol in left_volumes + right_volumes
if not ''.join(parse_nuclei_name(vol).split('_')[1:]).startswith(tuple(excludes)) }
nuclei_masks = { k: nib.load(v) for k, v in nuclei_mask_dict.items() }
masked_img = {}
nuclei_sizes = {}
for k, v in nuclei_masks.items():
resampled = resample_to_img(v, imgs[0])
resampled_data_as_bool = resampled.get_data() > 0.5
nuclei_sizes[k] = np.sum(resampled_data_as_bool)
resampled_as_bool = new_img_like(resampled, resampled_data_as_bool)
try:
masked_img[k] = apply_mask(imgs, resampled_as_bool)
except:
print('Something is wrong for nucleus {}, but I can keep going for the rest.'.format(k))
continue
return masked_img, nuclei_masks, nuclei_sizes
path_to_tracer_data = './data/func'
items = os.listdir(path_to_tracer_data)
subjects = hd_classifier.make_subjects(items, path_to_tracer_data)
mni_image_filter = 'nl_MNI152_norm_sm6mm.nii'
subjects.sort(key=lambda s: pca_results[pca_results['subject_id']==s.subject_id].index.tolist()[0])
assert ([s.subject_id for s in subjects] == list(pca_results['subject_id']))
mni_images = map(lambda s: hd_classifier.find_masks(s.images, mni_image_filter), subjects)
imgs = [nib.load(image) for image in list(itertools.chain.from_iterable(mni_images))]
path_to_morel_atlas = './data/private/Atlas/Morel'
try:
excluded_nuclei = [] # = ['global', 'MAX']
masked_img, nuclei_masks, nuclei_sizes = make_masked_nuclei_imgs(imgs, path_to_morel_atlas, excluded_nuclei)
except FileNotFoundError:
print("You need a copy of the Morel Atlas")
Explanation: Thalamic nuclei analysis
Get the images in MNI space, instead of subject space, so we can use the atlas, and then apply all the masks in the atlas to each of the subject's MNI images. The result is a dictionary masked_img with the masked images for each nucleus and patient.
As the license of the Morel atlas does not allow redistribution, you need to get a licensed copy of the Morel atlas before using the code below to generate the plots yourself.
End of explanation
relevant_nuclei_group_names = ['left_VLpv', 'left_VApc', 'left_PuL']
relevant_nuclei_groups = { k: [it for it in list(nuclei_masks.keys()) if k in it] for k in relevant_nuclei_group_names }
masked_relevant_groups = {k: merge_nuclei_masks(v, nuclei_masks, imgs) for k, v in relevant_nuclei_groups.items()}
fig = plt.figure(figsize=(10, 7))
num_subjects = len(subjects)
# gridspec two split all patients and averages
outer_grid = gridspec.GridSpec(1, len(masked_relevant_groups), wspace=0.09, hspace=0)
min_x, max_x = 0.4, 2
pallete = sns.color_palette("hls", num_subjects)
grey_shadow = '#857e7e'
for j, (k, v) in enumerate(masked_relevant_groups.items()):
column_grid = gridspec.GridSpecFromSubplotSpec(num_subjects, 1,
subplot_spec=outer_grid[j], wspace=0, hspace=0.0)
for i in range(0, num_subjects):
threshold = find_threshold(masked_img[k][:6], 0.90)
ax = plt.Subplot(fig, column_grid[i])
ax.set(xlim=(min_x, max_x))
if i == 0: ax.set_title(k)
if i == num_subjects - 1:
ax.set_xticks([min_x, threshold, max_x])
ax.set_xticklabels([min_x, round(threshold, 2), max_x])
else:
ax.set_xticks([])
ax.set_yticks([])
ax.set_yticklabels([])
ax.set_xlabel('')
if j == 0:
ax.set_ylabel(str(i+1), rotation='horizontal')
ax.axvline(x=threshold, color=grey_shadow, ls=':')
sns.kdeplot(v[i], shade=True, color=pallete[i], # use the same colors as before to identify subjects
ax=ax)
#if 'left_PuL' in k:
# sns.kdeplot(masked_img['right_PuL'][i], shade=True, color=grey_shadow, ax=ax)
sns.kdeplot(masked_img['left_global'][i], ls='--', ax=ax, color=grey_shadow)
fig.add_subplot(ax)
plt.savefig('results/figs/nuclei.pdf', format='pdf')
Explanation: Plot the histogram of uptake for each relevant nucleus and subject. For the remaining nuclei in the Morel Atlas, differences between controls and patients are not as marked as for those selected. The vertical lines are a guide to the eye and mark the value of SUVR corresponding to the 95th percentile of all voxels in the control group.
End of explanation |
15,397 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
Step1: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise
Step2: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
Step3: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise
Step4: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
End of explanation
learning_rate = 0.001
# One possible completion of the exercise (3x3 kernels assumed throughout).
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same', activation=None)
# Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
End of explanation
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
End of explanation
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# NOTE: one possible way to fill in the exercise skeleton, using the suggested
# 32-32-16 depths; the 3x3 kernel sizes are a choice, not mandated by the exercise.
conv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same', activation=None)
# Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Set's how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
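A standalone sketch of that noising step (it mirrors the operations in the training loop shown above; the array below is just a stand-in for a batch of images):
clean = np.random.rand(10, 28, 28, 1)                 # stand-in batch of images in [0, 1]
noisy = clean + 0.5 * np.random.randn(*clean.shape)   # add Gaussian noise (noise_factor = 0.5)
noisy = np.clip(noisy, 0., 1.)                        # clip back into the valid pixel range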
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
Explanation: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation |
15,398 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression with L2 regularization
The goal of this second notebook is to implement your own logistic regression classifier with L2 regularization. You will do the following
Step1: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
Step2: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations
Step3: Now, let us take a look at what the dataset looks like (Note
Step4: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=2 so that everyone gets the same result.
Note
Step5: Convert SFrame to NumPy array
Just like in the second assignment of the previous module, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned
Step6: We convert both the training and validation sets into NumPy arrays.
Warning
Step7: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in an acceptable amount of time. In the interest of time, please refrain from running the get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands
Step8: Adding L2 penalty
Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is
Step9: Quiz question
Step10: Quiz question
Step11: Explore effects of L2 regularization
Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
Step12: Compare coefficients
We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.
Below is a simple helper function that will help us create this table.
Step13: Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.
Step14: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Quiz Question. Which of the following is not listed in either positive_words or negative_words?
Step15: Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
Step16: Run the following cell to generate the plot. Use the plot to answer the following quiz question.
Step17: Quiz Question
Step18: Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models. | Python Code:
from __future__ import division
import graphlab
Explanation: Logistic Regression with L2 regularization
The goal of this second notebook is to implement your own logistic regression classifier with L2 regularization. You will do the following:
Extract features from Amazon product reviews.
Convert an SFrame into a NumPy array.
Write a function to compute the derivative of log likelihood function with an L2 penalty with respect to a single coefficient.
Implement gradient ascent with an L2 penalty.
Empirically explore how the L2 penalty can ameliorate overfitting.
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create. Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
products = graphlab.SFrame('amazon_baby_subset.gl/')
Explanation: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
End of explanation
# The same feature processing (same as the previous assignments)
# ---------------------------------------------------------------
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
# Remove punctuation.
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
Explanation: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Compute word counts (only for the important_words)
Refer to Module 3 assignment for more details.
End of explanation
products
Explanation: Now, let us take a look at what the dataset looks like (Note: This may take a few minutes).
End of explanation
train_data, validation_data = products.random_split(.8, seed=2)
print 'Training set : %d data points' % len(train_data)
print 'Validation set : %d data points' % len(validation_data)
Explanation: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=2 so that everyone gets the same result.
Note: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters. Thus, this portion of data should be called a validation set. Recall that examining performance of various potential models (i.e. models with different parameters) should be on a validation set, while evaluation of selected model should always be on a test set.
End of explanation
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
Explanation: Convert SFrame to NumPy array
Just like in the second assignment of the previous module, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.
Note: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.
End of explanation
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
Explanation: We convert both the training and validation sets into NumPy arrays.
Warning: This may take a few minutes.
End of explanation
'''
produces probablistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
product = feature_matrix.dot(coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = 1 / (1 + np.exp(-product))
return predictions
Explanation: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in an acceptable amount of time. In the interest of time, please refrain from running the get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
arrays = np.load('module-4-assignment-numpy-arrays.npz')
feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']
feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']
Building on logistic regression with no L2 penalty assignment
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.
We will use the same code as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)
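As a quick sanity check of predict_probability, it can be run on a tiny hand-made example (all numbers below are invented purely for illustration):
toy_feature_matrix = np.array([[1., 2., 0.],
                               [1., 0., 3.]])   # two data points; first column is the intercept
toy_coefficients = np.array([0.5, -0.25, 0.1])
print predict_probability(toy_feature_matrix, toy_coefficients)   # both outputs lie strictly between 0 and 1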
End of explanation
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
# Compute the dot product of errors and feature
    derivative = np.dot(errors, feature)
# add L2 penalty term for any feature that isn't the intercept.
if not feature_is_constant:
derivative = derivative - (2 * l2_penalty * coefficient)
return derivative
Explanation: Adding L2 penalty
Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Adding L2 penalty to the derivative
It takes only a small modification to add a L2 penalty. All terms indicated in red refer to terms that were added due to an L2 penalty.
Recall from the lecture that the link function is still the sigmoid:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
We add the L2 penalty term to the per-coefficient derivative of log likelihood:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
The per-coefficient derivative for logistic regression with an L2 penalty is as follows:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
and for the intercept term, we have
$$
\frac{\partial\ell}{\partial w_0} = \sum_{i=1}^N h_0(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Note: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting because the intercept is not associated with any particular feature.
Write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. Unlike its counterpart in the last assignment, the function accepts five arguments:
* errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$
* coefficient containing the current value of coefficient $w_j$.
* l2_penalty representing the L2 penalty constant $\lambda$
* feature_is_constant telling whether the $j$-th feature is constant or not.
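A tiny hand-made sanity check of the function defined above (all numbers invented purely for illustration):
toy_errors = np.array([0.5, -0.25])
toy_feature = np.array([1., 2.])    # dot product with toy_errors is 0.0
toy_coefficient = 1.0
print feature_derivative_with_L2(toy_errors, toy_feature, toy_coefficient, 10., False)   # 0.0 - 2*10*1.0 = -20.0
print feature_derivative_with_L2(toy_errors, toy_feature, toy_coefficient, 10., True)    # intercept: no L2 term, so 0.0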
End of explanation
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
return lp
Explanation: Quiz question: In the code above, was the intercept term regularized?
To verify the correctness of the gradient ascent algorithm, we provide a function for computing log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) \color{red}{-\lambda\|\mathbf{w}\|_2^2} $$
End of explanation
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
is_intercept = (j == 0)
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
derivative = feature_derivative_with_L2(errors, feature_matrix[:,j], coefficients[j], l2_penalty, is_intercept)
# add the step size times the derivative to the current coefficient
coefficients[j] = coefficients[j] + (step_size * derivative)
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
Explanation: Quiz question: Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$?
The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.
End of explanation
# run with L2 = 0
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=0, max_iter=501)
# run with L2 = 4
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=4, max_iter=501)
# run with L2 = 10
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=10, max_iter=501)
# run with L2 = 1e2
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e2, max_iter=501)
# run with L2 = 1e3
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e3, max_iter=501)
# run with L2 = 1e5
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e5, max_iter=501)
Explanation: Explore effects of L2 regularization
Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
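These six training runs could also be expressed as a single loop; a minimal sketch (not part of the original assignment) is:
coefficients_by_penalty = {}
for l2_penalty in [0, 4, 10, 1e2, 1e3, 1e5]:
    coefficients_by_penalty[l2_penalty] = logistic_regression_with_L2(
        feature_matrix_train, sentiment_train,
        initial_coefficients=np.zeros(194),
        step_size=5e-6, l2_penalty=l2_penalty, max_iter=501)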
End of explanation
table = graphlab.SFrame({'word': ['(intercept)'] + important_words})
def add_coefficients_to_table(coefficients, column_name):
table[column_name] = coefficients
return table
Explanation: Compare coefficients
We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.
Below is a simple helper function that will help us create this table.
End of explanation
add_coefficients_to_table(coefficients_0_penalty, 'coefficients [L2=0]')
add_coefficients_to_table(coefficients_4_penalty, 'coefficients [L2=4]')
add_coefficients_to_table(coefficients_10_penalty, 'coefficients [L2=10]')
add_coefficients_to_table(coefficients_1e2_penalty, 'coefficients [L2=1e2]')
add_coefficients_to_table(coefficients_1e3_penalty, 'coefficients [L2=1e3]')
add_coefficients_to_table(coefficients_1e5_penalty, 'coefficients [L2=1e5]')
Explanation: Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.
End of explanation
positive_words = table.topk('coefficients [L2=0]', 5, reverse = False)['word']
negative_words = table.topk('coefficients [L2=0]', 5, reverse = True)['word']
print positive_words
print negative_words
Explanation: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Quiz Question. Which of the following is not listed in either positive_words or negative_words?
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
def make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):
cmap_positive = plt.get_cmap('Reds')
cmap_negative = plt.get_cmap('Blues')
xx = l2_penalty_list
plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')
table_positive_words = table.filter_by(column_name='word', values=positive_words)
table_negative_words = table.filter_by(column_name='word', values=negative_words)
del table_positive_words['word']
del table_negative_words['word']
for i in xrange(len(positive_words)):
color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))
plt.plot(xx, table_positive_words[i:i+1].to_numpy().flatten(),
'-', label=positive_words[i], linewidth=4.0, color=color)
for i in xrange(len(negative_words)):
color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))
plt.plot(xx, table_negative_words[i:i+1].to_numpy().flatten(),
'-', label=negative_words[i], linewidth=4.0, color=color)
plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)
plt.axis([1, 1e5, -1, 2])
plt.title('Coefficient path')
plt.xlabel('L2 penalty ($\lambda$)')
plt.ylabel('Coefficient value')
plt.xscale('log')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()
Explanation: Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
End of explanation
make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])
Explanation: Run the following cell to generate the plot. Use the plot to answer the following quiz question.
End of explanation
def get_classification_accuracy(feature_matrix, sentiment, coefficients):
scores = np.dot(feature_matrix, coefficients)
apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)
predictions = apply_threshold(scores)
num_correct = (predictions == sentiment).sum()
accuracy = num_correct / len(feature_matrix)
return accuracy
Explanation: Quiz Question: (True/False) All coefficients consistently get smaller in size as the L2 penalty is increased.
Quiz Question: (True/False) The relative order of coefficients is preserved as the L2 penalty is increased. (For example, if the coefficient for 'cat' was more positive than that for 'dog', this remains true as the L2 penalty increases.)
Measuring accuracy
Now, let us compute the accuracy of the classifier model. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
Recall from lecture that the class prediction is calculated using
$$
\hat{y}_i =
\left\{
\begin{array}{ll}
+1 & h(\mathbf{x}_i)^T\mathbf{w} > 0 \\
-1 & h(\mathbf{x}_i)^T\mathbf{w} \leq 0
\end{array}
\right.
$$
Note: It is important to know that the model prediction code doesn't change even with the addition of an L2 penalty. The only thing that changes is the estimated coefficients used in this prediction.
Based on the above, we will use the same code that was used in Module 3 assignment.
End of explanation
train_accuracy = {}
train_accuracy[0] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_0_penalty)
train_accuracy[4] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_4_penalty)
train_accuracy[10] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_10_penalty)
train_accuracy[1e2] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e2_penalty)
train_accuracy[1e3] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e3_penalty)
train_accuracy[1e5] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e5_penalty)
validation_accuracy = {}
validation_accuracy[0] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_0_penalty)
validation_accuracy[4] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_4_penalty)
validation_accuracy[10] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_10_penalty)
validation_accuracy[1e2] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e2_penalty)
validation_accuracy[1e3] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e3_penalty)
validation_accuracy[1e5] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e5_penalty)
# Build a simple report
for key in sorted(validation_accuracy.keys()):
print "L2 penalty = %g" % key
print "train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key])
print "--------------------------------------------------------------------------------"
Explanation: Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models.
End of explanation |
15,399 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load Data for PyLadies and their local Python User Groups
Step1: Get date for when the PyLadies group was started
Step2: Create some dataframes in Pandas
WARNING I do not know how to properly panda. This is my hacky attempt.
Step3: Now let's graph
Step4: To put it all together now
I'm only going to do a handful, not all the groups. But this should give you an idea.
Also - some shit breaks. Surely I'll fix it in my Copious Amount of Free Time™ | Python Code:
# Imports assumed from the usage below (not shown in this excerpt of the notebook)
import os
import json
import datetime
import pandas as pd
import matplotlib.pyplot as plt
DATA_DIR = "meetup_data"
MEMBER_JSON = "pug_members.json"
GROUP_DIRS = [d for d in os.listdir(DATA_DIR)]
PYLADIES_GROUPS = []
for group in GROUP_DIRS:
if os.path.isdir(os.path.join(DATA_DIR, group)):
PYLADIES_GROUPS.append(group)
def load_group_data(pyladies_group):
pyladies_dir = os.path.join(DATA_DIR, pyladies_group)
members_json = os.path.join(pyladies_dir, MEMBER_JSON)
with open(members_json, "r") as f:
return json.load(f)
Explanation: Load Data for PyLadies and their local Python User Groups
End of explanation
def pyladies_created(pyladies_group):
pyladies_dir = os.path.join(DATA_DIR, pyladies_group)
pyladies_file = os.path.join(pyladies_dir, pyladies_group + ".json")
with open(pyladies_file, "r") as f:
data = json.load(f)
created_ms = data.get("created") # in ms after epoch
created_s = created_ms / 1000
created = datetime.datetime.fromtimestamp(created_s)
year = created.strftime('%Y')
month = created.strftime('%m')
day = created.strftime('%d')
return year, month, day
Explanation: Get date for when the PyLadies group was started
End of explanation
# helper function to create a dataframe out of multiple data frames
# one DF per PUG
def _create_dataframe(group, data):
df = pd.read_json(json.dumps(data))
joined = df.loc[:,("id", "joined")]
joined["joined"] = df.joined.apply(lambda x: pd.to_datetime([x], unit="ms"))
joined["mon"] = joined.joined.apply(lambda x: x.month[0])
joined["year"] = joined.joined.apply(lambda x: x.year[0])
agg_joined = joined.groupby(["year", "mon"]).count()
return agg_joined
def collect_dataframes(group_data):
dfs = []
for group in group_data.keys():
data = group_data.get(group)[0]
df = _create_dataframe(group, data)
tmp = {}
tmp[group] = df
dfs.append(tmp)
return dfs
# aggregate dataframes, name columns nicely, etc
def aggregate_dataframes(dfs):
first = dfs.pop(0)
name = first.keys()[0]
_df = first.values()[0]
df = _df.loc[:, ("id", "joined")] # multi indices are hard.
df.rename(columns={"joined": name}, inplace=True)
for d in dfs:
name = d.keys()[0]
_df = d.values()[0]
df[name] = _df["joined"]
df = df.fillna(0)
df.drop('id', axis=1, inplace=True)
return df
Explanation: Create some dataframes in Pandas
WARNING I do not know how to properly panda. This is my hacky attempt.
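For reference, a more idiomatic pandas version of the per-month join counts might look roughly like the sketch below. It is not the author's code, and it assumes the same list-of-member-dicts input with a 'joined' timestamp in milliseconds:
def monthly_join_counts(members):
    df = pd.DataFrame(members)
    joined = pd.to_datetime(df["joined"], unit="ms")
    return df.groupby([joined.dt.year, joined.dt.month]).size()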
End of explanation
# helper function for x-axis labels
def _get_x_axis(current):
updated = []
for item in current:
_date = item.get_text() # u'(2009, 2)'
if _date == "":
updated.append(_date)
else:
_date = _date.strip("(").strip(")").split(",") # [u'2009', u' 2'] # NOQA
_date = [d.strip() for d in _date] # [2009, 2]
label = "{0}-{1}".format(_date[1], _date[0])
updated.append(label)
return updated
# helper function to plot created date annotation
def _created_xy(df, created):
year, month, _ = created
indexlist = df.index.tolist()
created_index = indexlist.index((int(year), int(month)))
return created_index
# helper function to position the annotated text
def _find_max(df, groups):
maxes = [df[g].max() for g in groups]
return max(maxes)
def create_graph(df, pyladies, created, groups):
created = _created_xy(df, created)
created_yarrow = int(round(_find_max(df, groups) * .80))
created_ylabel = int(round(created_yarrow * .80))
graph = df.plot(figsize=(17, 8), linewidth=4)
graph.set_title(pyladies)
xlabels = _get_x_axis(graph.get_xticklabels())
graph.set_xticklabels(xlabels, rotation=45)
graph.set_xlabel("Month")
graph.set_ylabel("# of members joined")
for i, line in enumerate(graph.get_lines()):
line.set_linewidth(3)
graph.legend() # update legend with line weight changes
graph.axvline(x=created, ymin=0.0, ymax=1.0, linewidth=4)
graph.annotate("PyLadies Created", (created, created_yarrow),
xytext=(created - 8, created_ylabel), xycoords="data",
arrowprops=dict(arrowstyle="->", facecolor='black', linewidth=3))
# needs `%matplotlib inline`
plt.show()
# if you'd rather save the graph as an image locally
# output = os.path.join(self.pyladies_dir, pyladies + ".png")
# plt.savefig(output, dpi=300)
Explanation: Now let's graph
End of explanation
%matplotlib inline
def create_pyladies_graph(pyladies):
group_data = load_group_data(pyladies)
created = pyladies_created(pyladies)
dfs = collect_dataframes(group_data)
df = aggregate_dataframes(dfs)
create_graph(df, pyladies, created, group_data.keys())
create_pyladies_graph(PYLADIES_GROUPS[2])
create_pyladies_graph(PYLADIES_GROUPS[11])
create_pyladies_graph(PYLADIES_GROUPS[14])
create_pyladies_graph(PYLADIES_GROUPS[15])
create_pyladies_graph(PYLADIES_GROUPS[19])
create_pyladies_graph(PYLADIES_GROUPS[22])
create_pyladies_graph(PYLADIES_GROUPS[28])
create_pyladies_graph(PYLADIES_GROUPS[30])
create_pyladies_graph(PYLADIES_GROUPS[39])
Explanation: To put it all together now
I'm only going to do a handful, not all the groups. But this should give you an idea.
Also - some shit breaks. Surely I'll fix it in my Copious Amount of Free Time™ :D
End of explanation |