Unnamed: 0 | text_prompt | code_prompt |
---|---|---|
15,500 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import PDK
To import a PDK in gdsfactory you need 2 things
Step1: You can write GDS files only
Step2: Or GDS with YAML metadata information (ports, settings, cells ...)
Step3: This created a mzi.yml file that contains
Step4: import_gds
You can read GDS files into gdsfactory thanks to the import_gds function
import_gds with YAML metadata
import_gds reads the same GDS file from disk without losing any information
Step5: import_gds and add ports from pins
Sometimes the GDS does not have YAML metadata, therefore you need to figure out the port locations, widths and orientations.
gdsfactory provides you with functions that will add ports to the component by looking for pin shapes on specific layers (port_markers or pins)
There are different pin standards supported to automatically add ports to components
Step6: Import PDK
Foundries provide PDKs in different formats and commercial tools.
The easiest way to import a PDK into gdsfactory is to
have each GDS cell in a separate GDS file, or
have one GDS file with all the cells inside
Have a KLayout layermap, which makes it easier to create the LayerMap.
With that you can easily create the PDK as a Python package.
Thanks to having a gdsfactory PDK as a python package you can
version control your PDK using GIT to keep track of changes and work on a team
write tests of your pdk components to avoid unwanted changes from one component to another.
ensure you maintain the quality of the PDK with continuous integration checks
pin the version of gdsfactory, so new updates of gdsfactory won't affect your code
name your PDK version using semantic versioning. For example patches increase the last number (0.0.1 -> 0.0.2)
install your PDK easily with pip install pdk_fab_a and easily interface with other tools
To create a Python package you can start from a customizable template (thanks to cookiecutter)
I usually create a python package by running these 2 commands inside a terminal
pip install cookiecutter
cookiecutter https://github.com/joamatab/cookiecutter-pypackage-minimal | Python Code:
import gdsfactory as gf
c = gf.components.mzi()
c
Explanation: Import PDK
To import a PDK in gdsfactory you need 2 things:
GDS file with all the cells that you want to import in the PDK (or separate GDS files, one per cell)
Klayout layer properties files, to define the Layers that you can use when creating new custom Components.
GDS
GDS files are great for describing geometry thanks to the concept of References, where you store any geometry only once in memory.
For storing device metadata (settings, port locations, port widths, port angles ...) there is no clear standard.
gdsfactory stores that metadata in YAML files, and also has functions to add pins
Component.write_gds() saves GDS
Component.write_gds_with_metadata() saves GDS + YAML metadata
End of explanation
gdspath = c.write_gds("extra/mzi.gds")
Explanation: You can write GDS files only
End of explanation
gdspath = c.write_gds_with_metadata("extra/mzi.gds")
Explanation: Or GDS with YAML metadata information (ports, settings, cells ...)
End of explanation
c.metadata.keys()
Explanation: This created a mzi.yml file that contains:
- ports
- cells (flat list of cells)
- info (function name, module, changed settings, full settings, default settings)
End of explanation
gf.clear_cache()
c = gf.import_gds(gdspath)
c
import gdsfactory as gf
c2 = gf.import_gds(gdspath, name="mzi_sample")
c2
c2.name
c3 = gf.routing.add_fiber_single(c2)
c3
gdspath = c3.write_gds_with_metadata("extra/pdk.gds")
gf.mask.write_labels(gdspath, layer_label=gf.LAYER.LABEL)
Explanation: import_gds
You can read GDS files into gdsfactory thanks to the import_gds function
import_gds with YAML metadata
import_gds reads the same GDS file from disk without losing any information
End of explanation
import gdsfactory as gf
c = gf.components.straight(
decorator=gf.add_pins.add_pins
) # add pins inside the component
c
gdspath = c.write_gds("extra/wg.gds")
gf.clear_cache()
c2 = gf.import_gds(gdspath)
c2
c2.ports # import_gds does not automatically add the pins
c3 = gf.import_gds(gdspath, decorator=gf.add_ports.add_ports_from_markers_inside)
c3
c3.ports
Explanation: import_gds and add ports from pins
Sometimes the GDS does not have YAML metadata, therefore you need to figure out the port locations, widths and orientations.
gdsfactory provides you with functions that will add ports to the component by looking for pin shapes on specific layers (port_markers or pins)
There are different pin standards supported to automatically add ports to components:
PINs towards the inside of the port (port at the outer part of the PIN)
PINs with half of the pin inside and half outside (port at the center of the PIN)
PIN with only labels (no shapes). You have to manually specify the width of the port.
Let's add pins, save a GDS and then import it back.
End of explanation
import gdsfactory as gf
from gdsfactory.layers import lyp_to_dataclass
from gdsfactory.config import PATH
print(lyp_to_dataclass(PATH.klayout_lyp))
# let's create a sample PDK (for demo purposes only) using gdsfactory
# if the PDK is in a commercial tool you can also do this. Make sure you save a single pdk.gds
sample_pdk_cells = gf.grid(
[
gf.components.straight,
gf.components.bend_euler,
gf.components.grating_coupler_elliptical,
]
)
sample_pdk_cells.write_gds("extra/pdk.gds")
sample_pdk_cells
sample_pdk_cells.get_dependencies()
# we write the sample PDK into a single GDS file
gf.clear_cache()
gf.write_cells.write_cells(gdspath="extra/pdk.gds", dirpath="extra/gds")
# Let's generate the script that we need to import each GDS cell into gdsfactory
import gdsfactory as gf
print(gf.write_cells.get_import_gds_script("extra/gds"))
Explanation: Import PDK
Foundries provide PDKs in different formats and commercial tools.
The easiest way to import a PDK into gdsfactory is to
have each GDS cell in a separate GDS file, or
have one GDS file with all the cells inside
Have a KLayout layermap, which makes it easier to create the LayerMap.
With that you can easily create the PDK as a Python package.
Thanks to having a gdsfactory PDK as a python package you can
version control your PDK using GIT to keep track of changes and work on a team
write tests of your pdk components to avoid unwanted changes from one component to another.
ensure you maintain the quality of the PDK with continuous integration checks
pin the version of gdsfactory, so new updates of gdsfactory won't affect your code
name your PDK version using semantic versioning. For example patches increase the last number (0.0.1 -> 0.0.2)
install your PDK easily with pip install pdk_fab_a and easily interface with other tools
To create a Python package you can start from a customizable template (thanks to cookiecutter)
I usually create a python package by running these 2 commands inside a terminal
pip install cookiecutter
cookiecutter https://github.com/joamatab/cookiecutter-pypackage-minimal
It will ask you some questions to fill in the template (name of the package being the most important)
Then you can add the information about the GDS files and the Layers inside that package
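For illustration only, the cells module of such a package might look roughly like the sketch below; the package name, file paths and the use of the gf.cell decorator here are assumptions for this example, not output generated by the notebook:
# pdk_fab_a/cells.py - hypothetical sketch of a PDK cells module
import gdsfactory as gf
gdsdir = "extra/gds"  # one GDS file per cell, as written by write_cells above
@gf.cell
def straight() -> gf.Component:
    # return the fixed foundry cell read back from disk
    return gf.import_gds(f"{gdsdir}/straight.gds")
@gf.cell
def bend_euler() -> gf.Component:
    return gf.import_gds(f"{gdsdir}/bend_euler.gds")
@gf.cell
def grating_coupler_elliptical() -> gf.Component:
    return gf.import_gds(f"{gdsdir}/grating_coupler_elliptical.gds")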
End of explanation |
15,501 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev2 toc-item"><a href="#Setup" data-toc-modified-id="Setup-01"><span class="toc-item-num">0.1 </span>Setup</a></div><div class="lev2 toc-item"><a href="#3-char-model" data-toc-modified-id="3-char-model-02"><span class="toc-item-num">0.2 </span>3 char model</a></div><div class="lev3 toc-item"><a href="#Create-inputs" data-toc-modified-id="Create-inputs-021"><span class="toc-item-num">0.2.1 </span>Create inputs</a></div><div class="lev3 toc-item"><a href="#Create-and-train-model" data-toc-modified-id="Create-and-train-model-022"><span class="toc-item-num">0.2.2 </span>Create and train model</a></div><div class="lev3 toc-item"><a href="#Test-model" data-toc-modified-id="Test-model-023"><span class="toc-item-num">0.2.3 </span>Test model</a></div><div class="lev2 toc-item"><a href="#Our-first-RNN!" data-toc-modified-id="Our-first-RNN!-03"><span class="toc-item-num">0.3 </span>Our first RNN!</a></div><div class="lev3 toc-item"><a href="#Create-inputs" data-toc-modified-id="Create-inputs-031"><span class="toc-item-num">0.3.1 </span>Create inputs</a></div><div class="lev3 toc-item"><a href="#Create-and-train-model" data-toc-modified-id="Create-and-train-model-032"><span class="toc-item-num">0.3.2 </span>Create and train model</a></div><div class="lev3 toc-item"><a href="#Test-model" data-toc-modified-id="Test-model-033"><span class="toc-item-num">0.3.3 </span>Test model</a></div><div class="lev2 toc-item"><a href="#Our-first-RNN-with-keras!" data-toc-modified-id="Our-first-RNN-with-keras!-04"><span class="toc-item-num">0.4 </span>Our first RNN with keras!</a></div><div class="lev2 toc-item"><a href="#Returning-sequences" data-toc-modified-id="Returning-sequences-05"><span class="toc-item-num">0.5 </span>Returning sequences</a></div><div class="lev3 toc-item"><a href="#Create-inputs" data-toc-modified-id="Create-inputs-051"><span class="toc-item-num">0.5.1 </span>Create inputs</a></div><div class="lev3 toc-item"><a href="#Create-and-train-model" data-toc-modified-id="Create-and-train-model-052"><span class="toc-item-num">0.5.2 </span>Create and train model</a></div><div class="lev3 toc-item"><a href="#Test-model" data-toc-modified-id="Test-model-053"><span class="toc-item-num">0.5.3 </span>Test model</a></div><div class="lev3 toc-item"><a href="#Sequence-model-with-keras" data-toc-modified-id="Sequence-model-with-keras-054"><span class="toc-item-num">0.5.4 </span>Sequence model with keras</a></div><div class="lev3 toc-item"><a href="#One-hot-sequence-model-with-keras" data-toc-modified-id="One-hot-sequence-model-with-keras-055"><span class="toc-item-num">0.5.5 </span>One-hot sequence model with keras</a></div><div class="lev2 toc-item"><a href="#Stateful-model-with-keras" data-toc-modified-id="Stateful-model-with-keras-06"><span class="toc-item-num">0.6 </span>Stateful model with keras</a></div><div class="lev2 toc-item"><a href="#Theano-RNN" data-toc-modified-id="Theano-RNN-07"><span class="toc-item-num">0.7 </span>Theano RNN</a></div><div class="lev2 toc-item"><a href="#Pure-python-RNN!" 
data-toc-modified-id="Pure-python-RNN!-08"><span class="toc-item-num">0.8 </span>Pure python RNN!</a></div><div class="lev3 toc-item"><a href="#Set-up-basic-functions" data-toc-modified-id="Set-up-basic-functions-081"><span class="toc-item-num">0.8.1 </span>Set up basic functions</a></div><div class="lev3 toc-item"><a href="#Set-up-training" data-toc-modified-id="Set-up-training-082"><span class="toc-item-num">0.8.2 </span>Set up training</a></div><div class="lev2 toc-item"><a href="#Keras-GRU" data-toc-modified-id="Keras-GRU-09"><span class="toc-item-num">0.9 </span>Keras GRU</a></div><div class="lev2 toc-item"><a href="#Theano-GRU" data-toc-modified-id="Theano-GRU-010"><span class="toc-item-num">0.10 </span>Theano GRU</a></div><div class="lev3 toc-item"><a href="#Separate-weights" data-toc-modified-id="Separate-weights-0101"><span class="toc-item-num">0.10.1 </span>Separate weights</a></div><div class="lev3 toc-item"><a href="#Combined-weights" data-toc-modified-id="Combined-weights-0102"><span class="toc-item-num">0.10.2 </span>Combined weights</a></div><div class="lev3 toc-item"><a href="#End" data-toc-modified-id="End-0103"><span class="toc-item-num">0.10.3 </span>End</a></div>
Step1: Setup
We're going to download the collected works of Nietzsche to use as our data for this class.
Step2: Sometimes it's useful to have a zero value in the dataset, e.g. for padding
Step3: Map from chars to indices and back again
Step4: idx will be the data we use from now on - it simply converts all the characters to their index (based on the mapping above)
Step5: 3 char model
Create inputs
Create a list of every 4th character, starting at the 0th, 1st, 2nd, then 3rd characters
Step6: Our inputs
Step7: Our output
Step8: The first 4 inputs and outputs
Step9: The number of latent factors to create (i.e. the size of the embedding matrix)
Step10: Create inputs and embedding outputs for each of our 3 character inputs
Step11: Create and train model
Pick a size for our hidden state
Step12: This is the 'green arrow' from our diagram - the layer operation from input to hidden.
Step13: Our first hidden activation is simply this function applied to the result of the embedding of the first character.
Step14: This is the 'orange arrow' from our diagram - the layer operation from hidden to hidden.
Step15: Our second and third hidden activations sum up the previous hidden state (after applying dense_hidden) to the new input state.
Step16: This is the 'blue arrow' from our diagram - the layer operation from hidden to output.
Step17: The third hidden state is the input to our output layer.
Step18: Test model
Step19: Our first RNN!
Create inputs
This is the size of our unrolled RNN.
Step20: For each of 0 through 7, create a list of every 8th character with that starting point. These will be the 8 inputs to our model.
Step21: Then create a list of the next character in each of these series. This will be the labels for our model.
Step22: So each column below is one series of 8 characters from the text.
Step23: ...and this is the next character after each sequence.
Step24: Create and train model
Step25: The first character of each sequence goes through dense_in(), to create our first hidden activations.
Step26: Then for each successive layer we combine the output of dense_in() on the next character with the output of dense_hidden() on the current hidden state, to create the new hidden state.
Step27: Putting the final hidden state through dense_out() gives us our output.
Step28: So now we can create our model.
Step29: Test model
Step30: Our first RNN with keras!
Step31: This is nearly exactly equivalent to the RNN we built ourselves in the previous section.
Step32: Returning sequences
Create inputs
To use a sequence model, we can leave our input unchanged - but we have to change our output to a sequence (of course!)
Here, c_out_dat is identical to c_in_dat, but moved across 1 character.
Step33: Reading down each column shows one set of inputs and outputs.
Step34: Create and train model
Step35: We're going to pass a vector of all zeros as our starting point - here's our input layers for that
Step36: Test model
Step37: Sequence model with keras
Step38: To convert our previous keras model into a sequence model, simply add the 'return_sequences=True' parameter, and add TimeDistributed() around our dense layer.
Step39: One-hot sequence model with keras
This is the keras version of the theano model that we're about to create.
Step40: Stateful model with keras
Step41: A stateful model is easy to create (just add "stateful=True") but harder to train. We had to add batchnorm and use LSTM to get reasonable results.
When using stateful in keras, you have to also add 'batch_input_shape' to the first layer, and fix the batch size there.
Step42: Since we're using a fixed batch shape, we have to ensure our inputs and outputs are an even multiple of the batch size.
Step43: Theano RNN
Step44: Using raw theano, we have to create our weight matrices and bias vectors ourselves - here are the functions we'll use to do so (using glorot initialization).
The return values are wrapped in shared(), which is how we tell theano that it can manage this data (copying it to and from the GPU as necessary).
Step45: We return the weights and biases together as a tuple. For the hidden weights, we'll use an identity initialization (as recommended by Hinton.)
Step46: Theano doesn't actually do any computations until we explicitly compile and evaluate the function (at which point it'll be turned into CUDA code and sent off to the GPU). So our job is to describe the computations that we'll want theano to do - the first step is to tell theano what inputs we'll be providing to our computation
Step47: Now we're ready to create our initial weight matrices.
Step48: Theano handles looping by using the GPU scan operation. We have to tell theano what to do at each step through the scan - this is the function we'll use, which does a single forward pass for one character
Step49: Now we can provide everything necessary for the scan operation, so we can set that up - we have to pass in the function to call at each step, the sequence to step through, the initial values of the outputs, and any other arguments to pass to the step function.
Step50: We can now calculate our loss function, and all of our gradients, with just a couple of lines of code!
Step51: We even have to show theano how to do SGD - so we set up this dictionary of updates to complete after every forward pass, which applies the standard SGD update rule to every weight.
Step52: We're finally ready to compile the function!
Step53: To use it, we simply loop through our input data, calling the function compiled above, and printing our progress from time to time.
Step54: Pure python RNN!
Set up basic functions
Now we're going to try to repeat the above theano RNN, using just pure python (and numpy). Which means, we have to do everything ourselves, including defining the basic functions of a neural net! Below are all of the definitions, along with tests to check that they give the same answers as theano. The functions ending in _d are the derivatives of each function.
Step55: We also have to define our own scan function. Since we're not worrying about running things in parallel, it's very simple to implement
Step56: ...for instance, scan on + is the cumulative sum.
Step57: Set up training
Let's now build the functions to do the forward and backward passes of our RNN. First, define our data and shape.
Step58: Here's the function to do a single forward pass of an RNN, for a single character.
Step59: We use scan to apply the above to a whole sequence of characters.
Step60: Now we can define the backward step. We use a loop to go through every element of the sequence. The derivatives are applying the chain rule to each step, and accumulating the gradients across the sequence.
Step61: Now we can set up our initial weight matrices. Note that we're not using bias at all in this example, in order to keep things simpler.
Step62: Our loop looks much like the theano loop in the previous section, except that we have to call the backwards step ourselves.
Step63: Keras GRU
Identical to the last keras rnn, but a GRU!
Step64: Theano GRU
Separate weights
The theano GRU looks just like the simple theano RNN, except for the use of the reset and update gates. Each of these gates requires its own hidden and input weights, so we add those to our weight matrices.
Step65: Here's the definition of a gate - it's just a sigmoid applied to the addition of the dot products of the input vectors.
Step66: Our step is nearly identical to before, except that we multiply our hidden state by our reset gate, and we update our hidden state based on the update gate.
Step67: Everything from here on is identical to our simple RNN in theano.
Step68: Combined weights
We can make the previous section simpler and faster by concatenating the hidden and input matrices and inputs together. We're not going to step through this cell by cell - you'll see it's identical to the previous section except for this concatenation. | Python Code:
from theano.sandbox import cuda
%matplotlib inline
import utils; reload(utils)
from utils import *
from __future__ import division, print_function
Explanation: Table of Contents
<p><div class="lev2 toc-item"><a href="#Setup" data-toc-modified-id="Setup-01"><span class="toc-item-num">0.1 </span>Setup</a></div><div class="lev2 toc-item"><a href="#3-char-model" data-toc-modified-id="3-char-model-02"><span class="toc-item-num">0.2 </span>3 char model</a></div><div class="lev3 toc-item"><a href="#Create-inputs" data-toc-modified-id="Create-inputs-021"><span class="toc-item-num">0.2.1 </span>Create inputs</a></div><div class="lev3 toc-item"><a href="#Create-and-train-model" data-toc-modified-id="Create-and-train-model-022"><span class="toc-item-num">0.2.2 </span>Create and train model</a></div><div class="lev3 toc-item"><a href="#Test-model" data-toc-modified-id="Test-model-023"><span class="toc-item-num">0.2.3 </span>Test model</a></div><div class="lev2 toc-item"><a href="#Our-first-RNN!" data-toc-modified-id="Our-first-RNN!-03"><span class="toc-item-num">0.3 </span>Our first RNN!</a></div><div class="lev3 toc-item"><a href="#Create-inputs" data-toc-modified-id="Create-inputs-031"><span class="toc-item-num">0.3.1 </span>Create inputs</a></div><div class="lev3 toc-item"><a href="#Create-and-train-model" data-toc-modified-id="Create-and-train-model-032"><span class="toc-item-num">0.3.2 </span>Create and train model</a></div><div class="lev3 toc-item"><a href="#Test-model" data-toc-modified-id="Test-model-033"><span class="toc-item-num">0.3.3 </span>Test model</a></div><div class="lev2 toc-item"><a href="#Our-first-RNN-with-keras!" data-toc-modified-id="Our-first-RNN-with-keras!-04"><span class="toc-item-num">0.4 </span>Our first RNN with keras!</a></div><div class="lev2 toc-item"><a href="#Returning-sequences" data-toc-modified-id="Returning-sequences-05"><span class="toc-item-num">0.5 </span>Returning sequences</a></div><div class="lev3 toc-item"><a href="#Create-inputs" data-toc-modified-id="Create-inputs-051"><span class="toc-item-num">0.5.1 </span>Create inputs</a></div><div class="lev3 toc-item"><a href="#Create-and-train-model" data-toc-modified-id="Create-and-train-model-052"><span class="toc-item-num">0.5.2 </span>Create and train model</a></div><div class="lev3 toc-item"><a href="#Test-model" data-toc-modified-id="Test-model-053"><span class="toc-item-num">0.5.3 </span>Test model</a></div><div class="lev3 toc-item"><a href="#Sequence-model-with-keras" data-toc-modified-id="Sequence-model-with-keras-054"><span class="toc-item-num">0.5.4 </span>Sequence model with keras</a></div><div class="lev3 toc-item"><a href="#One-hot-sequence-model-with-keras" data-toc-modified-id="One-hot-sequence-model-with-keras-055"><span class="toc-item-num">0.5.5 </span>One-hot sequence model with keras</a></div><div class="lev2 toc-item"><a href="#Stateful-model-with-keras" data-toc-modified-id="Stateful-model-with-keras-06"><span class="toc-item-num">0.6 </span>Stateful model with keras</a></div><div class="lev2 toc-item"><a href="#Theano-RNN" data-toc-modified-id="Theano-RNN-07"><span class="toc-item-num">0.7 </span>Theano RNN</a></div><div class="lev2 toc-item"><a href="#Pure-python-RNN!" 
data-toc-modified-id="Pure-python-RNN!-08"><span class="toc-item-num">0.8 </span>Pure python RNN!</a></div><div class="lev3 toc-item"><a href="#Set-up-basic-functions" data-toc-modified-id="Set-up-basic-functions-081"><span class="toc-item-num">0.8.1 </span>Set up basic functions</a></div><div class="lev3 toc-item"><a href="#Set-up-training" data-toc-modified-id="Set-up-training-082"><span class="toc-item-num">0.8.2 </span>Set up training</a></div><div class="lev2 toc-item"><a href="#Keras-GRU" data-toc-modified-id="Keras-GRU-09"><span class="toc-item-num">0.9 </span>Keras GRU</a></div><div class="lev2 toc-item"><a href="#Theano-GRU" data-toc-modified-id="Theano-GRU-010"><span class="toc-item-num">0.10 </span>Theano GRU</a></div><div class="lev3 toc-item"><a href="#Separate-weights" data-toc-modified-id="Separate-weights-0101"><span class="toc-item-num">0.10.1 </span>Separate weights</a></div><div class="lev3 toc-item"><a href="#Combined-weights" data-toc-modified-id="Combined-weights-0102"><span class="toc-item-num">0.10.2 </span>Combined weights</a></div><div class="lev3 toc-item"><a href="#End" data-toc-modified-id="End-0103"><span class="toc-item-num">0.10.3 </span>End</a></div>
End of explanation
path = get_file('nietzsche.txt', origin="https://s3.amazonaws.com/text-datasets/nietzsche.txt")
text = open(path).read()
print('corpus length:', len(text))
chars = sorted(list(set(text)))
vocab_size = len(chars)+1
print('total chars:', vocab_size)
Explanation: Setup
We're going to download the collected works of Nietzsche to use as our data for this class.
End of explanation
chars.insert(0, "\0")
''.join(chars[1:-6])
Explanation: Sometimes it's useful to have a zero value in the dataset, e.g. for padding
End of explanation
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
Explanation: Map from chars to indices and back again
End of explanation
idx = [char_indices[c] for c in text]
idx[:10]
''.join(indices_char[i] for i in idx[:70])
Explanation: idx will be the data we use from now on - it simply converts all the characters to their index (based on the mapping above)
End of explanation
cs=3
c1_dat = [idx[i] for i in xrange(0, len(idx)-1-cs, cs)]
c2_dat = [idx[i+1] for i in xrange(0, len(idx)-1-cs, cs)]
c3_dat = [idx[i+2] for i in xrange(0, len(idx)-1-cs, cs)]
c4_dat = [idx[i+3] for i in xrange(0, len(idx)-1-cs, cs)]
Explanation: 3 char model
Create inputs
Create a list of every 4th character, starting at the 0th, 1st, 2nd, then 3rd characters
End of explanation
x1 = np.stack(c1_dat[:-2])
x2 = np.stack(c2_dat[:-2])
x3 = np.stack(c3_dat[:-2])
Explanation: Our inputs
End of explanation
y = np.stack(c4_dat[:-2])
Explanation: Our output
End of explanation
x1[:4], x2[:4], x3[:4]
y[:4]
x1.shape, y.shape
Explanation: The first 4 inputs and outputs
End of explanation
n_fac = 42
Explanation: The number of latent factors to create (i.e. the size of the embedding matrix)
End of explanation
def embedding_input(name, n_in, n_out):
inp = Input(shape=(1,), dtype='int64', name=name)
emb = Embedding(n_in, n_out, input_length=1)(inp)
return inp, Flatten()(emb)
c1_in, c1 = embedding_input('c1', vocab_size, n_fac)
c2_in, c2 = embedding_input('c2', vocab_size, n_fac)
c3_in, c3 = embedding_input('c3', vocab_size, n_fac)
Explanation: Create inputs and embedding outputs for each of our 3 character inputs
End of explanation
n_hidden = 256
Explanation: Create and train model
Pick a size for our hidden state
End of explanation
dense_in = Dense(n_hidden, activation='relu')
Explanation: This is the 'green arrow' from our diagram - the layer operation from input to hidden.
End of explanation
c1_hidden = dense_in(c1)
Explanation: Our first hidden activation is simply this function applied to the result of the embedding of the first character.
End of explanation
dense_hidden = Dense(n_hidden, activation='tanh')
Explanation: This is the 'orange arrow' from our diagram - the layer operation from hidden to hidden.
End of explanation
c2_dense = dense_in(c2)
hidden_2 = dense_hidden(c1_hidden)
c2_hidden = merge([c2_dense, hidden_2])
c3_dense = dense_in(c3)
hidden_3 = dense_hidden(c2_hidden)
c3_hidden = merge([c3_dense, hidden_3])
Explanation: Our second and third hidden activations sum up the previous hidden state (after applying dense_hidden) to the new input state.
End of explanation
dense_out = Dense(vocab_size, activation='softmax')
Explanation: This is the 'blue arrow' from our diagram - the layer operation from hidden to output.
End of explanation
c4_out = dense_out(c3_hidden)
model = Model([c1_in, c2_in, c3_in], c4_out)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.optimizer.lr=0.000001
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4, verbose=2)
model.optimizer.lr=0.01
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4, verbose=2)
model.optimizer.lr = 0.000001
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4, verbose=2)
model.optimizer.lr = 0.01
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4, verbose=2)
Explanation: The third hidden state is the input to our output layer.
End of explanation
def get_next(inp):
idxs = [char_indices[c] for c in inp]
arrs = [np.array(i)[np.newaxis] for i in idxs]
p = model.predict(arrs)
i = np.argmax(p)
return chars[i]
get_next('phi')
get_next(' th')
get_next(' an')
Explanation: Test model
End of explanation
cs=8
Explanation: Our first RNN!
Create inputs
This is the size of our unrolled RNN.
End of explanation
c_in_dat = [[idx[i+n] for i in xrange(0, len(idx)-1-cs, cs)]
for n in range(cs)]
Explanation: For each of 0 through 7, create a list of every 8th character with that starting point. These will be the 8 inputs to our model.
End of explanation
c_out_dat = [idx[i+cs] for i in xrange(0, len(idx)-1-cs, cs)]
xs = [np.stack(c[:-2]) for c in c_in_dat]
len(xs), xs[0].shape
y = np.stack(c_out_dat[:-2])
Explanation: Then create a list of the next character in each of these series. This will be the labels for our model.
End of explanation
[xs[n][:cs] for n in range(cs)]
Explanation: So each column below is one series of 8 characters from the text.
End of explanation
y[:cs]
n_fac = 42
Explanation: ...and this is the next character after each sequence.
End of explanation
def embedding_input(name, n_in, n_out):
inp = Input(shape=(1,), dtype='int64', name=name+'_in')
emb = Embedding(n_in, n_out, input_length=1, name=name+'_emb')(inp)
return inp, Flatten()(emb)
c_ins = [embedding_input('c'+str(n), vocab_size, n_fac) for n in range(cs)]
n_hidden = 256
dense_in = Dense(n_hidden, activation='relu')
dense_hidden = Dense(n_hidden, activation='relu', init='identity')
dense_out = Dense(vocab_size, activation='softmax')
Explanation: Create and train model
End of explanation
hidden = dense_in(c_ins[0][1])
Explanation: The first character of each sequence goes through dense_in(), to create our first hidden activations.
End of explanation
for i in range(1,cs):
c_dense = dense_in(c_ins[i][1])
hidden = dense_hidden(hidden)
hidden = merge([c_dense, hidden])
Explanation: Then for each successive layer we combine the output of dense_in() on the next character with the output of dense_hidden() on the current hidden state, to create the new hidden state.
End of explanation
c_out = dense_out(hidden)
Explanation: Putting the final hidden state through dense_out() gives us our output.
End of explanation
model = Model([c[0] for c in c_ins], c_out)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.fit(xs, y, batch_size=64, nb_epoch=12, verbose=2)
Explanation: So now we can create our model.
End of explanation
def get_next(inp):
idxs = [np.array(char_indices[c])[np.newaxis] for c in inp]
p = model.predict(idxs)
return chars[np.argmax(p)]
get_next('for thos')
get_next('part of ')
get_next('queens a')
Explanation: Test model
End of explanation
n_hidden, n_fac, cs, vocab_size = (256, 42, 8, 86)
Explanation: Our first RNN with keras!
End of explanation
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs),
SimpleRNN(n_hidden, activation='relu', inner_init='identity'),
Dense(vocab_size, activation='softmax')
])
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.fit(np.concatenate(xs,axis=1), y, batch_size=64, nb_epoch=8, verbose=2)
def get_next_keras(inp):
idxs = [char_indices[c] for c in inp]
arrs = np.array(idxs)[np.newaxis,:]
p = model.predict(arrs)[0]
return chars[np.argmax(p)]
get_next_keras('this is ')
get_next_keras('part of ')
get_next_keras('queens a')
Explanation: This is nearly exactly equivalent to the RNN we built ourselves in the previous section.
End of explanation
#c_in_dat = [[idx[i+n] for i in xrange(0, len(idx)-1-cs, cs)]
# for n in range(cs)]
c_out_dat = [[idx[i+n] for i in xrange(1, len(idx)-cs, cs)]
for n in range(cs)]
ys = [np.stack(c[:-2]) for c in c_out_dat]
Explanation: Returning sequences
Create inputs
To use a sequence model, we can leave our input unchanged - but we have to change our output to a sequence (of course!)
Here, c_out_dat is identical to c_in_dat, but moved across 1 character.
End of explanation
[xs[n][:cs] for n in range(cs)]
[ys[n][:cs] for n in range(cs)]
Explanation: Reading down each column shows one set of inputs and outputs.
End of explanation
dense_in = Dense(n_hidden, activation='relu')
dense_hidden = Dense(n_hidden, activation='relu', init='identity')
dense_out = Dense(vocab_size, activation='softmax', name='output')
Explanation: Create and train model
End of explanation
inp1 = Input(shape=(n_fac,), name='zeros')
hidden = dense_in(inp1)
outs = []
for i in range(cs):
c_dense = dense_in(c_ins[i][1])
hidden = dense_hidden(hidden)
hidden = merge([c_dense, hidden], mode='sum')
# every layer now has an output
outs.append(dense_out(hidden))
model = Model([inp1] + [c[0] for c in c_ins], outs)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
zeros = np.tile(np.zeros(n_fac), (len(xs[0]),1))
zeros.shape
model.fit([zeros]+xs, ys, batch_size=64, nb_epoch=12, verbose=2)
Explanation: We're going to pass a vector of all zeros as our starting point - here's our input layers for that:
End of explanation
def get_nexts(inp):
idxs = [char_indices[c] for c in inp]
arrs = [np.array(i)[np.newaxis] for i in idxs]
p = model.predict([np.zeros(n_fac)[np.newaxis,:]] + arrs)
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts(' this is')
get_nexts(' part of')
Explanation: Test model
End of explanation
n_hidden, n_fac, cs, vocab_size
Explanation: Sequence model with keras
End of explanation
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs),
SimpleRNN(n_hidden, return_sequences=True, activation='relu', inner_init='identity'),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
xs[0].shape
x_rnn=np.stack(xs, axis=1)
y_rnn=np.expand_dims(np.stack(ys, axis=1), -1)
x_rnn.shape, y_rnn.shape
model.fit(x_rnn[:,:,0], y_rnn[:,:,0], batch_size=64, nb_epoch=8, verbose=2)
def get_nexts_keras(inp):
idxs = [char_indices[c] for c in inp]
arr = np.array(idxs)[np.newaxis,:]
p = model.predict(arr)[0]
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts_keras(' this is')
Explanation: To convert our previous keras model into a sequence model, simply add the 'return_sequences=True' parameter, and add TimeDistributed() around our dense layer.
End of explanation
model=Sequential([
SimpleRNN(n_hidden, return_sequences=True, input_shape=(cs, vocab_size),
activation='relu', inner_init='identity'),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.compile(loss='categorical_crossentropy', optimizer=Adam())
oh_ys = [to_categorical(o, vocab_size) for o in ys]
oh_y_rnn=np.stack(oh_ys, axis=1)
oh_xs = [to_categorical(o, vocab_size) for o in xs]
oh_x_rnn=np.stack(oh_xs, axis=1)
oh_x_rnn.shape, oh_y_rnn.shape
model.fit(oh_x_rnn, oh_y_rnn, batch_size=64, nb_epoch=8, verbose=2)
def get_nexts_oh(inp):
idxs = np.array([char_indices[c] for c in inp])
arr = to_categorical(idxs, vocab_size)
p = model.predict(arr[np.newaxis,:])[0]
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts_oh(' this is')
Explanation: One-hot sequence model with keras
This is the keras version of the theano model that we're about to create.
End of explanation
bs=64
Explanation: Stateful model with keras
End of explanation
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs, batch_input_shape=(bs,8)),
BatchNormalization(),
LSTM(n_hidden, return_sequences=True, stateful=True),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
Explanation: A stateful model is easy to create (just add "stateful=True") but harder to train. We had to add batchnorm and use LSTM to get reasonable results.
When using stateful in keras, you have to also add 'batch_input_shape' to the first layer, and fix the batch size there.
End of explanation
mx = len(x_rnn)//bs*bs
model.fit(x_rnn[:mx, :, 0], y_rnn[:mx, :, :, 0], batch_size=bs, nb_epoch=4, shuffle=False, verbose=2)
model.optimizer.lr=1e-4
model.fit(x_rnn[:mx, :, 0], y_rnn[:mx, :, :, 0], batch_size=bs, nb_epoch=4, shuffle=False, verbose=2)
model.fit(x_rnn[:mx, :, 0], y_rnn[:mx, :, :, 0], batch_size=bs, nb_epoch=4, shuffle=False, verbose=2)
Explanation: Since we're using a fixed batch shape, we have to ensure our inputs and outputs are an even multiple of the batch size.
End of explanation
n_input = vocab_size
n_output = vocab_size
Explanation: Theano RNN
End of explanation
def init_wgts(rows, cols):
scale = math.sqrt(2/rows)
return shared(normal(scale=scale, size=(rows, cols)).astype(np.float32))
def init_bias(rows):
return shared(np.zeros(rows, dtype=np.float32))
Explanation: Using raw theano, we have to create our weight matrices and bias vectors ourselves - here are the functions we'll use to do so (using glorot initialization).
The return values are wrapped in shared(), which is how we tell theano that it can manage this data (copying it to and from the GPU as necessary).
End of explanation
def wgts_and_bias(n_in, n_out):
return init_wgts(n_in, n_out), init_bias(n_out)
def id_and_bias(n):
return shared(np.eye(n, dtype=np.float32)), init_bias(n)
Explanation: We return the weights and biases together as a tuple. For the hidden weights, we'll use an identity initialization (as recommended by Hinton.)
End of explanation
t_inp = T.matrix('inp')
t_outp = T.matrix('outp')
t_h0 = T.vector('h0')
lr = T.scalar('lr')
all_args = [t_h0, t_inp, t_outp, lr]
Explanation: Theano doesn't actually do any computations until we explicitly compile and evaluate the function (at which point it'll be turned into CUDA code and sent off to the GPU). So our job is to describe the computations that we'll want theano to do - the first step is to tell theano what inputs we'll be providing to our computation:
End of explanation
W_h = id_and_bias(n_hidden)
W_x = wgts_and_bias(n_input, n_hidden)
W_y = wgts_and_bias(n_hidden, n_output)
w_all = list(chain.from_iterable([W_h, W_x, W_y]))
Explanation: Now we're ready to create our initial weight matrices.
End of explanation
def step(x, h, W_h, b_h, W_x, b_x, W_y, b_y):
# Calculate the hidden activations
h = nnet.relu(T.dot(x, W_x) + b_x + T.dot(h, W_h) + b_h)
# Calculate the output activations
y = nnet.softmax(T.dot(h, W_y) + b_y)
# Return both (the 'Flatten()' is to work around a theano bug)
return h, T.flatten(y, 1)
Explanation: Theano handles looping by using the GPU scan operation. We have to tell theano what to do at each step through the scan - this is the function we'll use, which does a single forward pass for one character:
End of explanation
[v_h, v_y], _ = theano.scan(step, sequences=t_inp,
outputs_info=[t_h0, None], non_sequences=w_all)
Explanation: Now we can provide everything necessary for the scan operation, so we can setup that up - we have to pass in the function to call at each step, the sequence to step through, the initial values of the outputs, and any other arguments to pass to the step function.
End of explanation
error = nnet.categorical_crossentropy(v_y, t_outp).sum()
g_all = T.grad(error, w_all)
Explanation: We can now calculate our loss function, and all of our gradients, with just a couple of lines of code!
End of explanation
def upd_dict(wgts, grads, lr):
return OrderedDict({w: w-g*lr for (w,g) in zip(wgts,grads)})
upd = upd_dict(w_all, g_all, lr)
Explanation: We even have to show theano how to do SGD - so we set up this dictionary of updates to complete after every forward pass, which applies the standard SGD update rule to every weight.
End of explanation
fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)
X = oh_x_rnn
Y = oh_y_rnn
X.shape, Y.shape
Explanation: We're finally ready to compile the function!
End of explanation
err=0.0; l_rate=0.01
for i in range(len(X)):
err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)
if i % 1000 == 999:
print ("Error:{:.3f}".format(err/1000))
err=0.0
f_y = theano.function([t_h0, t_inp], v_y, allow_input_downcast=True)
pred = np.argmax(f_y(np.zeros(n_hidden), X[6]), axis=1)
act = np.argmax(X[6], axis=1)
[indices_char[o] for o in act]
[indices_char[o] for o in pred]
Explanation: To use it, we simply loop through our input data, calling the function compiled above, and printing our progress from time to time.
End of explanation
def sigmoid(x): return 1/(1+np.exp(-x))
def sigmoid_d(x):
output = sigmoid(x)
return output*(1-output)
def relu(x): return np.maximum(0., x)
def relu_d(x): return (x > 0.)*1.
relu(np.array([3.,-3.])), relu_d(np.array([3.,-3.]))
def dist(a,b): return pow(a-b,2)
def dist_d(a,b): return 2*(a-b)
import pdb
eps = 1e-7
def x_entropy(pred, actual):
return -np.sum(actual * np.log(np.clip(pred, eps, 1-eps)))
def x_entropy_d(pred, actual): return -actual/pred
def softmax(x): return np.exp(x)/np.exp(x).sum()
def softmax_d(x):
sm = softmax(x)
res = np.expand_dims(-sm,-1)*sm
res[np.diag_indices_from(res)] = sm*(1-sm)
return res
test_preds = np.array([0.2,0.7,0.1])
test_actuals = np.array([0.,1.,0.])
nnet.categorical_crossentropy(test_preds, test_actuals).eval()
x_entropy(test_preds, test_actuals)
test_inp = T.dvector()
test_out = nnet.categorical_crossentropy(test_inp, test_actuals)
test_grad = theano.function([test_inp], T.grad(test_out, test_inp))
test_grad(test_preds)
x_entropy_d(test_preds, test_actuals)
pre_pred = random(oh_x_rnn[0][0].shape)
preds = softmax(pre_pred)
actual = oh_x_rnn[0][0]
loss_d=x_entropy_d
np.allclose(softmax_d(pre_pred).dot(loss_d(preds,actual)), preds-actual)
softmax(test_preds)
nnet.softmax(test_preds).eval()
test_out = T.flatten(nnet.softmax(test_inp))
test_grad = theano.function([test_inp], theano.gradient.jacobian(test_out, test_inp))
test_grad(test_preds)
softmax_d(test_preds)
act=relu
act_d = relu_d
loss=x_entropy
Explanation: Pure python RNN!
Set up basic functions
Now we're going to try to repeat the above theano RNN, using just pure python (and numpy). Which means, we have to do everything ourselves, including defining the basic functions of a neural net! Below are all of the definitions, along with tests to check that they give the same answers as theano. The functions ending in _d are the derivatives of each function.
End of explanation
def scan(fn, start, seq):
res = []
prev = start
for s in seq:
app = fn(prev, s)
res.append(app)
prev = app
return res
Explanation: We also have to define our own scan function. Since we're not worrying about running things in parallel, it's very simple to implement:
End of explanation
scan(lambda prev,curr: prev+curr, 0, range(5))
Explanation: ...for instance, scan on + is the cumulative sum.
End of explanation
inp = oh_x_rnn
outp = oh_y_rnn
n_input = vocab_size
n_output = vocab_size
inp.shape, outp.shape
Explanation: Set up training
Let's now build the functions to do the forward and backward passes of our RNN. First, define our data and shape.
End of explanation
def one_char(prev, item):
# Previous state
tot_loss, pre_hidden, pre_pred, hidden, ypred = prev
# Current inputs and output
x, y = item
pre_hidden = np.dot(x,w_x) + np.dot(hidden,w_h)
hidden = act(pre_hidden)
pre_pred = np.dot(hidden,w_y)
ypred = softmax(pre_pred)
return (
# Keep track of loss so we can report it
tot_loss+loss(ypred, y),
# Used in backprop
pre_hidden, pre_pred,
# Used in next iteration
hidden,
# To provide predictions
ypred)
Explanation: Here's the function to do a single forward pass of an RNN, for a single character.
End of explanation
def get_chars(n): return zip(inp[n], outp[n])
def one_fwd(n): return scan(one_char, (0,0,0,np.zeros(n_hidden),0), get_chars(n))
Explanation: We use scan to apply the above to a whole sequence of characters.
End of explanation
# "Columnify" a vector
def col(x): return x[:,newaxis]
def one_bkwd(args, n):
global w_x,w_y,w_h
i=inp[n] # 8x86
o=outp[n] # 8x86
d_pre_hidden = np.zeros(n_hidden) # 256
for p in reversed(range(len(i))):
totloss, pre_hidden, pre_pred, hidden, ypred = args[p]
x=i[p] # 86
y=o[p] # 86
d_pre_pred = softmax_d(pre_pred).dot(loss_d(ypred,y)) # 86
d_pre_hidden = (np.dot(d_pre_hidden, w_h.T)
+ np.dot(d_pre_pred,w_y.T)) * act_d(pre_hidden) # 256
# d(loss)/d(w_y) = d(loss)/d(pre_pred) * d(pre_pred)/d(w_y)
w_y -= col(hidden) * d_pre_pred * alpha
# d(loss)/d(w_h) = d(loss)/d(pre_hidden[p-1]) * d(pre_hidden[p-1])/d(w_h)
if (p>0): w_h -= args[p-1][3].dot(d_pre_hidden) * alpha
w_x -= col(x)*d_pre_hidden * alpha
return d_pre_hidden
Explanation: Now we can define the backward step. We use a loop to go through every element of the sequence. The derivatives are applying the chain rule to each step, and accumulating the gradients across the sequence.
End of explanation
scale=math.sqrt(2./n_input)
w_x = normal(scale=scale, size=(n_input,n_hidden))
w_y = normal(scale=scale, size=(n_hidden, n_output))
w_h = np.eye(n_hidden, dtype=np.float32)
Explanation: Now we can set up our initial weight matrices. Note that we're not using bias at all in this example, in order to keep things simpler.
End of explanation
overallError=0
alpha=0.0001
for n in range(10000):
res = one_fwd(n)
overallError+=res[-1][0]
deriv = one_bkwd(res, n)
if(n % 1000 == 999):
print ("Error:{:.4f}; Gradient:{:.5f}".format(
overallError/1000, np.linalg.norm(deriv)))
overallError=0
Explanation: Our loop looks much like the theano loop in the previous section, except that we have to call the backwards step ourselves.
End of explanation
model=Sequential([
GRU(n_hidden, return_sequences=True, input_shape=(cs, vocab_size),
activation='relu', inner_init='identity'),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.compile(loss='categorical_crossentropy', optimizer=Adam())
model.fit(oh_x_rnn, oh_y_rnn, batch_size=64, nb_epoch=8, verbose=2)
get_nexts_oh(' this is')
Explanation: Keras GRU
Identical to the last keras rnn, but a GRU!
End of explanation
W_h = id_and_bias(n_hidden)
W_x = init_wgts(n_input, n_hidden)
W_y = wgts_and_bias(n_hidden, n_output)
rW_h = init_wgts(n_hidden, n_hidden)
rW_x = wgts_and_bias(n_input, n_hidden)
uW_h = init_wgts(n_hidden, n_hidden)
uW_x = wgts_and_bias(n_input, n_hidden)
w_all = list(chain.from_iterable([W_h, W_y, uW_x, rW_x]))
w_all.extend([W_x, uW_h, rW_h])
Explanation: Theano GRU
Separate weights
The theano GRU looks just like the simple theano RNN, except for the use of the reset and update gates. Each of these gates requires its own hidden and input weights, so we add those to our weight matrices.
End of explanation
def gate(x, h, W_h, W_x, b_x):
return nnet.sigmoid(T.dot(x, W_x) + b_x + T.dot(h, W_h))
Explanation: Here's the definition of a gate - it's just a sigmoid applied to the addition of the dot products of the input vectors.
End of explanation
def step(x, h, W_h, b_h, W_y, b_y, uW_x, ub_x, rW_x, rb_x, W_x, uW_h, rW_h):
reset = gate(x, h, rW_h, rW_x, rb_x)
update = gate(x, h, uW_h, uW_x, ub_x)
h_new = gate(x, h * reset, W_h, W_x, b_h)
h = update*h + (1-update)*h_new
y = nnet.softmax(T.dot(h, W_y) + b_y)
return h, T.flatten(y, 1)
Explanation: Our step is nearly identical to before, except that we multiply our hidden state by our reset gate, and we update our hidden state based on the update gate.
End of explanation
[v_h, v_y], _ = theano.scan(step, sequences=t_inp,
outputs_info=[t_h0, None], non_sequences=w_all)
error = nnet.categorical_crossentropy(v_y, t_outp).sum()
g_all = T.grad(error, w_all)
upd = upd_dict(w_all, g_all, lr)
fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)
err=0.0; l_rate=0.1
for i in range(len(X)):
err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)
if i % 1000 == 999:
l_rate *= 0.95
print ("Error:{:.2f}".format(err/1000))
err=0.0
Explanation: Everything from here on is identical to our simple RNN in theano.
End of explanation
W = (shared(np.concatenate([np.eye(n_hidden), normal(size=(n_input, n_hidden))])
.astype(np.float32)), init_bias(n_hidden))
rW = wgts_and_bias(n_input+n_hidden, n_hidden)
uW = wgts_and_bias(n_input+n_hidden, n_hidden)
W_y = wgts_and_bias(n_hidden, n_output)
w_all = list(chain.from_iterable([W, W_y, uW, rW]))
def gate(m, W, b): return nnet.sigmoid(T.dot(m, W) + b)
def step(x, h, W, b, W_y, b_y, uW, ub, rW, rb):
m = T.concatenate([h, x])
reset = gate(m, rW, rb)
update = gate(m, uW, ub)
m = T.concatenate([h*reset, x])
h_new = gate(m, W, b)
h = update*h + (1-update)*h_new
y = nnet.softmax(T.dot(h, W_y) + b_y)
return h, T.flatten(y, 1)
[v_h, v_y], _ = theano.scan(step, sequences=t_inp,
outputs_info=[t_h0, None], non_sequences=w_all)
def upd_dict(wgts, grads, lr):
return OrderedDict({w: w-g*lr for (w,g) in zip(wgts,grads)})
error = nnet.categorical_crossentropy(v_y, t_outp).sum()
g_all = T.grad(error, w_all)
upd = upd_dict(w_all, g_all, lr)
fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)
err=0.0; l_rate=0.01
for i in range(len(X)):
err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)
if i % 1000 == 999:
print ("Error:{:.2f}".format(err/1000))
err=0.0
Explanation: Combined weights
We can make the previous section simpler and faster by concatenating the hidden and input matrices and inputs together. We're not going to step through this cell by cell - you'll see it's identical to the previous section except for this concatenation.
End of explanation |
15,502 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyGSLIB
QQ and PP plots
Step1: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
Step2: QQ-Plot | Python Code:
#general imports
import pygslib
Explanation: PyGSLIB
QQ and PP plots
End of explanation
#get the data in gslib format into a pandas Dataframe
cluster= pygslib.gslib.read_gslib_file('../datasets/cluster.dat')
true= pygslib.gslib.read_gslib_file('../datasets/true.dat')
true['Declustering Weight'] = 1
Explanation: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
End of explanation
npoints = len(cluster['Primary'])
true['Declustering Weight'] = 1
#using declustering weight
parameters_qpplt = {
# gslib parameters for qq-pp calculation
'qqorpp': 0, # integer (Optional, default 0, Q-Q plot). Q-Q plot (qqorpp=0); P-P plot (qqorpp=1)
#'npts' : None, # integer (Optional, default min length of va1 and va2). Number of points to use on the Q-Q or P-P plot (should not exceed the smallest number of data in data1 / data2
'va1' : cluster['Primary'], # rank-1 array('d') with bounds (nd). Variable 1
'wt1' : cluster['Declustering Weight'], # rank-1 array('d') with bounds (nd) (Optional, set to array of ones). Declustering weight for variable 1.
'va2' : true['Primary'], # rank-1 array('d') with bounds (nd). Variable 2
'wt2' : true['Declustering Weight'], # rank-1 array('d') with bounds (nd) (Optional, set to array of ones). Declustering weight for variable 2.
# visual parameters for figure (if a new figure is created)
#'figure' : None, # a bokeh figure object (Optional: new figure created if None). Set none or undefined if creating a new figure.
#'title' : None, # string (Optional, "QQ plot" or "PP plot"). Figure title
#'xlabel' : 'Z1', # string (Optional, default "Z1" or "P1"). X axis label
#'ylabel' : 'Z2', # string (Optional, default "Z2" or "P2"). Y axis label
#'xlog' : True, # boolean (Optional, default True). If true plot X axis in log sale.
#'ylog' : True, # boolean (Optional, default True). If true plot Y axis in log sale.
# visual parameter for the probplt
#'style' : None, # string with valid bokeh chart type
'color' : 'black', # string with valid CSS colour (https://www.w3schools.com/colors/colors_names.asp), or an RGB(A) hex value, or tuple of integers (r,g,b), or tuple of (r,g,b,a) (Optional, default "navy")
'legend': 'Declustered', # string (Optional, default "NA").
#'alpha' : None, # float [0-1] (Optional, default 0.5). Transparency of the fill colour
#'lwidth': None, # float (Optional, default 1). Line width
# leyend
'legendloc': None} # float (Optional, default 'bottom_right'). Any of top_left, top_center, top_right, center_right, bottom_right, bottom_center, bottom_left, center_left
# Calculate the declustered qq plot (weights = declustering weights)
results, fig = pygslib.plothtml.qpplt(parameters_qpplt)
# Calculate the naive (clustered) qq plot
# a) get array of ones as weights
cluster['naive']= cluster['Declustering Weight'].values*0 +1
# update parameter dic
parameters_qpplt['wt1'] = cluster['naive']
parameters_qpplt['color'] = 'blue'
parameters_qpplt['legend']='Clustered'
results, fig = pygslib.plothtml.qpplt(parameters_qpplt)
# show the plot
pygslib.plothtml.show(fig)
Explanation: QQ-Plot
End of explanation |
15,503 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Running Code
First and foremost, the Jupyter Notebook is an interactive environment for writing and running code. The notebook is capable of running code in a wide range of languages. However, each notebook is associated with a single kernel. This notebook is associated with the IPython kernel and therefore runs Python code.
Code cells allow you to enter and run code
Run a code cell using Shift-Enter or pressing the <button class='btn btn-default btn-xs'><i class="icon-step-forward fa fa-step-forward"></i></button> button in the toolbar above
Step1: There are two other keyboard shortcuts for running code
Step2: If the Kernel dies you will be prompted to restart it. Here we call the low-level system libc.time routine with the wrong argument via
ctypes to segfault the Python interpreter
Step3: Cell menu
The "Cell" menu has a number of menu items for running code in different ways. These include
Step4: Output is asynchronous
All output is displayed asynchronously as it is generated in the Kernel. If you execute the next cell, you will see the output one piece at a time, not all at the end.
Step5: Large outputs
To better handle large outputs, the output area can be collapsed. Run the following cell and then single- or double- click on the active area to the left of the output
Step6: Beyond a certain point, output will scroll automatically | Python Code:
a = 10
print(a)
Explanation: Running Code
First and foremost, the Jupyter Notebook is an interactive environment for writing and running code. The notebook is capable of running code in a wide range of languages. However, each notebook is associated with a single kernel. This notebook is associated with the IPython kernel and therefore runs Python code.
Code cells allow you to enter and run code
Run a code cell using Shift-Enter or pressing the <button class='btn btn-default btn-xs'><i class="icon-step-forward fa fa-step-forward"></i></button> button in the toolbar above:
End of explanation
import time
time.sleep(10)
Explanation: There are two other keyboard shortcuts for running code:
Alt-Enter runs the current cell and inserts a new one below.
Ctrl-Enter runs the current cell and enters command mode.
Managing the Kernel
Code is run in a separate process called the Kernel. The Kernel can be interrupted or restarted. Try running the following cell and then hit the <button class='btn btn-default btn-xs'><i class='icon-stop fa fa-stop'></i></button> button in the toolbar above.
End of explanation
import sys
from ctypes import CDLL
# This will crash a Linux or Mac system
# equivalent calls can be made on Windows
# Uncomment these lines if you would like to see the segfault
# dll = 'dylib' if sys.platform == 'darwin' else 'so.6'
# libc = CDLL("libc.%s" % dll)
# libc.time(-1) # BOOM!!
Explanation: If the Kernel dies you will be prompted to restart it. Here we call the low-level system libc.time routine with the wrong argument via
ctypes to segfault the Python interpreter:
End of explanation
print("hi, stdout")
from __future__ import print_function
print('hi, stderr', file=sys.stderr)
Explanation: Cell menu
The "Cell" menu has a number of menu items for running code in different ways. These include:
Run and Select Below
Run and Insert Below
Run All
Run All Above
Run All Below
Restarting the kernels
The kernel maintains the state of a notebook's computations. You can reset this state by restarting the kernel. This is done by clicking on the <button class='btn btn-default btn-xs'><i class='fa fa-repeat icon-repeat'></i></button> in the toolbar above.
sys.stdout and sys.stderr
The stdout and stderr streams are displayed as text in the output area.
End of explanation
import time, sys
for i in range(8):
print(i)
time.sleep(0.5)
Explanation: Output is asynchronous
All output is displayed asynchronously as it is generated in the Kernel. If you execute the next cell, you will see the output one piece at a time, not all at the end.
End of explanation
for i in range(50):
print(i)
Explanation: Large outputs
To better handle large outputs, the output area can be collapsed. Run the following cell and then single- or double- click on the active area to the left of the output:
End of explanation
for i in range(50):
print(2**i - 1)
Explanation: Beyond a certain point, output will scroll automatically:
End of explanation |
15,504 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Loading CSV data
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Loading the data
To start, look at the top of the CSV file to see how it is formatted.
Step3: You can load the file with pandas and pass the NumPy arrays to TensorFlow. If you need to scale up to a large set of files, or need a loader that integrates with TensorFlow and tf.data, use the tf.data.experimental.make_csv_dataset function.
The only column you need to identify explicitly is the one with the value the model is supposed to predict.
Step4: Now read the CSV data from the file and create a dataset.
(For the full documentation, see tf.data.experimental.make_csv_dataset.)
Step5: Each item in the dataset is a batch, represented as a tuple of (many examples, many labels). The data from the examples is organized in column-based tensors rather than row-based tensors, each with as many elements as the batch size (5 in this case).
It might help to see this for yourself.
Step6: As you can see, the columns in the CSV are named. The dataset constructor picks these names up automatically. If the file you are working with does not contain the column names in the first line, pass them as a list of strings to the column_names argument of the make_csv_dataset function.
Step7: This example uses all of the available columns. If you need to omit some columns from the dataset, create a list of just the columns you plan to use and pass it into the (optional) select_columns argument of the constructor.
Step8: Data preprocessing
A CSV file can contain a variety of data types. Typically you want to convert from those mixed types to a fixed-length vector before feeding the data into your model.
TensorFlow has a built-in system for describing common input conversions. For details, see tf.feature_column and this tutorial.
Any tool you like (e.g.
Step9: Here is a simple function that will pack together all the columns.
Step10: Apply this function to each element of the dataset.
Step11: If you have mixed data types, you may want to separate out these simple-numeric fields. The tf.feature_column API can handle them, but this incurs some overhead and should be avoided unless really necessary. Switch back to the mixed dataset.
Step12: So define a more general preprocessor that selects a list of numeric features and packs them into a single column.
Step13: Data normalization
Continuous data should always be normalized.
Step14: Now create the numeric column. The tf.feature_column.numeric_column API accepts a normalizer_fn argument, which will be run on each batch.
Bind the MEAN and STD to the normalizer fn using functools.partial.
Step15: When you train the model, include this feature column to select and center this block of numeric data.
Step16: The mean-based normalization used here requires knowing the mean of each column ahead of time.
Categorical data
Some of the columns in the CSV data are categorical columns. That is, the content should be one of a limited set of options.
Use the tf.feature_column API to create a collection with a tf.feature_column.indicator_column for each categorical column.
Step17: This will become part of the data-processing input later, when you build the model.
Combined preprocessing layer
Add the two feature column collections and pass them to tf.keras.layers.DenseFeatures to create an input layer that will extract and preprocess both input types.
Step18: Building the model
Build a tf.keras.Sequential starting with the preprocessing_layer.
Step19: Train, evaluate, and predict
Now the model can be instantiated and trained.
Step20: Once the model is trained, you can check its accuracy on the test_data set.
Step21: Use tf.keras.Model.predict to infer labels on a batch or a dataset of batches.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import functools
import numpy as np
import tensorflow as tf
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
# Make numpy values easier to read.
np.set_printoptions(precision=3, suppress=True)
Explanation: Loading CSV data
<table class="tfo-notebook-buttons" align="left">
  <td><a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/csv"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
  <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
  <td><a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
  <td><a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
This tutorial provides an example of how to load CSV data from a file into a tf.data.Dataset.
The data used in this tutorial is taken from the Titanic passenger list. The model predicts a passenger's likelihood of survival based on characteristics such as age, sex, ticket class, and whether the person was travelling alone.
Setup
End of explanation
!head {train_file_path}
Explanation: Loading the data
To start, look at the top of the CSV file to see how it is formatted.
End of explanation
LABEL_COLUMN = 'survived'
LABELS = [0, 1]
Explanation: You can load the file with pandas and pass the NumPy arrays to TensorFlow. If you need to scale up to a large set of files, or need a loader that integrates with TensorFlow and tf.data, use the tf.data.experimental.make_csv_dataset function.
The only column you need to identify explicitly is the one with the value the model is supposed to predict.
End of explanation
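As an editor's aside (not part of the original tutorial): because this dataset easily fits in memory, the pandas route mentioned above can also be sketched directly; the column names used below are the ones listed later in CSV_COLUMNS, and only numeric columns are kept to keep the sketch minimal.
import pandas as pd  # assumed to be available alongside TensorFlow

# Minimal sketch: read the CSV eagerly, keep a few numeric columns, split off the
# label, and build a tf.data.Dataset from the in-memory arrays.
pandas_df = pd.read_csv(train_file_path)[['survived', 'age', 'n_siblings_spouses', 'parch', 'fare']]
pandas_labels = pandas_df.pop(LABEL_COLUMN)
pandas_ds = tf.data.Dataset.from_tensor_slices(
    (pandas_df.values.astype('float32'), pandas_labels.values))
for row, label in pandas_ds.take(2):
    print(row.numpy(), label.numpy())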
def get_dataset(file_path, **kwargs):
dataset = tf.data.experimental.make_csv_dataset(
file_path,
batch_size=5, # Artificially small to make examples easier to show.
label_name=LABEL_COLUMN,
na_value="?",
num_epochs=1,
ignore_errors=True,
**kwargs)
return dataset
raw_train_data = get_dataset(train_file_path)
raw_test_data = get_dataset(test_file_path)
def show_batch(dataset):
for batch, label in dataset.take(1):
for key, value in batch.items():
print("{:20s}: {}".format(key,value.numpy()))
Explanation: Now read the CSV data from the file and create a dataset.
(For the full documentation, see tf.data.experimental.make_csv_dataset.)
End of explanation
show_batch(raw_train_data)
Explanation: Each item in the dataset is a batch, represented as a tuple of (many examples, many labels). The data from the examples is organized in column-based tensors rather than row-based tensors, each with as many elements as the batch size (5 in this case).
It might help to see this for yourself.
End of explanation
CSV_COLUMNS = ['survived', 'sex', 'age', 'n_siblings_spouses', 'parch', 'fare', 'class', 'deck', 'embark_town', 'alone']
temp_dataset = get_dataset(train_file_path, column_names=CSV_COLUMNS)
show_batch(temp_dataset)
Explanation: As you can see, the columns in the CSV are named. The dataset constructor picks these names up automatically. If the file you are working with does not contain the column names in the first line, pass them as a list of strings to the column_names argument of the make_csv_dataset function.
End of explanation
SELECT_COLUMNS = ['survived', 'age', 'n_siblings_spouses', 'class', 'deck', 'alone']
temp_dataset = get_dataset(train_file_path, select_columns=SELECT_COLUMNS)
show_batch(temp_dataset)
Explanation: This example uses all of the available columns. If you need to omit some columns from the dataset, create a list of just the columns you plan to use and pass it into the (optional) select_columns argument of the constructor.
End of explanation
SELECT_COLUMNS = ['survived', 'age', 'n_siblings_spouses', 'parch', 'fare']
DEFAULTS = [0, 0.0, 0.0, 0.0, 0.0]
temp_dataset = get_dataset(train_file_path,
select_columns=SELECT_COLUMNS,
column_defaults = DEFAULTS)
show_batch(temp_dataset)
example_batch, labels_batch = next(iter(temp_dataset))
Explanation: Data preprocessing
A CSV file can contain a variety of data types. Typically you want to convert from those mixed types to a fixed-length vector before feeding the data into your model.
TensorFlow has a built-in system for describing common input conversions. For details, see tf.feature_column and this tutorial.
You can preprocess the data with any tool you like (e.g. nltk or sklearn) and just pass the processed output to TensorFlow.
The main advantage of doing the preprocessing inside your model is that when you export the model, it includes the preprocessing. That way you can pass the raw data straight to your model.
Continuous data
If your data is already in an appropriate numeric format, you can pack it into a vector before passing it off to the model.
End of explanation
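Editor's sketch (assumes scikit-learn is installed; not part of the original notebook): the point above about external tools can be made concrete by scaling the numeric columns with sklearn and handing the processed arrays back to tf.data.
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Preprocess outside TensorFlow, then wrap the result in a tf.data.Dataset.
_sk_df = pd.read_csv(train_file_path)
_sk_numeric = _sk_df[['age', 'n_siblings_spouses', 'parch', 'fare']].to_numpy(dtype='float32')
_sk_scaled = StandardScaler().fit_transform(_sk_numeric)   # zero mean, unit variance per column
_sk_ds = tf.data.Dataset.from_tensor_slices((_sk_scaled, _sk_df['survived'].to_numpy()))
print(next(iter(_sk_ds.take(1))))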
def pack(features, label):
return tf.stack(list(features.values()), axis=-1), label
Explanation: Here is a simple function that will pack together all the columns.
End of explanation
packed_dataset = temp_dataset.map(pack)
for features, labels in packed_dataset.take(1):
print(features.numpy())
print()
print(labels.numpy())
Explanation: Apply this function to each element of the dataset.
End of explanation
show_batch(raw_train_data)
example_batch, labels_batch = next(iter(temp_dataset))
Explanation: If you have mixed data types, you may want to separate out these simple-numeric fields. The tf.feature_column API can handle them, but this incurs some overhead and should be avoided unless really necessary. Switch back to the mixed dataset.
End of explanation
class PackNumericFeatures(object):
def __init__(self, names):
self.names = names
def __call__(self, features, labels):
numeric_features = [features.pop(name) for name in self.names]
numeric_features = [tf.cast(feat, tf.float32) for feat in numeric_features]
numeric_features = tf.stack(numeric_features, axis=-1)
features['numeric'] = numeric_features
return features, labels
NUMERIC_FEATURES = ['age','n_siblings_spouses','parch', 'fare']
packed_train_data = raw_train_data.map(
PackNumericFeatures(NUMERIC_FEATURES))
packed_test_data = raw_test_data.map(
PackNumericFeatures(NUMERIC_FEATURES))
show_batch(packed_train_data)
example_batch, labels_batch = next(iter(packed_train_data))
Explanation: So define a more general preprocessor that selects a list of numeric features and packs them into a single column.
End of explanation
import pandas as pd
desc = pd.read_csv(train_file_path)[NUMERIC_FEATURES].describe()
desc
MEAN = np.array(desc.T['mean'])
STD = np.array(desc.T['std'])
def normalize_numeric_data(data, mean, std):
# Center the data
return (data-mean)/std
Explanation: Data normalization
Continuous data should always be normalized.
End of explanation
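Editor's aside, not in the original notebook: newer TensorFlow releases can learn the per-feature mean and variance with a Keras preprocessing layer instead of a hand-written normalizer. The sketch below assumes tf.keras.layers.Normalization is available (TF 2.6+; earlier releases expose it under tf.keras.layers.experimental.preprocessing).
_numeric_np = pd.read_csv(train_file_path)[NUMERIC_FEATURES].to_numpy(dtype='float32')
_norm_layer = tf.keras.layers.Normalization(axis=-1)
_norm_layer.adapt(_numeric_np)            # learns mean/variance for each column
print(_norm_layer(_numeric_np[:3]))       # rows come out roughly zero-mean, unit-variance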
# See what you just created.
normalizer = functools.partial(normalize_numeric_data, mean=MEAN, std=STD)
numeric_column = tf.feature_column.numeric_column('numeric', normalizer_fn=normalizer, shape=[len(NUMERIC_FEATURES)])
numeric_columns = [numeric_column]
numeric_column
Explanation: Now create the numeric column. The tf.feature_column.numeric_column API accepts a normalizer_fn argument, which will be run on each batch.
Bind the MEAN and STD to the normalizer fn using functools.partial.
End of explanation
example_batch['numeric']
numeric_layer = tf.keras.layers.DenseFeatures(numeric_columns)
numeric_layer(example_batch).numpy()
Explanation: When you train the model, include this feature column to select and center this block of numeric data.
End of explanation
CATEGORIES = {
'sex': ['male', 'female'],
'class' : ['First', 'Second', 'Third'],
'deck' : ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
'embark_town' : ['Cherbourg', 'Southhampton', 'Queenstown'],
'alone' : ['y', 'n']
}
categorical_columns = []
for feature, vocab in CATEGORIES.items():
cat_col = tf.feature_column.categorical_column_with_vocabulary_list(
key=feature, vocabulary_list=vocab)
categorical_columns.append(tf.feature_column.indicator_column(cat_col))
# See what you just created.
categorical_columns
categorical_layer = tf.keras.layers.DenseFeatures(categorical_columns)
print(categorical_layer(example_batch).numpy()[0])
Explanation: The mean-based normalization used here requires knowing the mean of each column ahead of time.
Categorical data
Some of the columns in the CSV data are categorical columns. That is, the content should be one of a limited set of options.
Use the tf.feature_column API to create a collection with a tf.feature_column.indicator_column for each categorical column.
End of explanation
preprocessing_layer = tf.keras.layers.DenseFeatures(categorical_columns+numeric_columns)
print(preprocessing_layer(example_batch).numpy()[0])
Explanation: This will become part of the data-processing input later, when you build the model.
Combined preprocessing layer
Add the two feature column collections and pass them to tf.keras.layers.DenseFeatures to create an input layer that will extract and preprocess both input types.
End of explanation
model = tf.keras.Sequential([
preprocessing_layer,
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1),
])
model.compile(
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer='adam',
metrics=['accuracy'])
Explanation: Building the model
Build a tf.keras.Sequential starting with the preprocessing_layer.
End of explanation
train_data = packed_train_data.shuffle(500)
test_data = packed_test_data
model.fit(train_data, epochs=20)
Explanation: Train, evaluate, and predict
Now the model can be instantiated and trained.
End of explanation
test_loss, test_accuracy = model.evaluate(test_data)
print('\n\nTest Loss {}, Test Accuracy {}'.format(test_loss, test_accuracy))
Explanation: Once the model is trained, you can check its accuracy on the test_data set.
End of explanation
predictions = model.predict(test_data)
# Show some results
for prediction, survived in zip(predictions[:10], list(test_data)[0][1][:10]):
prediction = tf.sigmoid(prediction).numpy()
print("Predicted survival: {:.2%}".format(prediction[0]),
" | Actual outcome: ",
("SURVIVED" if bool(survived) else "DIED"))
Explanation: Use tf.keras.Model.predict to infer labels on a batch or a dataset of batches.
End of explanation |
15,505 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table>
<tr align=left><td><img align=left src="./images/CC-BY.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Marc Spiegelman</td>
</table>
Step1: Fun with Fourier Series
GOAL
Step2: Fourier Sine Series of $x(1-x)$
The previous series converges extremely slowly due to the discontinuities at the end-points and the fact that $f(x)=1$ does not share the same boundary conditions as the eigenfunctions $\phi_n(x)=\sin(n\pi x)$, which are zero at the boundaries. For $C^2$ functions that share the same boundary conditions, however, Fourier Series can converge quite quickly. Here we will calculate the Fourier Sine series of the parabola $f(x) = x(1-x)$ on the interval $[0,1]$ which satisfies these conditions.
The Fourier coefficients for this function are
$$
a_n = 2\int_0^1 (x - x^2) \sin(n\pi x)\, dx = \frac{8}{n^3\pi^3}
$$
for $n$ odd. These can be found relatively easily by successive integration by parts (Homework).
So the Fourier Sine series of $f$ is
$$
x(1-x) = \sum_{n-odd}^\infty \frac{8}{(n\pi)^3} \sin(n\pi x)
$$ | Python Code:
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: <table>
<tr align=left><td><img align=left src="./images/CC-BY.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Marc Spiegelman</td>
</table>
End of explanation
x = np.linspace(0,1,1000)
# small python function to define the partial sum for the truncated Fourier Series f_N
def f_N(x,N):
na = range(1,N+1,2)
f_N = np.zeros(x.shape)
for n in na:
f_N += 4./(n*np.pi)*np.sin(n*np.pi*x)
return f_N
# And make a figure showing f_N for increasing values of N
Na = [ 1, 3, 5, 11, 101]
plt.figure()
for N in Na:
plt.plot(x,f_N(x,N),label='N={}'.format(N))
plt.plot(x,np.ones(x.shape),'k',label='$f(x)=1$')
plt.xlabel('x')
plt.legend(loc='best')
plt.grid()
plt.show()
Explanation: Fun with Fourier Series
GOAL: visualize some basic behavior of Fourier Series
Fourier Sine Series of 1
Here we will calculate the Fourier Sine series on the interval $[0,1]$ as
$$
1 = \sum_{n-odd}^\infty \frac{4}{n\pi} \sin(n\pi x)
$$
End of explanation
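Editor's aside (not part of the original notebook): the snippet below puts a rough number on how slowly the partial sums above approach 1 away from the endpoints; the interior error shrinks only like ~1/N, and the Gibbs overshoot right next to the endpoints never disappears, it just moves closer to them.
# Quick convergence check for the Fourier sine series of f(x)=1, using the f_N defined above.
x_interior = np.linspace(0.1, 0.9, 500)
for N in [11, 101, 1001]:
    max_err = np.max(np.abs(f_N(x_interior, N) - 1.0))
    print('N = {:4d}, max interior error = {:.4f}'.format(N, max_err))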
x = np.linspace(0,1,1000)
# small python function to define the partial sum for the truncated Fourier Series f_N
def f_N(x,N):
na = range(1,N+1,2)
f_N = np.zeros(x.shape)
for n in na:
f_N += 8./(n*np.pi)**3*np.sin(n*np.pi*x)
return f_N
# And make a figure showing f_N for increasing values of N
Na = [ 1, 3, 5, 11, 101]
plt.figure()
for N in Na:
plt.plot(x,f_N(x,N),label='N={}'.format(N))
plt.plot(x,x*(1-x),'k',label='$f(x)=x(1-x)$')
plt.xlabel('x')
plt.legend(loc='best')
plt.grid()
plt.show()
Explanation: Fourier Sine Series of $x(1-x)$
The previous series converges extremely slowly due to the discontinuities at the end-points and the fact that $f(x)=1$ does not share the same boundary conditions as the eigenfunctions $\phi_n(x)=\sin(n\pi x)$, which are zero at the boundaries. For $C^2$ functions that share the same boundary conditions, however, Fourier Series can converge quite quickly. Here we will calculate the Fourier Sine series of the parabola $f(x) = x(1-x)$ on the interval $[0,1]$ which satisfies these conditions.
The Fourier coefficients for this function are
$$
a_n = 2\int_0^1 (x - x^2) \sin(n\pi x)\, dx = \frac{8}{n^3\pi^3}
$$
for $n$ odd. These can be found relatively easily by successive integration by parts (Homework).
So the Fourier Sine series of $f$ is
$$
x(1-x) = \sum_{n-odd}^\infty \frac{8}{(n\pi)^3} \sin(n\pi x)
$$
End of explanation |
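Editor's check (assumes sympy is installed; not part of the original notebook): the integration by parts left as homework can be verified symbolically.
import sympy as sp

# Verify a_n = 2 * integral_0^1 (x - x^2) sin(n*pi*x) dx = 8/(n*pi)^3 for odd n.
xs, ns = sp.symbols('x n', positive=True)
a_n = 2 * sp.integrate((xs - xs**2) * sp.sin(ns * sp.pi * xs), (xs, 0, 1))
for n_val in (1, 2, 3, 4, 5):
    print(n_val, sp.simplify(a_n.subs(ns, n_val)))   # 8/(n*pi)**3 for odd n, 0 for even n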
15,506 | Given the following text description, write Python code to implement the functionality described.
Description:
Check if a matrix contains a square submatrix with 0 as boundary element
hasSquareOfZeroes checks whether a square whose boundary is all 0's exists inside the region bounded by r1 (top row), c1 (left column), r2 (bottom row) and c2 (right column); isSquareOfZeroes checks whether the boundary of one given square consists entirely of 0's; the driver code runs the search on a sample matrix.
| Python Code:
def squareOfZeroes():
    global matrix, cache
    lastIdx = len(matrix) - 1
    return hasSquareOfZeroes(0, 0, lastIdx, lastIdx)

def hasSquareOfZeroes(r1, c1, r2, c2):
    # Recursively check the square (r1, c1)-(r2, c2) and every smaller square
    # obtained by trimming one row and one column, memoising results in cache.
    global matrix, cache
    if r1 >= r2 or c1 >= c2:
        return False
    key = str(r1) + '-' + str(c1) + '-' + str(r2) + '-' + str(c2)
    if key in cache:
        return cache[key]
    cache[key] = (isSquareOfZeroes(r1, c1, r2, c2) or
                  hasSquareOfZeroes(r1 + 1, c1 + 1, r2 - 1, c2 - 1))
    cache[key] = (cache[key] or
                  hasSquareOfZeroes(r1, c1 + 1, r2 - 1, c2) or
                  hasSquareOfZeroes(r1 + 1, c1, r2, c2 - 1))
    cache[key] = (cache[key] or
                  hasSquareOfZeroes(r1 + 1, c1 + 1, r2, c2) or
                  hasSquareOfZeroes(r1, c1, r2 - 1, c2 - 1))
    return cache[key]

def isSquareOfZeroes(r1, c1, r2, c2):
    # True if every element on the boundary of the square (r1, c1)-(r2, c2) is 0.
    global matrix
    for row in range(r1, r2 + 1):
        if matrix[row][c1] != 0 or matrix[row][c2] != 0:
            return False
    for col in range(c1, c2 + 1):
        if matrix[r1][col] != 0 or matrix[r2][col] != 0:
            return False
    return True

if __name__ == '__main__':
    cache = {}
    matrix = [[1, 1, 1, 0, 1, 0],
              [0, 0, 0, 0, 0, 1],
              [0, 1, 1, 1, 0, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 1, 1, 1, 0, 1],
              [0, 0, 0, 0, 0, 1]]
    ans = squareOfZeroes()
    if ans:
        print("True")
    else:
        print("False")
|
15,507 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Source localization with MNE, dSPM, sLORETA, and eLORETA
The aim of this tutorial is to teach you how to compute and apply a linear
minimum-norm inverse method on evoked/raw/epochs data.
Step1: Process MEG data
Step2: Compute regularized noise covariance
For more details see tut-compute-covariance.
Step3: Compute the evoked response
Let's just use the MEG channels for simplicity.
Step4: It's also a good idea to look at whitened data
Step5: Inverse modeling
Step6: Next, we make an MEG inverse operator.
Step7: Compute inverse solution
We can use this to compute the inverse solution and obtain source time
courses
Step8: Visualization
We can look at different dipole activations
Step9: Examine the original data and the residual after fitting
Step10: Here we use peak getter to move visualization to the time point of the peak
and draw a marker at the maximum peak vertex. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
Explanation: Source localization with MNE, dSPM, sLORETA, and eLORETA
The aim of this tutorial is to teach you how to compute and apply a linear
minimum-norm inverse method on evoked/raw/epochs data.
End of explanation
data_path = sample.data_path()
raw_fname = data_path / 'MEG' / 'sample' / 'sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname) # already has an average reference
events = mne.find_events(raw, stim_channel='STI 014')
event_id = dict(aud_l=1) # event trigger and conditions
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
baseline = (None, 0) # means from the first instant to t = 0
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=('meg', 'eog'), baseline=baseline, reject=reject)
Explanation: Process MEG data
End of explanation
noise_cov = mne.compute_covariance(
epochs, tmax=0., method=['shrunk', 'empirical'], rank=None, verbose=True)
fig_cov, fig_spectra = mne.viz.plot_cov(noise_cov, raw.info)
Explanation: Compute regularized noise covariance
For more details see tut-compute-covariance.
End of explanation
evoked = epochs.average().pick('meg')
evoked.plot(time_unit='s')
evoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag',
time_unit='s')
Explanation: Compute the evoked response
Let's just use the MEG channels for simplicity.
End of explanation
evoked.plot_white(noise_cov, time_unit='s')
del epochs, raw # to save memory
Explanation: It's also a good idea to look at whitened data:
End of explanation
fname_fwd = data_path / 'MEG' / 'sample' / 'sample_audvis-meg-oct-6-fwd.fif'
fwd = mne.read_forward_solution(fname_fwd)
Explanation: Inverse modeling: MNE/dSPM on evoked and raw data
Here we first read the forward solution. You will likely need to compute
one for your own data -- see tut-forward for information on how
to do it.
End of explanation
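Editor's sketch, not executed here: the tutorial loads a precomputed forward solution, but computing one yourself combines a head<->MRI transform, a source space and a BEM model. The sample-dataset file names below are my assumptions, so the lines are left commented out.
# trans = data_path / 'MEG' / 'sample' / 'sample_audvis_raw-trans.fif'
# src = data_path / 'subjects' / 'sample' / 'bem' / 'sample-oct-6-src.fif'
# bem = data_path / 'subjects' / 'sample' / 'bem' / 'sample-5120-bem-sol.fif'
# fwd = mne.make_forward_solution(evoked.info, trans=trans, src=src, bem=bem,
#                                 meg=True, eeg=False)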
inverse_operator = make_inverse_operator(
evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
del fwd
# You can write it to disk with::
#
# >>> from mne.minimum_norm import write_inverse_operator
# >>> write_inverse_operator('sample_audvis-meg-oct-6-inv.fif',
# inverse_operator)
Explanation: Next, we make an MEG inverse operator.
End of explanation
method = "dSPM"
snr = 3.
lambda2 = 1. / snr ** 2
stc, residual = apply_inverse(evoked, inverse_operator, lambda2,
method=method, pick_ori=None,
return_residual=True, verbose=True)
Explanation: Compute inverse solution
We can use this to compute the inverse solution and obtain source time
courses:
End of explanation
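Editor's aside: the title also names MNE, sLORETA and eLORETA, and the same inverse operator can be reused for them by changing only the method argument; a sketch is left commented so the runtime of this example is unchanged.
# for other_method in ("MNE", "sLORETA", "eLORETA"):
#     stc_other = apply_inverse(evoked, inverse_operator, lambda2,
#                               method=other_method, verbose=True)
#     print(other_method, float(stc_other.data.max()))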
fig, ax = plt.subplots()
ax.plot(1e3 * stc.times, stc.data[::100, :].T)
ax.set(xlabel='time (ms)', ylabel='%s value' % method)
Explanation: Visualization
We can look at different dipole activations:
End of explanation
fig, axes = plt.subplots(2, 1)
evoked.plot(axes=axes)
for ax in axes:
for text in list(ax.texts):
text.remove()
for line in ax.lines:
line.set_color('#98df81')
residual.plot(axes=axes)
Explanation: Examine the original data and the residual after fitting:
End of explanation
vertno_max, time_max = stc.get_peak(hemi='rh')
subjects_dir = data_path / 'subjects'
surfer_kwargs = dict(
hemi='rh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
initial_time=time_max, time_unit='s', size=(800, 800), smoothing_steps=10)
brain = stc.plot(**surfer_kwargs)
brain.add_foci(vertno_max, coords_as_verts=True, hemi='rh', color='blue',
scale_factor=0.6, alpha=0.5)
brain.add_text(0.1, 0.9, 'dSPM (plus location of maximal activation)', 'title',
font_size=14)
# The documentation website's movie is generated with:
# brain.save_movie(..., tmin=0.05, tmax=0.15, interpolation='linear',
# time_dilation=20, framerate=10, time_viewer=True)
Explanation: Here we use peak getter to move visualization to the time point of the peak
and draw a marker at the maximum peak vertex.
End of explanation |
15,508 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-1', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: SANDBOX-1
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
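A hedged illustration only - the name and email below are placeholders, not real document authors, following the set_author pattern shown above:
# DOC.set_author("Jane Doe", "jane.doe@example.org")   # placeholder values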
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variables in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specificed for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
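If the constant-salinity option applies, this FLOAT property would be completed with a single numeric value, for example (an arbitrary illustrative number, not taken from this document):
# Hypothetical example only -- replace with the model's actual constant salinity in PSU
DOC.set_value(4.0)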
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
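Because the cardinality of the property above is 1.N, more than one choice can apply; a hypothetical completion (assuming repeated set_value calls record multiple values, as suggested by the plural PROPERTY VALUE(S) comment) might look like:
# Hypothetical example only -- list every impact that applies to your model
DOC.set_value("Albedo")
DOC.set_value("Freshwater")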
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
15,509 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intorduction to Pandas - Pan(el)-da(ta)-s
Laszlo Tetenyi
Step1: Survey of Consumer Finances (SCF) 2013
Load and explore data from the SCF website - note that this data cannot be loaded by 2007- Excel due to its size
Step2: Lets have a quick look at the table
Step3: We can select particular rows of the table using standard Python array slicing notation
Step4: table is a DataFrame - a multi-dimensional equivalent of Series (another Pandas object), as it has multiple columns, so you can think of it as a matrix, where columns can be accessed by their 'names'. In fact, many operations can be performed on them (coming from numpy)
Step5: But they know more than that - they have several built-in statistics
Step6: As an example try to access normalized income and net-worth variables.
Step7: There are way too many variables in there - try to search for the proper column names
Step8: Get the "mean" (non -weighted) income and minimal net worth
Step9: That is, there is one person with a net worth of -227 million \$ ! Suppose we do not want our analysis to depend on these extremely low values and we would like to trim our dataframe. As a first step, create a new dataframe that only contains the variables of interest (and the id of each observation)
Step10: Rename the columns
Step11: Try to get a general picture of what would be a "good" trimming value of net worth by plotting an estimated kernel density
Step12: Let's see how many observations we eliminate if we exclude everyone below -1 million \$
Step13: But how many households are in this category?
Step14: Pivoting
Note that the data has a nice panel structure which we so far did not exploit - each household has multiple observations. Lets use pivoting so that our dataframe reflects that. First get the index of our dataframe
Step15: We simply have the index of each observation. As a first step, replace these indeces by the household identifier and group income levels in each observation
Step16: If instead we are interested in both income and net worth grouped by observations then
Step17: Use stacking to transform the data into a panel structure we are familiar with (and unstacking to go back to cross-section)
Step18: Using the panel structure it is even easier to see the number of households that had fewer
than -1 million \$ net worth
Step19: Pandas even have their own data-structure for panel data - for it, we need to create dataframes as inputs
Step20: but this is not as useful - unfortunately the panel part of the package has been neglected. Very few functions are available. Now as a last exercise, save our dataFrame to a csv file | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import requests, zipfile, io # So that we can download and unzip files
Explanation: Intorduction to Pandas - Pan(el)-da(ta)-s
Laszlo Tetenyi
End of explanation
r = requests.get('http://www.federalreserve.gov/econresdata/scf/files/scfp2013excel.zip')
z = zipfile.ZipFile(io.BytesIO(r.content))
f = z.open('SCFP2013.xlsx')
table = pd.read_excel(f, sheetname='SCFP2013')
Explanation: Survey of Consumer Finances (SCF) 2013
Load and explore data from the SCF website - note that this data cannot be loaded by 2007- Excel due to its size
End of explanation
table.head()
Explanation: Lets have a quick look at the table
End of explanation
table[0:5]
Explanation: We can select particular rows of the table using standard Python array slicing notation
End of explanation
table.max().max()
Explanation: table is a DataFrame - a multi-dimensional equivalent of Series (another Pandas object), as it has multiple columns, so you can think of it as a matrix, where columns can be accessed by their 'names'. In fact, many operations can be performed on them (coming from numpy):
End of explanation
table.describe()
Explanation: But they know more than that - they have several built-in statistics :
End of explanation
table.dtypes[0:5]
table.dtypes.shape
Explanation: As an example try to access normalized income and net-worth variables.
End of explanation
[col for col in table.columns if 'NETWOR' in col]
[col for col in table.columns if 'INC' in col]
income = table['NORMINC']
net_worth = table['NETWORTH']
Explanation: There are way too many variables in there - try to search for the proper column names
End of explanation
income.mean()
net_worth.min()
Explanation: Get the "mean" (non -weighted) income and minimal net worth
End of explanation
keep = ['YY1','Y1', 'NORMINC', 'NETWORTH']
data = table[keep]
data.head()
Explanation: That is, there is one person with a net worth of -227 million \$ ! Suppose we do not want our analysis to depend on these extremely low values and we would like to trim our dataframe. As a first step, create a new dataframe that only contains the variables of interest (and the id of each observation)
End of explanation
data.columns ='Household', 'Observation' , 'Income', 'Net Worth'
data.head()
Explanation: Rename the columns:
End of explanation
data['Net Worth'].plot(kind='density')
plt.show()
Explanation: Try to get a general picture of what would be a "good" trimming value of net worth by plotting an estimated kernel density
End of explanation
data_trimmed = data[data['Net Worth'] > -1000000]
data.shape[0] - data_trimmed.shape[0]
Explanation: Let's see how many observations we eliminate if we exclude everyone below -1 million \$
End of explanation
data[data['Net Worth'] < -1000000]
data_trimmed['Net Worth'].plot(kind='density')
plt.show()
Explanation: But how many households are in this category?
End of explanation
data.index
Explanation: Pivoting
Note that the data has a nice panel structure which we so far did not exploit - each household has multiple observations. Lets use pivoting so that our dataframe reflects that. First get the index of our dataframe:
End of explanation
new_observations = data.loc[:,'Observation'] - 10 * data.loc[:,'Household']
data.loc[:,'Observation'] = new_observations
data[0:10]
# Normally, you should not do this - instead use assign
# Reload the data
data = table[keep]
data.columns ='Household', 'Observation' , 'Income', 'Net Worth'
data = data.assign(Observations = (data['Observation'] - 10.0 * data['Household']).astype(int))
del data['Observation'] # delete the old column
data = data.rename(columns = {'Observations':'Observation'}) # rename the column
data = data[['Household', 'Observation' , 'Income', 'Net Worth']] # reinsert the column
data.head()
p = data.pivot(index = 'Household' , columns = 'Observation' , values = 'Income' )
p.head()
Explanation: We simply have the index of each observation. As a first step, replace these indeces by the household identifier and group income levels in each observation:
End of explanation
p = data.pivot(index='Household', columns='Observation')
p.head()
Explanation: If instead we are interested in both income and net worth grouped by observations then:
End of explanation
panel_data = p.stack()
panel_data.head()
Explanation: Use stacking to transform the data into a panel structure we are familiar with (and unstacking to go back to cross-section):
End of explanation
panel_data[panel_data['Net Worth'] < -1000000]
Explanation: Using the panel structure it is even easier to see the number of households that had fewer
than -1 million \$ net worth
End of explanation
p = data.pivot(index='Observation', columns='Household')
pdata = {'Observation 1' : pd.DataFrame(p.ix[1,:]),
'Observation 2' : pd.DataFrame(p.ix[2,:]),
'Observation 3' : pd.DataFrame(p.ix[3,:]),
'Observation 4' : pd.DataFrame(p.ix[4,:]),
'Observation 5' : pd.DataFrame(p.ix[5,:])}
pdata = pd.Panel(pdata)
pdata
Explanation: Pandas even have their own data-structure for panel data - for it, we need to create dataframes as inputs
End of explanation
data.to_excel('SCF_2013_inc_netw.xlsx')
Explanation: but this is not as useful - unfortunately the panel part of the package has been neglected. Very few functions are available. Now as a last exercise, save our dataFrame to a csv file
End of explanation |
15,510 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finding the closed-form solution for transition probabilities
Here we describe the process of finding all turning angle sequences and their accompanying volumes using a 2D example.
Finite directions represented in an ODF
Currently MITTENS requires that ODFs have values estimated for a set of pre-defined directions $\Theta$. Here we will use the "odf4" set of directions from DSI Studio. Here we select only directions that are in the axial plane, resulting in a set of directions in 2D
Step1: In practice there will need to be diffusion magnitudes for each direction, $\bar{p}_u(\theta)$. Some packages such as MRTRIX do not evaluate ODFs on a fixed set of directions and instead store SH coefficients. These can be evaluated on a set of directions and used in MITTENS, but for this example we don't actually need these values. Instead, we leave the magnitudes out of this exercise and keep $\bar{p}_u(\theta)$ as a variable that will be filled in later with empirical values.
Turning angle sequences
In order to calculate analytic transition probability from voxel $u$ to voxel $v$ we need to find the finite set of turning angle sequences that can make a final hop into $v$ from $u$ while obeying the geometric constraints. Then for each turning angle sequence $\sigma \in \mathit{\Sigma}$ we have to find the accompanying volume $\mathcal{V}(\sigma,v)$ where $\sigma$ could start such that it ends in $v$. Here we plot the 2D voxel grid with voxel $u$ in the center and potential voxel $v$'s numbered around it
Step2: Suppose we want to find the possible ways to get from center voxel $u$ to the right anterior neighbor $v$ (ie neighbor 7). We first need to define the geometric constraints.
We make the following assumptions
Voxel edges are all fixed length. Here we use 1
Step size is a fixed length. Here we use 0.5
The maximum turning angle is 35 degrees
Now we can begin finding the turning angle sequences that can get from $u$ to $v$. This problem is solved recursively. Each step in the recursion requires a potential source rectangle, a desired target rectangle and step vector. The step vector is of our chosen fixed length in a direction available in $\Theta$. At each recursion it is determed whether a rectangle exists in the source rectangle where a step can be taken along the step vector such that it ends in the target rectangle. Below are some plotting functions and the function that will be called recursively (get_area)
Step3: One-hop turning angle sequences
We start with finding all 1-hop turning angle sequences from $u$ to $v$. All directions in $\Theta$ need to be tested. We will start with the sixth direction in $\Theta$, $\Theta[5]$ times the step size of 0.5 to create our first step vector. The first call to get_area will have the boundaries of center voxel $u$ as the source rectangle. The boundaries of the target voxel $v$ are target rectangle.
Intuitively this step is solving the problem "If I seed somewhere in voxel $u$ and know I'm going to take direction $\Theta[5]$, where can I land in voxel $u$ so that I'm going to end in voxel $v$?"
Now we can call get_area after specifying the step size, turning angle max, and an initial direction
Step4: We have generated a turning angle sequence $\sigma_1=(\Theta[5])$ and calculated the volume from which this turning angle sequence could begin such that it ends in $v$ (it's stored in the area1 variable, plotted as a red rectangle).
This is the first equivalence class that we've found. In probabilistic simulations, the probability of landing exactly in the red square and randomly selecting $\Theta[5]$ is $\bar{p}_u(\Theta[5])\mathcal{V}(\sigma_1,v)$. Or, more concretely, it would be area1*odf_magnitudes[5] if the ODF magnitudes were stored in an array called odf_magnitudes.
Recall that the analytic transition probability from $u$ to $v$ is
$$\text{P}(u \rightarrow v) = \sum_{\sigma \in \mathit{\Sigma}}\text{P}_u(\sigma) \mathcal{V}( \sigma, v ) $$
You can directly calculate the probability of this one-hop turning angle sequence by
python
prob_sigma1 = area1 * odf_magnitudes[5]
Two-hop turning angle sequences
Now that we have a one-hop turning angle sequence and its corresponding $\mathcal{V}$, we can loop over other directions in $\Theta$ to see which could be taken into the red area. For example, suppose the loop has made it to $\Theta[1]$. We again call get_area, but this time the target rectangle is the red rectangle from the previous step.
Here we plot the two directions in a new turning angle sequence $\sigma_2=(\Theta[1],\Theta[5])$ and $\mathcal{V}(\sigma_2, v)$ is drawn in blue
Step5: Thinking back to probabilistic simulations, we need to calculate the probability of landing in the blue rectangle and randomly selecting $\Theta[1]$ and then randomly selecting $\Theta[5]$ from directions compatible with direction $\Theta[1]$.
The probability of landing in the blue rectangle is $\mathcal{V}(\sigma_2,v)$, which is stored in the area2 variable. There is no constraint on the first step, which is randomly selected from all the directions in $\Theta$. The probability is then simply $\bar{p}_u(\Theta[1])$, or odf_magnitudes[1]. The next step is not as simple because the maximum turning angle parameter comes into play. The next step isn't randomly selected from all the directions in $\Theta$, but only those who create an angle less than the maximum turning angle with the previous step. Suppose only directions 0-9 are compatible with $\Theta[5]$. The conditional probability $\text{P}(\Theta[5] \mid \Theta[1])$, or odf_magnitudes[5]/odf_magnitudes[
Step6: Here the area $\mathcal{V}(\sigma_3,v)$ is drawn as a green rectangle and stored in variable area3. Again thinking back to probabilistic simulations, we need to calculate the probability of landing in the green rectangle and randomly selecting $\Theta[7]$ and then randomly selecting $\Theta[1]$ from directions compatible with $\Theta[7]$ and then randomly selecting $\Theta[5]$ from directions compatible with $\Theta[1]$.
The probability of landing in the green rectangle is $\mathcal{V}(\sigma_3,v)$, which is stored in the area3 variable. There is no constraint on the first step, which is randomly selected from all the directions in $\Theta$, so the probability is $\bar{p}_u(\Theta[7])$, or odf_magnitudes[7]. If directions 3-12 are compatible with $\Theta[7]$, the conditional probability of the next step is $\text{P}(\Theta[1] \mid \Theta[7])$, or odf_magnitudes[1]/odf_magnitudes[3
Step7: Right neighbor
Step8: Right Anterior Neighbor
Step9: Anterior Neighbor
Step10: Left Anterior Neighbor
Step11: Left Neighbor
Step12: Left Posterior Neighbor
Step13: Posterior Neighbor
Step14: Right Posterior Neighbor | Python Code:
%pylab inline
from mittens.utils import *
odf_vertices, odf_faces = get_dsi_studio_ODF_geometry("odf4")
# select only the vertices in the x,y plane
ok_vertices = np.abs(odf_vertices[:,2]) < 0.01
odf_vertices = odf_vertices[ok_vertices]
def draw_vertex(ax, vertex, color="r"):
ax.plot((0,vertex[0]), (0,vertex[1]), color=color,linewidth=3,alpha=0.6)
def draw_vertices(ax,selected_vertex=None,selected_vertex_color="r"):
# Draws ODF Vertices on an axis object
center = np.zeros(ok_vertices.sum())
ax.quiver(center, center, odf_vertices[:,0], odf_vertices[:,1],
scale=1,angles="xy", scale_units="xy")
ax.set_xlim(-1,1)
ax.set_ylim(-1,1)
ax.set_xticks([])
ax.set_yticks([])
if selected_vertex is not None:
draw_vertex(ax,selected_vertex,color=selected_vertex_color)
# Plot them to make sure
fig,ax = plt.subplots(figsize=(4,4))
ax.set_title("Axial ODF Directions")
draw_vertices(ax);
Explanation: Finding the closed-form solution for transition probabilities
Here we describe the process of finding all turning angle sequences and their accompanying volumes using a 2D example.
Finite directions represented in an ODF
Currently MITTENS requires that ODFs have values estimated for a set of pre-defined directions $\Theta$. Here we will use the "odf4" set of directions from DSI Studio. Here we select only directions that are in the axial plane, resulting in a set of directions in 2D:
End of explanation
# Plot the voxel grid
def draw_grid(ax):
for border in (0,1):
ax.axhline(border,color="k")
ax.axvline(border,color="k")
ax.set_xlim(-1,2)
ax.set_ylim(-1,2)
ax.set_xticks([])
ax.set_yticks([])
nfig,nax = plt.subplots(figsize=(4,4))
draw_grid(nax)
nax.set_title("Example Voxel Grid")
neighbors_coords = []
for xshift in (-1,0,1):
for yshift in (-1,0,1):
if not xshift == yshift == 0:
neighbors_coords.append((xshift,yshift))
for neighbornum,neighbor in enumerate(neighbors_coords):
nax.text(neighbor[0] + 0.5, neighbor[1] + 0.5, str(neighbornum))
;
Explanation: In practice there will need to be diffusion magnitudes for each direction, $\bar{p}_u(\theta)$. Some packages such as MRTRIX do not evaluate ODFs on a fixed set of directions and instead store SH coefficients. These can be evaluated on a set of directions and used in MITTENS, but for this example we don't actually need these values. Instead, we leave the magnitudes out of this exercise and keep $\bar{p}_u(\theta)$ as a variable that will be filled in later with empirical values.
Turning angle sequences
In order to calculate analytic transition probability from voxel $u$ to voxel $v$ we need to find the finite set of turning angle sequences that can make a final hop into $v$ from $u$ while obeying the geometric constraints. Then for each turning angle sequence $\sigma \in \mathit{\Sigma}$ we have to find the accompanying volume $\mathcal{V}(\sigma,v)$ where $\sigma$ could start such that it ends in $v$. Here we plot the 2D voxel grid with voxel $u$ in the center and potential voxel $v$'s numbered around it:
End of explanation
def overlap(min1, max1, min2, max2):
# Compares bounding coordinates of two rectangles
return max(0, min(max1, max2) - max(min1, min2)), max(min1,min2), min(max1,max2)
def get_area(source_low_left, source_top_right, target_low_left,
target_top_right, direction):
'''
2D computation of the area in source rectangle from which a step of size STEPSIZE in
direction will land in the target rectangle
Parameters:
----------
source_low_left: tuple of lower, left coordinates of source rectangle
source_top_right: tuple of upper, right coordinates of source rectangle
target_low_left: tuple of lower, left coordinates of target rectangle
target_top_right: tuple of upper, right coordinates of target rectangle
direction:tuple specifying the direction considered
Returns: the area, (lower, left)- coordinates, (upper, right)-coordinates of the
area in the source rectangle from which a step of size STEPSIZE in the given direction
will land in the target rectangle
'''
x_min = target_low_left[0] - STEPSIZE*direction[0]
x_max = target_top_right[0] - STEPSIZE*direction[0]
x_delta,x_start,x_end = overlap(source_low_left[0],source_top_right[0],x_min,x_max)
y_min = target_low_left[1] - STEPSIZE*direction[1]
y_max = target_top_right[1] - STEPSIZE*direction[1]
y_delta,y_start,y_end = overlap(source_low_left[1],source_top_right[1],y_min,y_max)
return x_delta*y_delta, [x_start, y_start], [x_end,y_end]
# Some functions for plotting
from matplotlib import patches
def draw_square(lower_left, upper_right, ax, color, fill=True):
ax.add_patch(
patches.Rectangle(lower_left, upper_right[0]-lower_left[0],
upper_right[1] - lower_left[1], alpha=1, color=color, fill=fill))
Explanation: Suppose we want to find the possible ways to get from center voxel $u$ to the right anterior neighbor $v$ (ie neighbor 7). We first need to define the geometric constraints.
We make the following assumptions
Voxel edges are all fixed length. Here we use 1
Step size is a fixed length. Here we use 0.5
The maximum turning angle is 35 degrees
Now we can begin finding the turning angle sequences that can get from $u$ to $v$. This problem is solved recursively. Each step in the recursion requires a potential source rectangle, a desired target rectangle and a step vector. The step vector is of our chosen fixed length in a direction available in $\Theta$. At each recursion it is determined whether a rectangle exists in the source rectangle where a step can be taken along the step vector such that it ends in the target rectangle. Below are some plotting functions and the function that will be called recursively (get_area):
End of explanation
STEPSIZE=0.5 # choose the fixed step size
theta1 = odf_vertices[5] # Choose the 6th direction in Theta as an example
angle_max = 35 # in degrees
# For plotting
fig,(dax,vax) = plt.subplots(ncols=2,subplot_kw = {"aspect":"equal"})
draw_vertices(dax,theta1)
draw_grid(vax)
# Specift rectangle boundaries
v11 = (0,0) # Lower left of (0,0)
v12 = (1,1) # Upper right of (0,0)
v21 = (1,1) # Lower left of (1,1)
v22 = (2,2) # Upper right of (1,1)
area1,sq_bot1,sq_top1 = get_area(v11,v12,v21,v22,theta1)
draw_square(sq_bot1, sq_top1, vax, "r")
Explanation: One-hop turning angle sequences
We start with finding all 1-hop turning angle sequences from $u$ to $v$. All directions in $\Theta$ need to be tested. We will start with the sixth direction in $\Theta$, $\Theta[5]$ times the step size of 0.5 to create our first step vector. The first call to get_area will have the boundaries of center voxel $u$ as the source rectangle. The boundaries of the target voxel $v$ are target rectangle.
Intuitively this step is solving the problem "If I seed somewhere in voxel $u$ and know I'm going to take direction $\Theta[5]$, where can I land in voxel $u$ so that I'm going to end in voxel $v$?"
Now we can call get_area after specifying the step size, turning angle max, and an initial direction
End of explanation
# try a theta2
theta2 = odf_vertices[1]
fig,(dax1,vax1) = plt.subplots(ncols=2,subplot_kw = {"aspect":"equal"})
draw_vertices(dax1, theta1,selected_vertex_color="r")
draw_vertex(dax1,theta2,color="b")
draw_grid(vax1)
draw_square(sq_bot1,sq_top1, vax1, "r")
area2,sq_bot2,sq_top2 = get_area(v11,v12,sq_bot1,sq_top1,theta2)
draw_square(sq_bot2,sq_top2, vax1, "b")
Explanation: We have generated a turning angle sequence $\sigma_1=(\Theta[5])$ and calculated the volume from which this turning angle sequence could begin such that it ends in $v$ (it's stored in the area1 variable, plotted as a red rectangle).
This is the first equivalence class that we've found. In probabilistic simulations, the probability of landing exactly in the red square and randomly selecting $\Theta[5]$ is $\bar{p}_u(\Theta[5])\mathcal{V}(\sigma_1,v)$. Or, more concretely, it would be area1*odf_magnitudes[5] if the ODF magnitudes were stored in an array called odf_magnitudes.
Recall that the analytic transition probability from $u$ to $v$ is
$$\text{P}(u \rightarrow v) = \sum_{\sigma \in \mathit{\Sigma}}\text{P}_u(\sigma) \mathcal{V}( \sigma, v ) $$
You can directly calculate the probability of this one-hop turning angle sequence by
python
prob_sigma1 = area1 * odf_magnitudes[5]
Two-hop turning angle sequences
Now that we have a one-hop turning angle sequence and its corresponding $\mathcal{V}$, we can loop over other directions in $\Theta$ to see which could be taken into the red area. For example, suppose the loop has made it to $\Theta[1]$. We again call get_area, but this time the target rectangle is the red rectangle from the previous step.
Here we plot the two directions in a new turning angle sequence $\sigma_2=(\Theta[1],\Theta[5])$ and $\mathcal{V}(\sigma_2, v)$ is drawn in blue:
End of explanation
# try a theta3
theta3 = odf_vertices[7]
fig,(dax2,vax2) = plt.subplots(ncols=2,subplot_kw = {"aspect":"equal"})
draw_vertices(dax2, theta1,selected_vertex_color="r")
draw_vertex(dax2,theta2,color="b")
draw_vertex(dax2,theta3,color="g")
draw_grid(vax2)
draw_square(sq_bot1, sq_top1, vax2, "r")
draw_square(sq_bot2, sq_top2, vax2, "b")
area3, sq_bot3,sq_top3 = get_area(v11,v12,sq_bot2,sq_top2,theta3)
draw_square(sq_bot3, sq_top3, vax2, "g")
Explanation: Thinking back to probabilistic simulations, we need to calculate the probability of landing in the blue rectangle and randomly selecting $\Theta[1]$ and then randomly selecting $\Theta[5]$ from directions compatible with direction $\Theta[1]$.
The probability of landing in the blue rectangle is $\mathcal{V}(\sigma_2,v)$, which is stored in the area2 variable. There is no constraint on the first step, which is randomly selected from all the directions in $\Theta$. The probability is then simply $\bar{p}_u(\Theta[1])$, or odf_magnitudes[1]. The next step is not as simple because the maximum turning angle parameter comes into play. The next step isn't randomly selected from all the directions in $\Theta$, but only those who create an angle less than the maximum turning angle with the previous step. Suppose only directions 0-9 are compatible with $\Theta[5]$. The conditional probability $\text{P}(\Theta[5] \mid \Theta[1])$, or odf_magnitudes[5]/odf_magnitudes[:10].sum(). You can directly calculate the probability of this two-hop turning angle sequence by
python
prob_sigma2 = area2 * odf_magnitudes[1] * odf_magnitudes[5]/odf_magnitudes[:10].sum()
Three-hop turning angle sequences
The process again proceeds to loop over all directions in $\Theta$ to see which could end in the blue rectangle while still starting in the center voxel $u$ (while forming an allowable angle with the previous step). Suppose the loop has made it to direction $\Theta[7]$. The call to get_area will now use the blue rectangle as the target. This three-hop turning angle sequence is $\sigma_3=\left(\Theta[7],\Theta[1],\Theta[5]\right)$
End of explanation
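As a concrete but hypothetical illustration of how this two-hop term could be evaluated, assuming a placeholder odf_magnitudes array standing in for real ODF values and indices 0-9 standing in for the directions compatible with the first step:
# Sketch only: odf_magnitudes is a made-up placeholder, not data from this notebook
odf_magnitudes = np.random.rand(len(odf_vertices))
odf_magnitudes /= odf_magnitudes.sum()                     # normalize like an ODF
compatible_with_first = np.arange(10)                      # assumed set of directions compatible with the first step
prob_sigma2 = area2 * odf_magnitudes[1] * odf_magnitudes[5] / odf_magnitudes[compatible_with_first].sum()
print(prob_sigma2)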
from mittens.utils import angle_between, pairwise_distances
import numpy as np
angle_max = 35
# Pre-calculate compatible angles
compatible_angles = pairwise_distances(odf_vertices,metric=angle_between) < angle_max
compatible_vertex_indices = np.array([np.flatnonzero(r) for r in compatible_angles])
def draw_square(lower_left, upper_right, ax, color):
ax.add_patch(
patches.Rectangle(lower_left, upper_right[0]-lower_left[0],
upper_right[1] - lower_left[1], alpha=0.1, fc=color,
fill=True))
ax.add_patch(
patches.Rectangle(lower_left, upper_right[0]-lower_left[0],
upper_right[1] - lower_left[1], alpha=1,lw=2,ec=color,
fill=False))
def solve_for_neighbor(neighbor_min, neighbor_max):
center_min, center_max = [0,0], [1,1]
fig, axc = plt.subplots(subplot_kw = {"aspect":"equal"}, figsize=(6,6))
# Which vertices exit into the target?
exits = np.array([
get_area(center_min, center_max, neighbor_min, neighbor_max, v)[0] > 0 \
for v in odf_vertices ])
for t1_num in np.flatnonzero(exits):
theta1 = odf_vertices[t1_num]
area1, sq_bot1, sq_top1 = get_area(center_min,center_max,
neighbor_min, neighbor_max, theta1)
draw_square(sq_bot1, sq_top1, axc, "r")
possible_theta2s = compatible_vertex_indices[t1_num]
for t2_num in possible_theta2s:
theta2 = odf_vertices[t2_num]
area2, sq_bot2, sq_top2 = get_area(center_min,center_max,
sq_bot1,sq_top1, theta2)
if area2 == 0:
continue
draw_square(sq_bot2, sq_top2, axc, "b")
possible_theta3s = compatible_vertex_indices[t2_num]
for t3_num in possible_theta3s:
theta3 = odf_vertices[t3_num]
area3, sq_bot3, sq_top3 = get_area(center_min,center_max,
sq_bot2,sq_top2, theta3)
if area3 == 0:
continue
draw_square(sq_bot3, sq_top3, axc, "g")
draw_grid(axc)
axc.set_xlim(-0.1,1.1)
axc.set_ylim(-0.1,1.1)
Explanation: Here the area $\mathcal{V}(\sigma_3,v)$ is drawn as a green rectangle and stored in variable area3. Again thinking back to probabilistic simulations, we need to calculate the probability of landing in the green rectangle and randomly selecting $\Theta[7]$ and then randomly selecting $\Theta[1]$ from directions compatible with $\Theta[7]$ and then randomly selecting $\Theta[5]$ from directions compatible with $\Theta[1]$.
The probability of landing in the green rectangle is $\mathcal{V}(\sigma_3,v)$, which is stored in the area3 variable. There is no constraint on the first step, which is randomly selected from all the directions in $\Theta$, so the probability is $\bar{p}_u(\Theta[7])$, or odf_magnitudes[7]. If directions 3-12 are compatible with $\Theta[7]$, the conditional probability of the next step is $\text{P}(\Theta[1] \mid \Theta[7])$, or odf_magnitudes[1]/odf_magnitudes[3:13].sum(). Finally the probability of the final step is $\text{P}(\Theta[5] \mid \Theta[1])$, or odf_magnitudes[5]/odf_magnitudes[:10].sum(). You can directly calculate the probability of this three-hop turning angle sequence by
python
prob_sigma3 = area3 * odf_magnitudes[7] * odf_magnitudes[1]/odf_magnitudes[3:13].sum() \
* odf_magnitudes[5]/odf_magnitudes[:10].sum()
N-hop turning angle sequences
This process continues until there are no more compatible prior hops in center voxel $u$. Once all possibilities are exhausted the entire sum is written as a Fortran function that takes the odf_magnitudes array is its input. This Fortran code is compiled, a numpy extension is built with f2py, then the function is called from inside the mittens.MITTENS class.
A complete example
The code below is not the code used within MITTENS, but is useful to show how the algorithm would operate in 2D on up to three-step turning angle sequences. No probabilities are actually calculated here. One-hop areas are outlined in red, two-hop areas are blue and three-hop areas are green.
End of explanation
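If numeric probabilities were wanted in addition to the plots, a hedged sketch of the per-sequence term described earlier (odf_magnitudes is assumed rather than defined in this notebook; compatible_vertex_indices comes from the cell above):
# Hypothetical helper: probability contribution of one turning angle sequence
def sequence_probability(seq, volume, odf_magnitudes):
    p = volume * odf_magnitudes[seq[0]]                    # first hop is unconstrained
    for prev, nxt in zip(seq[:-1], seq[1:]):
        allowed = compatible_vertex_indices[prev]          # directions compatible with the previous hop
        p *= odf_magnitudes[nxt] / odf_magnitudes[allowed].sum()
    return p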
solve_for_neighbor([1,0],[2,1])
Explanation: Right neighbor
End of explanation
solve_for_neighbor([1,1],[2,2])
Explanation: Right Anterior Neighbor
End of explanation
solve_for_neighbor([0,1],[1,2])
Explanation: Anterior Neighbor
End of explanation
solve_for_neighbor([-1,1],[0,2])
Explanation: Left Anterior Neighbor
End of explanation
solve_for_neighbor([-1,0],[0,1])
Explanation: Left Neighbor
End of explanation
solve_for_neighbor([-1,-1],[0,0])
Explanation: Left Posterior Neighbor
End of explanation
solve_for_neighbor([0,-1],[1,0])
Explanation: Posterior Neighbor
End of explanation
solve_for_neighbor([1,-1],[2,0])
Explanation: Right Posterior Neighbor
End of explanation |
15,511 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Astronomical Spectroscopy
To generate publication images change Matplotlib backend to nbagg
Step1: Spectral Lines
Spectral lines can be used to identify the chemical
composition of stars. If the light from a star is separated
with a prism its spectrum of colours is crossed with
discrete lines. This can be also visualized as flux
of particular wavelengths. Flux is the total amount of
energy that crosses a unit area per unit time.
There are two types of spectral lines
Step2: Continuum Normalization
Step3: LAMOST versus Ondřejov
Cross matched spectum of BT CMi. | Python Code:
%matplotlib inline
import numpy as np
import astropy.analytic_functions
import astropy.io.fits
import matplotlib.pyplot as plt
wavelens = np.linspace(100, 30000, num=1000)
temperature = np.array([5000, 4000, 3000]).reshape(3, 1)
with np.errstate(all='ignore'):
flux_lam = astropy.analytic_functions.blackbody_lambda(
wavelens,
temperature
)
for flux, temp in zip(flux_lam, temperature.ravel()):
plt.plot(wavelens, flux, label='{} K'.format(temp))
plt.legend()
plt.xlabel('wavelength (Angstrom)')
plt.ylabel('flux')
plt.title('blackbody radiation graph')
plt.grid()
Explanation: Astronomical Spectroscopy
To generate publication images change Matplotlib backend to nbagg:
%matplotlib nbagg
Blackbody Radiation
A blackbody is a hypothetical object which is a perfect
absorber and emitter of radiation over all wavelengths.
The spectral flux distribution of blackbody's thermal
energy depends on its temperature.
Stars are often modelled as blackbodies in astronomy.
Their spectrum approximates the blackbody spectrum.
End of explanation
fig, (ax1, ax2) = plt.subplots(2, 1)
for ax in (ax1, ax2):
ax.set_xlabel('wavelength (Angstrom)')
ax.set_ylabel('flux')
ax.axvline(x=6562.8, color='black', label='H-alpha', linestyle='dashed')
ax.grid()
with astropy.io.fits.open('data/alpha-lyr-absorption.fits') as hdulist:
data = hdulist[1].data
ax1.plot(
data.field('spectral'),
data.field('flux')
)
ax1.set_title('absorption in spectrum of {}'.format(hdulist[1].header['OBJECT']))
with astropy.io.fits.open('data/gamma-cas-emission.fits') as hdulist:
data = hdulist[1].data
ax2.plot(
data.field('spectral'),
data.field('flux')
)
ax2.set_title('emission in spectrum of {}'.format(hdulist[1].header['OBJECT']))
fig.tight_layout()
Explanation: Spectral Lines
Spectral lines can be used to identify the chemical
composition of stars. If the light from a star is separated
with a prism its spectrum of colours is crossed with
discrete lines. This can be also visualized as flux
of particular wavelengths. Flux is the total amount of
energy that crosses a unit area per unit time.
There are two types of spectral lines:
emission and
absorption lines.
End of explanation
fig, (ax1, ax2) = plt.subplots(2, 1)
for ax in (ax1, ax2):
ax.set_xlabel('wavelength (Angstrom)')
ax.set_ylabel('flux')
ax.grid()
with astropy.io.fits.open('data/bt-cmi-lamost.fits') as hdulist:
header = hdulist[0].header
start = header['CRVAL1']
delta = header['CD1_1']
pix = header['CRPIX1']
waves = np.array([10 ** (start + (i - pix + 1) * delta) for i in range(header['NAXIS1'])])
ax1.plot(waves, hdulist[0].data[0])
ax1.set_title('original spectrum')
ax2.plot(waves, hdulist[0].data[2])
ax2.set_title('spectrum with normalized continuum')
fig.tight_layout()
Explanation: Continuum Normalization
End of explanation
fig, (ax1, ax2) = plt.subplots(2, 1)
for ax in (ax1, ax2):
ax.set_xlabel('wavelength (Angstrom)')
ax.set_ylabel('flux')
ax.grid()
with astropy.io.fits.open('data/bt-cmi-ondrejov.fits') as hdulist:
header = hdulist[1].header
data = hdulist[1].data
ax1.plot(data.field('spectral'), data.field('flux'))
ax1.set_title('spectrum of {} from Ondřejov'.format(header['OBJECT']))
with astropy.io.fits.open('data/bt-cmi-lamost.fits') as hdulist:
# waves from previous code cell
ax2.plot(waves, hdulist[0].data[2])
ax2.set_title('spectrum of BT CMi from LAMOST')
fig.tight_layout()
Explanation: LAMOST versus Ondřejov
Cross matched spectrum of BT CMi.
End of explanation |
15,512 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this exercise, you will leverage what you've learned to tune a machine learning model with cross-validation.
Setup
The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
Step1: You will work with the Housing Prices Competition for Kaggle Learn Users from the previous exercise.
Run the next code cell without changes to load the training and test data in X and X_test. For simplicity, we drop categorical variables.
Step2: Use the next code cell to print the first several rows of the data.
Step3: So far, you've learned how to build pipelines with scikit-learn. For instance, the pipeline below will use SimpleImputer() to replace missing values in the data, before using RandomForestRegressor() to train a random forest model to make predictions. We set the number of trees in the random forest model with the n_estimators parameter, and setting random_state ensures reproducibility.
Step4: You have also learned how to use pipelines in cross-validation. The code below uses the cross_val_score() function to obtain the mean absolute error (MAE), averaged across five different folds. Recall we set the number of folds with the cv parameter.
Step6: Step 1
Step7: Step 2
Step8: Use the next cell to visualize your results from Step 2. Run the code without changes.
Step9: Step 3 | Python Code:
# Set up code checking
import os
if not os.path.exists("../input/train.csv"):
os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv")
os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex5 import *
print("Setup Complete")
Explanation: In this exercise, you will leverage what you've learned to tune a machine learning model with cross-validation.
Setup
The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
End of explanation
import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
train_data = pd.read_csv('../input/train.csv', index_col='Id')
test_data = pd.read_csv('../input/test.csv', index_col='Id')
# Remove rows with missing target, separate target from predictors
train_data.dropna(axis=0, subset=['SalePrice'], inplace=True)
y = train_data.SalePrice
train_data.drop(['SalePrice'], axis=1, inplace=True)
# Select numeric columns only
numeric_cols = [cname for cname in train_data.columns if train_data[cname].dtype in ['int64', 'float64']]
X = train_data[numeric_cols].copy()
X_test = test_data[numeric_cols].copy()
Explanation: You will work with the Housing Prices Competition for Kaggle Learn Users from the previous exercise.
Run the next code cell without changes to load the training and test data in X and X_test. For simplicity, we drop categorical variables.
End of explanation
X.head()
Explanation: Use the next code cell to print the first several rows of the data.
End of explanation
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
my_pipeline = Pipeline(steps=[
('preprocessor', SimpleImputer()),
('model', RandomForestRegressor(n_estimators=50, random_state=0))
])
Explanation: So far, you've learned how to build pipelines with scikit-learn. For instance, the pipeline below will use SimpleImputer() to replace missing values in the data, before using RandomForestRegressor() to train a random forest model to make predictions. We set the number of trees in the random forest model with the n_estimators parameter, and setting random_state ensures reproducibility.
End of explanation
from sklearn.model_selection import cross_val_score
# Multiply by -1 since sklearn calculates *negative* MAE
scores = -1 * cross_val_score(my_pipeline, X, y,
cv=5,
scoring='neg_mean_absolute_error')
print("Average MAE score:", scores.mean())
Explanation: You have also learned how to use pipelines in cross-validation. The code below uses the cross_val_score() function to obtain the mean absolute error (MAE), averaged across five different folds. Recall we set the number of folds with the cv parameter.
End of explanation
def get_score(n_estimators):
Return the average MAE over 3 CV folds of random forest model.
Keyword argument:
n_estimators -- the number of trees in the forest
# Replace this body with your own code
pass
# Check your answer
step_1.check()
#%%RM_IF(PROD)%%
def get_score(n_estimators):
my_pipeline = Pipeline(steps=[
('preprocessor', SimpleImputer()),
('model', RandomForestRegressor(n_estimators, random_state=0))
])
scores = -1 * cross_val_score(my_pipeline, X, y,
cv=3,
scoring='neg_mean_absolute_error')
return scores.mean()
step_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_1.hint()
#_COMMENT_IF(PROD)_
step_1.solution()
Explanation: Step 1: Write a useful function
In this exercise, you'll use cross-validation to select parameters for a machine learning model.
Begin by writing a function get_score() that reports the average (over three cross-validation folds) MAE of a machine learning pipeline that uses:
- the data in X and y to create folds,
- SimpleImputer() (with all parameters left as default) to replace missing values, and
- RandomForestRegressor() (with random_state=0) to fit a random forest model.
The n_estimators parameter supplied to get_score() is used when setting the number of trees in the random forest model.
End of explanation
results = ____ # Your code here
# Check your answer
step_2.check()
#%%RM_IF(PROD)%%
results = {}
for i in range(1,9):
results[50*i] = get_score(50*i)
step_2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_2.hint()
#_COMMENT_IF(PROD)_
step_2.solution()
Explanation: Step 2: Test different parameter values
Now, you will use the function that you defined in Step 1 to evaluate the model performance corresponding to eight different values for the number of trees in the random forest: 50, 100, 150, ..., 300, 350, 400.
Store your results in a Python dictionary results, where results[i] is the average MAE returned by get_score(i).
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(list(results.keys()), list(results.values()))
plt.show()
Explanation: Use the next cell to visualize your results from Step 2. Run the code without changes.
End of explanation
n_estimators_best = ____
# Check your answer
step_3.check()
#%%RM_IF(PROD)%%
n_estimators_best = min(results, key=results.get)
step_3.assert_check_passed()
#%%RM_IF(PROD)%%
n_estimators_best = 200
step_3.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_3.hint()
#_COMMENT_IF(PROD)_
step_3.solution()
Explanation: Step 3: Find the best parameter value
Given the results, which value for n_estimators seems best for the random forest model? Use your answer to set the value of n_estimators_best.
End of explanation |
15,513 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Datasets
TFDS provides a collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks.
It handles downloading and preparing the data deterministically and constructing a tf.data.Dataset (or np.array).
Note
Step1: Find available datasets
All dataset builders are subclasses of tfds.core.DatasetBuilder. To get the list of available builders, use tfds.list_builders() or look at the catalog.
Step2: Loading a dataset
tfds.load
The easiest way to load a dataset is tfds.load.
It downloads the data and saves it as tfrecord files.
It loads the tfrecord files and creates a tf.data.Dataset.
Step3: Some common arguments
Step4: tfds build CLI
If you want to generate a specific dataset, you can use the tfds command line. For example
Step5: To find the dict key names and structure, see the dataset documentation in the catalog (e.g.
Step6: As numpy (tfds.as_numpy)
Use tfds.as_numpy to convert:
tf.Tensor -> np.array
tf.data.Dataset -> Iterator[Tree[np.array]] (Tree can be an arbitrarily nested Dict or Tuple)
Step7: As batched tf.Tensor (batch_size=-1)
Use batch_size=-1 to load the full dataset in a single batch.
It can be combined with as_supervised=True and tfds.as_numpy to get the data as (np.array, np.array).
Step8: The dataset fits in memory, and all examples have the same shape.
Benchmark your datasets
To benchmark a dataset, simply call tfds.benchmark on any iterable (e.g.
Step9: Do not forget to normalize the results per batch size with the batch_size= kwarg.
In the summary, the first warmup batch is separated from the other ones to capture the tf.data.Dataset extra setup time (e.g.
Step10: tfds.show_examples
tfds.show_examples returns a matplotlib.figure.Figure (only image datasets are supported for now).
Step11: Access the dataset metadata
All builders include a tfds.core.DatasetInfo object containing the dataset metadata.
It can be accessed through:
The tfds.load API
Step12: The tfds.core.DatasetBuilder API
Step13: The dataset info contains additional information about the dataset (version, citation, homepage, description, ...).
Step14: Features metadata (label names, image shape, ...)
Access the tfds.features.FeatureDict.
Step15: Number of classes, label names
Step16: Shapes, dtypes
Step17: Split metadata (e.g.
Step18: Available splits
Step19: Get info on an individual split.
Step20: It also works with the subsplit API. | Python Code:
!pip install -q tfds-nightly tensorflow matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
Explanation: TensorFlow Datasets
TFDS provides a collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks.
It handles downloading and preparing the data deterministically and constructing a tf.data.Dataset (or np.array).
Note: Do not confuse TFDS (this library) with tf.data (TensorFlow API to build efficient data pipelines). TFDS is a high level wrapper around tf.data. If you're not familiar with this API, we encourage you to read the official tf.data guide first.
Copyright 2018 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/datasets/overview"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/datasets/overview.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/datasets/overview.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서 소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/datasets/overview.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드</a></td>
</table>
Installation
TFDS exists in two packages:
pip install tensorflow-datasets: the stable version, released every few months.
pip install tfds-nightly: released every day, contains the last versions of the datasets.
This colab uses tfds-nightly.
End of explanation
tfds.list_builders()
Explanation: Find available datasets
All dataset builders are subclasses of tfds.core.DatasetBuilder. To get the list of available builders, use tfds.list_builders() or look at the catalog.
End of explanation
ds = tfds.load('mnist', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)
print(ds)
Explanation: Loading a dataset
tfds.load
The easiest way to load a dataset is tfds.load.
It downloads the data and saves it as tfrecord files.
It loads the tfrecord files and creates a tf.data.Dataset.
End of explanation
builder = tfds.builder('mnist')
# 1. Create the tfrecord files (no-op if already exists)
builder.download_and_prepare()
# 2. Load the `tf.data.Dataset`
ds = builder.as_dataset(split='train', shuffle_files=True)
print(ds)
Explanation: Some common arguments:
split=: which split(s) to read (e.g. 'train', ['train', 'test'], 'train[80%:]', ...). See the split API guide.
shuffle_files=: controls whether to shuffle the files between each epoch (TFDS stores big datasets in multiple smaller files).
data_dir=: location where the dataset is saved (defaults to ~/tensorflow_datasets/)
with_info=True: returns the tfds.core.DatasetInfo containing the dataset metadata
download=False: disables the download
tfds.builder
tfds.load is a thin wrapper around tfds.core.DatasetBuilder. You can get the same output using the tfds.core.DatasetBuilder API.
End of explanation
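A hedged example combining several of the arguments listed above (the dataset name and split value are illustrative only):
# Illustrative only: common tfds.load arguments used together
ds_train, ds_info = tfds.load(
    'mnist',
    split='train[:80%]',
    shuffle_files=True,
    with_info=True,
)
print(ds_info.name)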
ds = tfds.load('mnist', split='train')
ds = ds.take(1) # Only take a single example
for example in ds: # example is `{'image': tf.Tensor, 'label': tf.Tensor}`
print(list(example.keys()))
image = example["image"]
label = example["label"]
print(image.shape, label)
Explanation: tfds build CLI
If you want to generate a specific dataset, you can use the tfds command line. For example:
sh
tfds build mnist
See the documentation for the list of available flags.
Iterate over a dataset
As dict
By default, the tf.data.Dataset object contains a dict of tf.Tensor.
End of explanation
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in ds: # example is (image, label)
print(image.shape, label)
Explanation: To find the dict key names and structure, see the dataset documentation in the catalog (e.g. the mnist documentation).
As tuple (as_supervised=True)
By using as_supervised=True, you get a tuple (features, label) instead for supervised datasets.
End of explanation
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in tfds.as_numpy(ds):
print(type(image), type(label), label)
Explanation: As numpy (tfds.as_numpy)
Use tfds.as_numpy to convert:
tf.Tensor -> np.array
tf.data.Dataset -> Iterator[Tree[np.array]] (Tree can be an arbitrarily nested Dict or Tuple)
End of explanation
image, label = tfds.as_numpy(tfds.load(
'mnist',
split='test',
batch_size=-1,
as_supervised=True,
))
print(type(image), image.shape)
Explanation: As batched tf.Tensor (batch_size=-1)
By using batch_size=-1, you can load the full dataset in a single batch.
This can be combined with as_supervised=True and tfds.as_numpy to get the data as (np.array, np.array).
End of explanation
ds = tfds.load('mnist', split='train')
ds = ds.batch(32).prefetch(1)
tfds.benchmark(ds, batch_size=32)
tfds.benchmark(ds, batch_size=32) # Second epoch much faster due to auto-caching
Explanation: The dataset fits in memory, and all examples have the same shape.
Benchmark your datasets
To benchmark a dataset, simply call tfds.benchmark on any iterable (e.g. tf.data.Dataset, tfds.as_numpy, ...).
End of explanation
ds, info = tfds.load('mnist', split='train', with_info=True)
tfds.as_dataframe(ds.take(4), info)
Explanation: Don't forget to normalize the results per batch size with the batch_size= kwarg.
In the summary, the first warmup batch is separated from the other ones to capture the tf.data.Dataset extra setup time (e.g. buffer initialization, ...).
Notice how the second iteration is much faster due to TFDS auto-caching.
tfds.benchmark returns a tfds.core.BenchmarkResult which can be inspected for further analysis.
Build an end-to-end pipeline
To go further, you can look at:
The end-to-end Keras example, which shows a full training pipeline (with batching, shuffling, ...).
The performance guide to improve the speed of your pipelines (tip: use tfds.benchmark(ds) to benchmark your datasets).
Visualization
tfds.as_dataframe
tf.data.Dataset objects can be converted to a pandas.DataFrame with tfds.as_dataframe, to be visualized in Colab.
Add tfds.core.DatasetInfo as the second argument of tfds.as_dataframe to visualize images, audio, text, videos, ...
Use ds.take(x) to only display the first x examples. pandas.DataFrame loads the full dataset in memory and can be very expensive to display.
End of explanation
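def normalize_img(image, label):
    # Scale uint8 pixels to [0, 1] floats; an illustrative preprocessing choice.
    return tf.cast(image, tf.float32) / 255.0, label

train_ds = (
    tfds.load('mnist', split='train', as_supervised=True, shuffle_files=True)
    .map(normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
    .cache()
    .shuffle(10_000)
    .batch(128)
    .prefetch(tf.data.AUTOTUNE)
)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)
model.fit(train_ds, epochs=1)
Explanation: A hedged sketch of such an end-to-end pipeline, in the spirit of the Keras example and performance guide mentioned above; it is not taken from the original notebook, and the model architecture, batch size and other hyperparameters are illustrative choices only. tf.data.AUTOTUNE assumes a recent TF 2.x release; on older versions use tf.data.experimental.AUTOTUNE.
End of explanation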
ds, info = tfds.load('mnist', split='train', with_info=True)
fig = tfds.show_examples(ds, info)
Explanation: tfds.show_examples
tfds.show_examples returns a matplotlib.figure.Figure (only image datasets are supported for now).
End of explanation
ds, info = tfds.load('mnist', with_info=True)
Explanation: Access the dataset metadata
All builders include a tfds.core.DatasetInfo object containing the dataset metadata.
It can be accessed through:
The tfds.load API:
End of explanation
builder = tfds.builder('mnist')
info = builder.info
Explanation: tfds.core.DatasetBuilder API:
End of explanation
print(info)
Explanation: The dataset info contains additional information about the dataset (version, citation, homepage, description, ...).
End of explanation
info.features
Explanation: Features metadata (label names, image shape, ...)
Access the tfds.features.FeatureDict:
End of explanation
print(info.features["label"].num_classes)
print(info.features["label"].names)
print(info.features["label"].int2str(7)) # Human readable version (8 -> 'cat')
print(info.features["label"].str2int('7'))
Explanation: Number of classes, label names:
End of explanation
print(info.features.shape)
print(info.features.dtype)
print(info.features['image'].shape)
print(info.features['image'].dtype)
Explanation: Shapes, dtypes:
End of explanation
print(info.splits)
Explanation: Split metadata (e.g. split names, number of examples, ...)
Access the tfds.core.SplitDict:
End of explanation
print(list(info.splits.keys()))
Explanation: Available splits:
End of explanation
print(info.splits['train'].num_examples)
print(info.splits['train'].filenames)
print(info.splits['train'].num_shards)
Explanation: Get info on an individual split:
End of explanation
print(info.splits['train[15%:75%]'].num_examples)
print(info.splits['train[15%:75%]'].file_instructions)
Explanation: It also works with the subsplit API:
End of explanation |
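# Illustrative only: the same slicing strings accepted by info.splits can be
# passed directly to tfds.load to materialize that sub-split.
ds_middle = tfds.load('mnist', split='train[15%:75%]')
print(ds_middle)
Explanation: An added example, not in the original notebook, showing that the sub-split string used above can also be handed straight to tfds.load.
End of explanation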
15,514 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-vhr4', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-CM2-VHR4
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
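# Illustrative only -- the name and e-mail below are placeholders, not the real
# authors of this document. A filled-in call would look like:
# DOC.set_author("Jane Doe", "jane.doe@example.org")
Explanation: An added illustration of the call pattern; the values are hypothetical placeholders and must be replaced with the actual document authors before publishing.
End of explanation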
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
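# Illustrative only -- this is NOT a statement about CMCC-CM2-VHR4; pick the
# choice(s) from the list above that match the actual model, e.g.:
# DOC.set_value("whole atmosphere")
Explanation: An added example of how an ENUM property is filled in with one of the listed valid choices; the value shown is a hypothetical placeholder, not documented model metadata.
End of explanation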
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
15,515 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiment for the paper "Features for discourse-new referent detection in Russian
Replication of CICLing-2016 paper (Toldova and Ionov 2016)
To reproduce this experiment you will need
Step1: Reading the texts from GS and matching them to actual texts
Loading chains and GS
Step2: Loading special lists
Special lists load from the directory stored in lists_dir
Step3: Building indices and dictionaries
Building additional indices (of all words and all groups)
Step4: Building sets of adjectives and pronouns for feature selection
Step5: Creating a classifier
Step6: Training and testing
Step7: Baseline
Baseline condition
Step8: String features
Step9: String + Struct features
Step10: All features
Step11: Calculating feature importances
Step12: Additional actions
Counting feature importances for bag-of-adjectives classifier
Step13: Getting feature distributions | Python Code:
%cd '/Users/max/Projects/Coreference/'
%cd 'rucoref'
from anaphoralib.corpora import rueval
from anaphoralib.tagsets import multeast
from anaphoralib.experiments.base import BaseClassifier
from anaphoralib import utils
from anaphoralib.experiments import utils as exp_utils
%cd '..'
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import BorderlineSMOTE
import numpy as np
%matplotlib inline
lists_dir = 'CICLing-2016/wordlists'
texts_dir = 'Corpus-2015/Tokens.txt'
gs_dir = 'Corpus-2015/Groups.txt'
tagset = multeast
random_state = 42
Explanation: Experiment for the paper "Features for discourse-new referent detection in Russian
Replication of CICLing-2016 paper (Toldova and Ionov 2016)
To reproduce this experiment you will need:
1. RuCor corpus (from 2015-10-29)
2. Python modules:
* scikit-learn (v. 0.22.1)
* imbalanced-learn (v. 0.6.2)
* matplotlib (v. 3.1.3)
2. anaphoralib Python module
Since anaphoralib is in an early stage of development, there is no way to install it yet, so in order to import it, you should cd to the folder with the module. Paths to the corpus should be updated accordingly.
End of explanation
rucoref = rueval.RuCorefCorpus(multeast, rueval)
exp_utils.load_corpus(rucoref, texts_dir, gs_dir)
rucoref.groups[0][:30]
rucoref.print_stats()
rucoref.create_indices()
Explanation: Reading the texts from GS and matching them to actual texts
Loading chains and GS
End of explanation
def load_list(filename):
data = set()
with open(filename, encoding='utf-8') as inp_file:
for line in inp_file:
data.add(line.strip('\r\n'))
return data
import os
wordlists = {}
for filename in os.listdir(lists_dir):
wordlists[filename.replace('.txt', '')] = load_list(os.path.join(lists_dir, filename))
print(wordlists.keys())
Explanation: Loading special lists
Special lists load from the directory stored in lists_dir
End of explanation
import collections
word_index = []
group_index = []
for i, text in enumerate(rucoref.texts):
word_index.append(collections.defaultdict(set))
group_index.append(collections.defaultdict(set))
for word in text:
word_index[-1]['_'.join(word.lemma)].add(word.offset)
for group in rucoref.groups[i]:
for g in group.iter_groups():
group_index[-1]['_'.join(g.lemma)].add(g.offset)
print('\n'.join(list(group_index[0].keys())[:30]))
Explanation: Building indices and dictionaries
Building additional indices (of all words and all groups):
End of explanation
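# Illustrative sanity check of the indices built above (not part of the
# original experiment): how many group lemmas in the first text repeat?
repeated = sum(1 for offsets in group_index[0].values() if len(offsets) > 1)
print('{} of {} group lemmas in text 0 occur more than once'.format(repeated, len(group_index[0])))
Explanation: A small added check to confirm the indices behave as expected; it only reads the group_index structure defined above.
End of explanation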
adjectives = set()
for text in rucoref.texts:
for word in text:
if tagset.pos_filters['adj'](word) and (len(word.tag) < 7 or word.tag[6] == 'f'):
adjectives.add('_'.join(word.lemma))
adjectives = list(adjectives)
adjectives
pronouns = set()
for text in rucoref.texts:
for word in text:
if tagset.pos_filters['pronoun'](word):
pronouns.add('_'.join(word.lemma))
pronouns = list(pronouns)
pronouns
Explanation: Building sets of adjectives and pronouns for feature selection:
End of explanation
import re
class FirstMentionClassifier(BaseClassifier):
def __init__(self):
super(FirstMentionClassifier, self).__init__()
self.feat_zones_ = ('struct', 'string', 'lists')
self.stats = {'str_matches_before', 'head_matches_before', 'n_adj', 'len_np'}
self.rx_lat = re.compile('[A-Za-z]')
self.pronouns = {u"его", u"ее", u"её", u"ей", u"ему", u"ею", u"им", u"ими", u"их", u"которая",
u"которого", u"которое", u"которой", u"котором", u"которому", u"которую", u"которые",
u"который", u"которым", u"которыми", u"которых", u"него", u"нее", u"неё", u"ней", u"нем",
u"нём", u"нему", u"нею", u"ним", u"ними", u"них", u"он", u"она", u"они", u"оно", u"свое",
u"своё", u"своего", u"своей", u"своем", u"своём", u"своему", u"своею", u"свой", u"свои",
u"своим", u"своими", u"своих", u"свою", u"своя", u"себе", u"себя", u"собой", u"собою"}
self.clear_stats()
def get_feature_vector(self, corpus, group, i_text, save_feature_names=False):
if save_feature_names:
self.feature_names_ = []
vctr = []
group_lemma = '_'.join(group.lemma)
group_occurrences = group_index[i_text][group_lemma] if group_lemma in group_index[i_text] else []
head_index = group.head
head_lemma = group.lemma[group.head]
head_occurrences = word_index[i_text][head_lemma] if head_lemma in word_index[i_text] else []
head_offset = group.head_offset
group_words = group.words if group.type != 'word' else [group]
str_matches_before = sum(1 for occ in group_occurrences if occ < group.offset)
head_matches_before = sum(1 for occ in head_occurrences if occ < group.offset)
adj_in_group = [word for word in group_words[:head_index+1] if tagset.pos_filters['adj'](word)]
self.stats['str_matches_before'].append(str_matches_before)
self.stats['head_matches_before'].append(head_matches_before)
self.stats['n_adj'].append("{}: {}".format(len(adj_in_group), group_lemma))
self.stats['len_np'].append("{}: {}".format(len(group_words), group_lemma))
if 'string' in self.feat_zones_:
vctr.append(('str_match_before=0', str_matches_before == 0))
vctr.append(('str_match_before<2', str_matches_before < 2))
vctr.append(('str_match_before<3', str_matches_before < 3))
vctr.append(('str_match_before>2', str_matches_before > 2))
vctr.append(('head_match_before=0', head_matches_before == 0))
vctr.append(('head_match_before<2', head_matches_before < 2))
vctr.append(('head_match_before<3', head_matches_before < 3))
vctr.append(('head_match_before>2', head_matches_before > 2))
vctr.append(('uppercase', all(word.isupper() and len(word) > 1 for word in group.wordform)))
#vctr.append(('capitalized', any(word[0].isupper() and len(group.wordform) > 1 for word in group.wordform[1:])))
vctr.append(('latin', any(self.rx_lat.search(word) for word in group.wordform)))
vctr.append(('is_proper_noun', corpus.tagset.pos_filters['properNoun'](group)))
#vctr.append(('is_pronoun', group.lemma[0] in pronouns))
vctr.append(('is_pronoun', group.wordform[0] in self.pronouns))
#vctr.append(('is_pronoun', multeast.pos_filters['pronoun'](group) or group.wordform[0] in pronouns))
self.n_pronouns += 1
if 'struct' in self.feat_zones_:
i_word = corpus.words_index[i_text][group.offset]
left_word = corpus.texts[i_text][i_word - 1] if i_word > 0 else None
right_word = corpus.texts[i_text][i_word + len(group.wordform) + 1] \
if i_word + len(group.wordform) + 1 < len(corpus.texts[i_text]) else None
vctr.append(('conj', bool((left_word and corpus.tagset.pos_filters['conj'](left_word))
or (right_word and corpus.tagset.pos_filters['conj'](right_word)))))
vctr.append(('len_np<2', len(group.tags) < 2))
vctr.append(('len_np>2', len(group.tags) > 2))
vctr.append(('n_adj=0', len(adj_in_group) == 0))
vctr.append(('n_adj>1', len(adj_in_group) > 1))
vctr.append(('n_adj>2', len(adj_in_group) > 2))
if 'lists' in self.feat_zones_:
for l in wordlists:
vctr.append(('in_list_{}'.format(l), any(lemma in wordlists[l] for lemma in group.lemma[:head_index+1])))
if save_feature_names:
self.feature_names_ = [feat[0] for feat in vctr]
return [int(feat[1]) for feat in vctr]
def prepare_data(self, corpus, random_state=42, test_size=0.3, feature_zones=None):
if feature_zones:
self.feat_zones_ = feature_zones
self.n_pronouns = 0
self.stats['class'] = []
self.groups = []
self.x_data = []
self.y_data = []
self.cur_data_ = 'Binary, filtered singletons'
self.class_names_ = ('non-first', 'first')
save_features = True
for i_text, text in enumerate(corpus.texts):
for i, mention in enumerate(corpus.mentions[i_text]):
if i not in rucoref.gs_index[i_text]:
continue
cur_gs_group_id = corpus.gs_index[i_text][i]
cur_chain = corpus.gs[i_text]['chains'][corpus.chains_index[i_text][cur_gs_group_id]]
self.y_data.append(self.class_names_.index('first') if cur_gs_group_id == cur_chain[0]
else self.class_names_.index('non-first'))
group = corpus.heads_index[i_text][mention.offset]
self.x_data.append(self.get_feature_vector(corpus, group, i_text, save_features))
self.groups.append(group)
self.stats['class'].append(self.class_names_[self.y_data[-1]])
save_features = False
pronoun_index = self.feature_names_.index('is_pronoun')
if self.x_data[-1][pronoun_index]:
self.x_data.pop()
self.y_data.pop()
self.groups.pop()
for key in self.stats:
self.stats[key].pop()
continue
del self.x_data[-1][pronoun_index]
super(FirstMentionClassifier, self).prepare_data(corpus, random_state, test_size)
del self.feature_names_[pronoun_index]
class_numbers = [sum(1 for item in self.y_data if item == cur_class) for cur_class in range(len(self.class_names_))]
self.ratio = float(min(class_numbers) / float(max(class_numbers)))
Explanation: Creating a classifier
End of explanation
first_mention_clf = FirstMentionClassifier()
first_mention_clf.prepare_data(rucoref, random_state=random_state, test_size=0.3)
first_mention_clf.stats.keys()
Explanation: Training and testing
End of explanation
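# Illustrative only: inspect the class balance gathered by prepare_data
# before training; not part of the original experiment.
import collections
print(collections.Counter(first_mention_clf.stats['class']))
print('minority/majority ratio: {:.3f}'.format(first_mention_clf.ratio))
Explanation: An added look at the class distribution (and the ratio later used for SMOTE oversampling); it relies only on attributes that prepare_data already fills in.
End of explanation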
def baseline_predict(data):
y_pred = np.zeros(len(data))
for i, row in enumerate(data):
y_pred[i] = row[0] == 1
return y_pred
first_mention_clf.test(y_pred=baseline_predict(first_mention_clf.x_data_test), test_name='baseline')
Explanation: Baseline
Baseline condition: NP is a first mention if there is no such exact string in the text before
End of explanation
first_mention_clf.prepare_data(rucoref, random_state=random_state, feature_zones=('string',))
clf = RandomForestClassifier(n_estimators=500, random_state=random_state)
sampler = BorderlineSMOTE(sampling_strategy=first_mention_clf.ratio, kind='borderline-1', random_state=random_state)
first_mention_clf.fit(clf, sampler)
first_mention_clf.test(test_name='string features')
first_mention_clf.print_stats()
Explanation: String features
End of explanation
first_mention_clf = FirstMentionClassifier()
first_mention_clf.prepare_data(rucoref, random_state=random_state, feature_zones=('string', 'struct'))
clf = RandomForestClassifier(n_estimators=500, random_state=random_state)
sampler = BorderlineSMOTE(sampling_strategy=first_mention_clf.ratio, kind='borderline-1', random_state=random_state)
first_mention_clf.fit(clf, sampler)
first_mention_clf.test(test_name='string+struct features')
Explanation: String + Struct features
End of explanation
first_mention_clf = FirstMentionClassifier()
first_mention_clf.prepare_data(rucoref, random_state=random_state, feature_zones=('string', 'struct', 'lists'))
clf = RandomForestClassifier(n_estimators=500, random_state=random_state)
sampler = BorderlineSMOTE(sampling_strategy=first_mention_clf.ratio, kind='borderline-1', random_state=random_state)
first_mention_clf.fit(clf, sampler)
first_mention_clf.test(test_name='all features')
Explanation: All features
End of explanation
first_mention_clf = FirstMentionClassifier()
first_mention_clf.prepare_data(rucoref, random_state=random_state, feature_zones=('string', 'struct', 'lists'))
regr = LogisticRegression(random_state=random_state, max_iter=250)
sampler = BorderlineSMOTE(sampling_strategy=first_mention_clf.ratio, kind='borderline-1', random_state=random_state)
first_mention_clf.fit(regr, sampler)
for i, feat_name in enumerate(first_mention_clf.feature_names_):
print('{}: {:.4f}'.format(feat_name, regr.coef_[0,i]))
Explanation: Calculating feature importances
End of explanation
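# Added illustration (not in the original notebook): rank the logistic
# regression features by the magnitude of their coefficients.
coef_by_feat = sorted(zip(first_mention_clf.feature_names_, regr.coef_[0]),
                      key=lambda p: abs(p[1]), reverse=True)
for feat_name, coef in coef_by_feat:
    print('{}: {:.4f}'.format(feat_name, coef))
Explanation: The same coefficients as above, simply re-ordered by absolute value so the most influential features are listed first.
End of explanation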
import sklearn.feature_extraction.text
adj_vectorizer = sklearn.feature_extraction.text.CountVectorizer(vocabulary=adjectives)
pron_vectorizer = sklearn.feature_extraction.text.CountVectorizer(vocabulary=pronouns)
def additional_features(data, vectorizer):
additional_features = np.zeros(shape=(len(data), len(vectorizer.vocabulary)))
for i, row in enumerate(data):
additional_features[i,:] = vectorizer.transform([u' '.join(row.lemma)]).toarray()
return additional_features
from sklearn.preprocessing import MinMaxScaler
def rank_to_dict(ranks, names, order=1):
minmax = MinMaxScaler()
ranks = minmax.fit_transform(order*np.array([ranks]).T).T[0]
ranks = map(lambda x: round(x, 4), ranks)
return dict(zip(names, ranks ))
add_data_x = additional_features(first_mention_clf.groups_train, adj_vectorizer)
adj_clf = RandomForestClassifier(random_state=random_state)
adj_clf.fit(add_data_x, first_mention_clf.y_data_train)
ranks = rank_to_dict(adj_clf.feature_importances_, adj_vectorizer.vocabulary)
for feat_name in sorted(ranks, key=lambda f: ranks[f], reverse=True):
print(feat_name, ranks[feat_name])
Explanation: Additional actions
Counting feature importances for bag-of-adjectives classifier
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
import anaphoralib.experiments.utils
first_mention_clf = FirstMentionClassifier()
first_mention_clf.prepare_data(rucoref, random_state=random_state, feature_zones=('string', 'struct', 'lists'))
feature_distributions = {}
for feat_name in first_mention_clf.stats:
feature_distributions[feat_name] = {cls: [] for cls in first_mention_clf.class_names_ + ('total',)}
for i, elem in enumerate(first_mention_clf.stats['class']):
feature_distributions[feat_name][elem].append(first_mention_clf.stats[feat_name][i])
feature_distributions[feat_name]['total'].append(first_mention_clf.stats[feat_name][i])
import os
anaphoralib.experiments.utils.latexify(columns=2)
for feat_name in feature_distributions:
if feat_name == 'class':
continue
anaphoralib.experiments.utils.plot_feature_distribution(feature_distributions[feat_name],
range(7),
first_mention_clf.class_names_,
x_label=feat_name.replace('_', '\\_'),
filename=os.path.join('CICLing-2016', feat_name))
from sklearn.model_selection import learning_curve
from sklearn.metrics import make_scorer, f1_score
from sklearn.utils import shuffle
first_mention_clf = FirstMentionClassifier()
first_mention_clf.prepare_data(rucoref, random_state=random_state, feature_zones=('string', 'struct', 'lists'))
clf = RandomForestClassifier(n_estimators=500, random_state=random_state)
shuffled_x_data, shuffled_y_data = shuffle(first_mention_clf.x_data, first_mention_clf.y_data,
random_state=random_state)
train_sizes_abs, train_scores, test_scores = learning_curve(clf,
shuffled_x_data,
shuffled_y_data,
cv=3,
scoring=make_scorer(f1_score, pos_label=1))
anaphoralib.experiments.utils.latexify(columns=2)
anaphoralib.experiments.utils.plot_learning_curve(train_sizes_abs,
train_scores, test_scores,
score_name='f1',
filename=os.path.join('CICLing-2016', 'learning_curve_plot'))
Explanation: Getting feature distributions
End of explanation |
15,516 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook
Step1: 1 - Exploring data with one dimension (time) with size > 1
Following cell downloads a dataset from eurostat. If the file is already downloaded, use the copy present on disk. Caching the file is useful to avoid downloading the dataset every time the notebook runs. Caching can speed up development and provides consistent results.
You can see the raw data here
Step2: Initialize JsonStatCollection with eurostat data and print some info about the collection.
Step3: Previous collection contains only a dataset named 'nama_gdp_c'
Step4: All dimensions of the dataset 'nama_gdp_c' are of size 1 with exception of time dimension. Let's explore the time dimension.
Step5: Get value for year 2012.
Step6: Convert the jsonstat data into a pandas dataframe.
Step7: Adding a simple plot
Step8: 2 - Exploring data with two dimensions (geo, time) with size > 1
Download or use the jsonstat file cached on disk. The cache is used to avoid internet downloads during development and to make things a bit faster.
You can see the raw data here | Python Code:
# all import here
from __future__ import print_function
import os
import pandas as pd
import jsonstat
import matplotlib as plt
%matplotlib inline
Explanation: Notebook: using jsonstat.py with eurostat api
This Jupyter notebook shows the python library jsonstat.py in action.
It shows how to explore dataset downloaded from a data provider. This notebook uses some datasets from Eurostat. Eurostat provides a rest api to download its datasets. You can find details about the api here
It is possible to use a query builder for discovering the rest api parameters. The following image shows the query builder: <img src="eurostat_query_builder_step2.png" width="50%" height="50%"/>
End of explanation
url_1 = 'http://ec.europa.eu/eurostat/wdds/rest/data/v1.1/json/en/nama_gdp_c?precision=1&geo=IT&unit=EUR_HAB&indic_na=B1GM'
file_name_1 = "eurostat-name_gpd_c-geo_IT.json"
file_path_1 = os.path.abspath(os.path.join("..", "tests", "fixtures", "www.ec.europa.eu_eurostat", file_name_1))
if os.path.exists(file_path_1):
print("using already donwloaded file {}".format(file_path_1))
else:
print("download file")
jsonstat.download(url_1, file_name_1)
file_path_1 = file_name_1
Explanation: 1 - Exploring data with one dimension (time) with size > 1
Following cell downloads a dataset from eurostat. If the file is already downloaded, use the copy present on disk. Caching the file is useful to avoid downloading the dataset every time the notebook runs. Caching can speed up development and provides consistent results.
You can see the raw data here
End of explanation
collection_1 = jsonstat.from_file(file_path_1)
collection_1
Explanation: Initialize JsonStatCollection with eurostat data and print some info about the collection.
End of explanation
nama_gdp_c_1 = collection_1.dataset('nama_gdp_c')
nama_gdp_c_1
Explanation: Previous collection contains only a dataset named 'nama_gdp_c'
End of explanation
nama_gdp_c_1.dimension('time')
Explanation: All dimensions of the dataset 'nama_gdp_c' are of size 1 with exception of time dimension. Let's explore the time dimension.
End of explanation
nama_gdp_c_1.value(time='2012')
Explanation: Get value for year 2012.
End of explanation
df_1 = nama_gdp_c_1.to_data_frame('time', content='id')
df_1.tail()
Explanation: Convert the jsonstat data into a pandas dataframe.
End of explanation
df_1 = df_1.dropna() # remove rows with NaN values
df_1.plot(grid=True, figsize=(20,5))
Explanation: Adding a simple plot
End of explanation
url_2 = 'http://ec.europa.eu/eurostat/wdds/rest/data/v1.1/json/en/nama_gdp_c?precision=1&geo=IT&geo=FR&unit=EUR_HAB&indic_na=B1GM'
file_name_2 = "eurostat-name_gpd_c-geo_IT_FR.json"
file_path_2 = os.path.abspath(os.path.join("..", "tests", "fixtures", "www.ec.europa.eu_eurostat", file_name_2))
if os.path.exists(file_path_2):
print("using alredy donwloaded file {}".format(file_path_2))
else:
print("download file and store it on disk")
jsonstat.download(url_2, file_name_2)
file_path_2 = file_name_2
collection_2 = jsonstat.from_file(file_path_2)
nama_gdp_c_2 = collection_2.dataset('nama_gdp_c')
nama_gdp_c_2
nama_gdp_c_2.dimension('geo')
nama_gdp_c_2.value(time='2012',geo='IT')
nama_gdp_c_2.value(time='2012',geo='FR')
df_2 = nama_gdp_c_2.to_table(content='id',rtype=pd.DataFrame)
df_2.tail()
df_FR_IT = df_2.dropna()[['time', 'geo', 'Value']]
df_FR_IT = df_FR_IT.pivot('time', 'geo', 'Value')
df_FR_IT.plot(grid=True, figsize=(20,5))
df_3 = nama_gdp_c_2.to_data_frame('time', content='id', blocked_dims={'geo':'FR'})
df_3 = df_3.dropna()
df_3.plot(grid=True,figsize=(20,5))
df_4 = nama_gdp_c_2.to_data_frame('time', content='id', blocked_dims={'geo':'IT'})
df_4 = df_4.dropna()
df_4.plot(grid=True,figsize=(20,5))
Explanation: 2 - Exploring data with two dimensions (geo, time) with size > 1
Download the jsonstat file, or use the copy cached on disk. The cache avoids repeated internet downloads during development and makes the notebook a bit faster.
You can see the raw data here
End of explanation |
15,517 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SVD Practice.
2018/2/12 - WNixalo
Fastai Computational Linear Algebra (2017) §2
Step1: Wait so.. the rows of a matrix $A$ are orthogonal iff $AA^T$ is diagonal? Hmm. Math.StackEx Link
Step2: Wait but that also gives True for $VV^T$. Hmmm.
2. Truncated SVD
Okay, so SVD is an exact decomposition of a matrix and allows us to pull out distinct topics from data (due to their orthonormality (orthogonality?)).
But doing so for a large data corpus is ... bad. Especially if most of the data's meaning / information relevant to us is captured by a small prominent subset. IE
Step3: The idea of T-SVD is that we want to compute an approximation to the range of $A$. The range of $A$ is the space covered by the column basis.
ie | Python Code:
from scipy.stats import ortho_group
import numpy as np
Q = ortho_group.rvs(dim=3)
B = np.random.randint(0,10,size=(3,3))
A = Q@B@Q.T
U,S,V = np.linalg.svd(A, full_matrices=False)
U
S
V
for i in range(3):
print(U[i] @ U[(i+1) % len(U)])
# wraps around
# U[0] @ U[1]
# U[1] @ U[2]
# U[2] @ U[0]
for i in range(len(U)):
print(U[:,i] @ U[:, (i+1)%len(U[0])])
Explanation: SVD Practice.
2018/2/12 - WNixalo
Fastai Computational Linear Algebra (2017) §2: Topic Modeling w NMF & SVD
facebook research: Fast Randomized SVD
1. Singular-Value Decomposition
SVD is a factorization of a real or complex matrix. It factorizes a matrix $A$ into one with orthogonal columns $V^T$, one with orthogonal rows $U$, and a diagonal matrix of singular values $Σ$ (aka $S$ or $s$ or $σ$) which contains the relative importance of each factor.
End of explanation
np.isclose(np.eye(len(U)), U @ U.T)
np.isclose(np.eye(len(V)), V.T @ V)
Explanation: Wait so.. the rows of a matrix $A$ are orthogonal iff $AA^T$ is diagonal? Hmm. Math.StackEx Link
End of explanation
from sklearn import decomposition
# ofc this is just dummy data to test it works
datavectors = np.random.randint(-1000,1000,size=(10,50))
U,S,V = decomposition.randomized_svd(datavectors, n_components=5)
U.shape, S.shape, V.shape
Explanation: Wait but that also gives True for $VV^T$. Hmmm.
2. Truncated SVD
Okay, so SVD is an exact decomposition of a matrix and allows us to pull out distinct topics from data (due to their orthonormality (orthogonality?)).
But doing so for a large data corpus is ... bad. Especially if most of the data's meaning / information relevant to us is captured by a small prominent subset. IE: prevalence of articles like a and the are likely poor indicators of any particular meaning in a piece of text since they're everywhere in English. Likewise for other types of data.
Hmm, so, if I understood correctly, the Σ/S/s/σ matrix is ordered by value max$\rightarrow$min.. but computing the SVD of a large dataset $A$ is exactly what we want to avoid using T-SVD. Okay so how?
$\rightarrow$Full SVD we're calculating the full dimension of topics -- but its handy to limit to the most important ones -- this is how SVD is used in compression.
Aha. This is where I was confused. Truncation is used with Randomization in R-SVD. The Truncated section was just introducing the concept. Got it.
So that's where, in R-SVD, we use a buffer in addition to the portion of the dataset we take for SVD.
And yay scikit-learn has R-SVD built in.
End of explanation
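The "buffer" mentioned above is exposed by scikit-learn's randomized_svd as the n_oversamples keyword (extra random test vectors on top of n_components); the value 10 below is just an illustrative choice, not a recommendation.
# same decomposition as above, with an explicit oversampling buffer
U_b, S_b, V_b = decomposition.randomized_svd(datavectors, n_components=5, n_oversamples=10)
U_b.shape, S_b.shape, V_b.shape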
# workflow w NMF is something like this
V = np.random.randint(0, 20, size=(10,10))
m,n = V.shape
d = 5 # num_topics
clsf = decomposition.NMF(n_components=d, random_state=1)
W1 = clsf.fit_transform(V)
H1 = clsf.components_
Explanation: The idea of T-SVD is that we want to compute an approximation to the range of $A$. The range of $A$ is the space covered by the column basis.
ie: Range(A) = {y: Ax = y}
that is: all $y$ you can achieve by multiplying $x$ with $A$.
Depending on your space, the bases are vectors that you can take linear combinations of to get any value in your space.
3. Details of Randomized SVD (Truncated)
Our goal is to have an algorithm to perform Truncated SVD using Randomized values from the dataset matrix. We want to use randomization to calculate the topics we're interested in, instead of calculating all of them.
Aha. So.. the way to do that, using randomization, is to have a special kind of randomization. Find a matrix $Q$ with some special properties that will allow us to pull a matrix that is a near match to our dataset matrix $A$ in the ways we want it to be. Ie: It'll have the same singular values, meaning the same importance-ordered topics.
Wow mathematics is really.. somethin.
That process:
Compute an approximation to the range of $A$. ie: we want $Q$ with $r$ orthonormal columns st:
$$A \approx QQ^TA$$
Construct $B = Q^TA,$, which is small $(r \times n)$
Compute the SVD of $B$ by standard methods (fast since $B$ is smaller than $A$), $B = SΣV^T$
Since: $$A \approx QQ^TA = Q(SΣV^T)$$ if we set $U = QS$, then we have a low-rank approximation of $A \approx UΣV^T$.
-- okay so.. confusion here. What is $S$ and $Σ$? Because I see them elsewhere taken to mean the same thing on this subject, but all of a sudden they seem to be totally different things.
-- oh, so apparently $S$ here is actually something different. $Σ$ is what's been interchangeably referred to in Hellenic/Latin letters throughout the notebook.
NOTE that $A: m \times n$ while $Q: m \times r$, so $Q$ is generally a tall, skinny matrix and therefore much smaller & easier to compute with than $A$.
Also, because $S$ & $Q$ are both orthonormal, setting $R = QS$ makes $R$ orthonormal as well.
How do we find Q (in step 1)?
General Idea: we find this special $Q$, then we do SVD on this smaller matrix $Q^TA$, and we plug that back in to have our Truncated-SVD for $A$.
And HERE is where the Random part of Randomized SVD comes in! How do we find $Q$?:
We just take a bunch of random vectors $w_i$ and look at / evaluate the subspace formed by $Aw_i$. We form a matrix $W$ with the $w_i$'s as its columns. Then we take the QR Decomposition of $AW = QR$. Then the colunms of $Q$ form an orthonormal basis for $AW$, which is the range of $A$.
Basically a QR Decomposition exists for any matrix, and is an orthonormal matrix $\times$ an upper triangular matrix.
So basically: we take $AW$, $W$ is random, get the $QR$ -- and a property of the QR-Decomposition is that $Q$ forms an orthonormal basis for $AW$ -- and $AW$ gives the range of $A$.
Since $AW$ has far more rows than columns, it turns out in practice that these columns are approximately orthonormal. It's very unlikely you'll get linearly-dependent columns when you choose random values.
Aand apparently the QR-Decomp is v.foundational to Numerical Linear Algebra.
How do we choose r?
We chose $Q$ to have $r$ orthonormal columns, and $r$ gives us the dimension of $B$.
We choose $r$ to be the number of topics we want to retrieve $+$ some buffer.
See the lesson notebook and accompanying lecture time for an implementation of Randomized SVD. NOTE that Scikit-Learn's implementation is more powerful; the notebook's version is for illustration purposes.
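A minimal NumPy sketch of the four steps above, written here for illustration only (it assumes A is a plain 2-D array and that r already includes the buffer discussed above):
def randomized_svd_sketch(A, r):
    m, n = A.shape
    W = np.random.randn(n, r)                             # random test matrix
    Q, _ = np.linalg.qr(A @ W)                            # orthonormal basis for the range of A
    B = Q.T @ A                                           # small (r x n) matrix
    S, Sigma, Vt = np.linalg.svd(B, full_matrices=False)  # exact SVD of the small matrix
    U = Q @ S                                             # so A is approximately U @ diag(Sigma) @ Vt
    return U, Sigma, Vt

U_r, Sigma_r, Vt_r = randomized_svd_sketch(np.random.randn(100, 40), r=10)
U_r.shape, Sigma_r.shape, Vt_r.shape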
4. Non-negative Matrix Factorization
Wiki
NMF is a group of algorithms in multivariate analysis and linear algebra where a matrix $V$ is factorized into (usually) two matrices $W$ & $H$, with the property that all three matrices have no negative elements.
Lecture 2 40:32
The key thing in SVD is orthogonality -- basically everything is orthogonal to each other -- the key idea in NMF is that nothing is negative. The lower-bound is zero-clamped.
NOTE your original dataset should be nonnegative if you use NMF, or else you won't be able to reconstruct it.
Idea
Rather than constraining our factors to be orthogonal, another idea would be to constrain them to be non-negative. NMF is a factorization of a non-negative dataset $V$: $$V=WH$$ into non-negative matrices $W$, $H$. Often positive factors will be more easily interpretable (and this is the reason behind NMF's popularity).
huh.. really now.?..
For example if your dataset is a matrix of faces $V$, where each column holds a vectorized face, then $W$ would be a matrix of column facial features, and $H$ a matrix of column relative importance of features in each image.
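As a quick sanity check of the V ≈ WH idea, the W1 and H1 computed in the cell above can be multiplied back together and compared with V (purely illustrative):
# mean absolute reconstruction error of the rank-d NMF approximation
V_approx = W1 @ H1
np.abs(V - V_approx).mean()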
Applications of NMF / Sklearn
NMF is a 'difficult' problem because it is unconstrained and NP-Hard
NMF looks smth like this in schematic form:
Documents Topics Topic Importance Indicators
W --------- --- -----------------
o | | | | | ||| | | | | | | | | |
r | | | | | ≈ ||| -----------------
d | | | | | |||
s --------- ---
V W H
End of explanation |
15,518 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Import the necessary packages to read in the data, plot, and create a linear regression model
Step1: 2. Read in the hanford.csv file
Step2: <img src="images/hanford_variables.png">
Step3: 3. Calculate the basic descriptive statistics on the data
Step4: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
Step5: There seems to be a highly positive correlation between both variables, as shown by the coefficient of correlation, which equals 0.92.
Step6: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
Step7: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
Step8: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10 | Python Code:
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
Explanation: 1. Import the necessary packages to read in the data, plot, and create a linear regression model
End of explanation
df = pd.read_csv("hanford.csv")
Explanation: 2. Read in the hanford.csv file
End of explanation
df.head()
Explanation: <img src="images/hanford_variables.png">
End of explanation
df.describe()
Explanation: 3. Calculate the basic descriptive statistics on the data
End of explanation
df.corr()
Explanation: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
End of explanation
df.plot(kind='scatter', x='Exposure', y='Mortality')
Explanation: There seems to be a highly positive correlation between both variables, as shown by the coefficient of correlation, which equals 0.92.
End of explanation
lm = smf.ols(formula="Mortality~Exposure",data=df).fit()
lm.params
intercept, slope = lm.params
def mortality_rate(exposure):
    # predicted mortality (cancer deaths per 100,000 person-years) from the fitted line
    return exposure * slope + intercept
mortality_rate(3)
Explanation: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
End of explanation
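As a cross-check on the hand-rolled helper above, the fitted statsmodels result can also generate predictions directly; this is just a sketch and assumes lm is the fitted model from the previous cell:
# predict with the fitted model itself instead of the manual slope/intercept formula
lm.predict(pd.DataFrame({'Exposure': [3]}))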
ax = df.plot(kind='scatter', x= 'Exposure', y='Mortality')
plt.plot(df["Exposure"],slope*df["Exposure"]+intercept,"-",color="green")
det_corr = df.corr() * df.corr()  # element-wise square of r gives the coefficient of determination r^2
det_corr
Explanation: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
End of explanation
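The fitted result also stores the coefficient of determination directly, which gives a quick check against the squared correlation matrix computed above (again assuming lm is the fitted model):
lm.rsquared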
mortality_rate(10)
Explanation: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
End of explanation |
15,519 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy를 활용한 선형대수 입문
선형대수(linear algebra)는 데이터 분석에 필요한 각종 계산을 위한 기본적인 학문이다.
데이터 분석을 하기 위해서는 실제로 수많은 숫자의 계산이 필요하다. 하나의 데이터 레코드(record)가 수십개에서 수천개의 숫자로 이루어져 있을 수도 있고 수십개에서 수백만개의 이러한 데이터 레코드를 조합하여 계산하는 과정이 필요할 수 있다.
선형대수를 사용하는 첫번째 장점은 이러한 데이터 계산의 과정을 아주 단순한 수식으로 서술할 수 있다는 점이다. 그러기 위해서는 선형대수에서 사용되는 여러가지 기호와 개념을 익혀야 한다.
데이터의 유형
선형대수에서 다루는 데이터는 크게 스칼라(scalar), 벡터(vector), 행렬(matrix), 이 세가지 이다.
간단하게 말하자면 스칼라는 숫자 하나로 이루어진 데이터이고 벡터는 여러개의 숫자로 이루어진 데이터 레코드이며 행렬은 벡터, 즉 데이터 레코드가 여러개 있는 데이터 집합이라고 볼 수 있다.
스칼라
스칼라는 하나의 숫자를 말한다. 예를 들어 어떤 붓꽃(iris) 샘플의 꽃잎의 길이를 측정하는 하나의 숫자가 나올 것이다. 이 숫자는 스칼라이다.
스칼라는 보통 $x$와 같이 알파벳 소문자로 표기하며 실수(real number)인 숫자 중의 하나이므로 실수 집합의 원소라는 의미에서 다음과 같이 표기한다.
$$ x \in \mathbb{R} $$
벡터
벡터는 복수개의 숫자가 특정 순서대로 모여 있는 것을 말한다. 사실 대부분의 데이터 분석에서 하나의 데이터 레코드는 여러개의 숫자로 이루어진 경우가 많다. 예를 들어 붓꽃의 종을 알아내기 위해 크기를 측정하게 되면 꽃잎의 길이 $x_1$ 뿐 아니라 꽃잎의 폭 $x_2$ , 꽃받침의 길이 $x_3$ , 꽃받침의 폭 $x_4$ 이라는 4개의 숫자를 측정할 수 있다. 이렇게 측정된 4개의 숫자를 하나의 쌍(tuple) $x$ 로 생각하여 다음과 같이 표기한다.
$$
x = \begin{bmatrix}
x_{1} \
x_{2} \
x_{3} \
x_{4} \
\end{bmatrix}
$$
여기에서 주의할 점은 벡터는 복수의 행(row)을 가지고 하나의 열(column)을 가지는 형태로 위에서 아래로 표기한다는 점이다.
이 때 $x$는 4개의 실수(real number)로 이루어져 있기 때문에 4차원 벡터라고 하고 다음과 같이 4차원임을 표기한다.
$$ x \in \mathbb{R}^4 $$
만약 이 데이터를 이용하여 붓꽃의 종을 결정하는 예측 문제를 풀고 있다면 이 벡터를 feature vector라고 하기도 한다.
만약 4개가 아니라 $N$개의 숫자가 모여 있는 경우의 표기는 다음과 같다.
$$
x = \begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix}
,\;\;\;\;
x \in \mathbb{R}^N
$$
NumPy를 사용하면 벡터는 1차원 ndarray 객체 혹은 열의 갯수가 1개인 2차원 ndarray 객체로 표현한다. 벡터를 처리하는 프로그램에 따라서는 두 가지 중 특정한 형태만 원하는 경우도 있을 수 있기 때문에 주의해야 한다. 예를 들어 파이썬 scikit-learn 패키지에서는 벡터를 요구하는 경우에 열의 갯수가 1개인 2차원 ndarray 객체를 선호한다.
Step1: 행렬
행렬은 복수의 차원을 가지는 데이터 레코드가 다시 여러개 있는 경우의 데이터를 합쳐서 표기한 것이다. 예를 들어 앞서 말한 붓꽃의 예에서 6개의 붓꽃에 대해 크기를 측정하였다면 4차원 붓꽃 데이터가 6개가 있다. 즉, $4 \times 6 = 24$개의 실수 숫자가 있는 것이다. 이 숫자 집합을
행렬로 나타내면 다음과 같다. 행렬은 보통 $X$와 같이 알파벳 대문자로 표기한다.
$$X =
\begin{bmatrix}
x_{1, 1} & x_{1, 2} & x_{1, 3} & x_{1, 4} \
x_{2, 1} & x_{2, 2} & x_{2, 3} & x_{2, 4} \
x_{3, 1} & x_{3, 2} & x_{3, 3} & x_{3, 4} \
x_{4, 1} & x_{4, 2} & x_{4, 3} & x_{4, 4} \
x_{5, 1} & x_{5, 2} & x_{5, 3} & x_{5, 4} \
x_{6, 1} & x_{6, 2} & x_{6, 3} & x_{6, 4} \
\end{bmatrix}
$$
행렬 안에서 원소의 위치를 표기할 때는 $x_{2, 3}$ 처럼 두 개의 숫자 쌍을 아랫 첨자(sub-script)로 붙여서 표기한다. 첫번째 숫자가 행(row)을 뜻하고 두번째 숫자가 열(column)을 뜻한다. 예를 들어 $x_{2, 3}$ 는 두번째 행(위에서 아래로 두번째), 세번째 열(왼쪽에서 오른쪽으로 세번째)의 숫자를 뜻한다.
붓꽃의 예에서는 하나의 데이터 레코드가 4차원이였다는 점을 기억하자. 따라서 이 행렬 표기에서는 하나의 행(row)이 붓꽃 하나에 대한 데이터 레코드가 된다.
하나의 데이터 레코드를 나타낼 때는 하나의 열(column)로 나타내고 복수의 데이터 레코드 집합을 나타낼 때는 하나의 데이터 레코드가 하나의 행(row)으로 표기하는 것은 일관성이 없어 보지만 데이터 분석에서 쓰는 일반적인 관례이므로 익히고 있어야 한다.
만약 이 데이터를 이용하여 붓꽃의 종을 결정하는 예측 문제를 풀고 있다면 이 행렬를 feature matrix라고 하기도 한다.
이 행렬의 크기를 수식으로 표시할 때는 행의 크기 곱하기 열의 크기의 형태로 다음과 같이 나타낸다.
$$ X \in \mathbb{R}^{4\times 6} $$
벡터도 열의 수가 1인 특수한 행렬이기 때문에 벡터의 크기를 표시할 때 행렬 표기에 맞추어 다음과 같이 쓰기도 한다.
$$ x \in \mathbb{R}^{4\times 1} $$
NumPy를 이용하여 행렬을 표기할 때는 2차원 ndarray 객체를 사용한다.
Step2: 특수한 행렬
몇가지 특수한 행렬에 대해서는 별도의 이름이 붙어있다.
행렬에서 행의 숫자와 열의 숫자가 같은 위치를 대각(diagonal)이라고 하고 대각 위치에 있지 않은 것들은 비대각(off-diagonal)이라고 한다. 모든 비대각 요소가 0인 행렬을 대각 행렬(diagonal matrix)이라고 한다.
$$ D \in \mathbb{R}^{N \times N} $$
$$
D =
\begin{bmatrix}
D_{1} & 0 & \cdots & 0 \
0 & D_{2} & \cdots & 0 \
\vdots & \vdots & \ddots & \vdots \
0 & 0 & \cdots & D_{N} \
\end{bmatrix}
$$
NumPy로 대각행렬을 생성하려면 diag 명령을 사용한다.
Step3: 대각 행렬 중에서도 모든 대각 성분의 값이 1인 대각 행렬을 단위 행렬(identity matrix)이라고 한다. 단위 행렬은 보통 알파벳 대문자 $I$로 표기하는 경우가 많다.
$$ I \in \mathbb{R}^{N \times N} $$
$$
I =
\begin{bmatrix}
1 & 0 & \cdots & 0 \
0 & 1 & \cdots & 0 \
\vdots & \vdots & \ddots & \vdots \
0 & 0 & \cdots & 1 \
\end{bmatrix}
$$
NumPy로 단위행렬을 생성하려면 identity 혹은 eye 명령을 사용한다.
Step4: 연산
행렬의 연산을 이용하면 대량의 데이터에 대한 계산을 간단한 수식으로 나타낼 수 있다. 물론 행렬에 대한 연산은 보통의 숫자 즉, 스칼라에 대한 사칙 연산과는 다른 규칙을 적용하므로 이 규칙을 외워야 한다.
전치 연산
전치(transpose) 연산은 행렬의 행과 열을 바꾸는 연산을 말한다. 벡터 기호에 $T$라는 윗첨자(super-script)를 붙어서 표기한다. 예를 들어 앞에서 보인 $4\times 6$ 차원의 행렬을 전치 연산하면 $6\times 4$ 차원의 행렬이 된다.
$$
X =
\begin{bmatrix}
x_{1, 1} & x_{1, 2} & x_{1, 3} & x_{1, 4} \
x_{2, 1} & x_{2, 2} & x_{2, 3} & x_{2, 4} \
x_{3, 1} & x_{3, 2} & x_{3, 3} & x_{3, 4} \
x_{4, 1} & x_{4, 2} & x_{4, 3} & x_{4, 4} \
x_{5, 1} & x_{5, 2} & x_{5, 3} & x_{5, 4} \
x_{6, 1} & x_{6, 2} & x_{6, 3} & x_{6, 4} \
\end{bmatrix}
\;\;\;
\rightarrow
\;\;\;
X^T =
\begin{bmatrix}
x_{1, 1} & x_{2, 1} & x_{3, 1} & x_{4, 1} & x_{5, 1} & x_{6, 1} \
x_{1, 2} & x_{2, 2} & x_{3, 2} & x_{4, 2} & x_{5, 2} & x_{6, 2} \
x_{1, 3} & x_{2, 3} & x_{3, 3} & x_{4, 3} & x_{5, 3} & x_{6, 3} \
x_{1, 4} & x_{2, 4} & x_{3, 4} & x_{4, 4} & x_{5, 4} & x_{6, 4} \
\end{bmatrix}
$$
벡터도 열의 수가 1인 특수한 행렬이므로 벡터에 대해서도 전치 연산을 적용할 수 있다. 이 때 $x$와 같이 열의 수가 1인 행렬을 열 벡터(column vector)라고 하고 $x^T$와 같이 행의 수가 1인 행렬을 행 벡터(row vector)라고 한다.
$$
x =
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix}
\; \rightarrow \;
x^T =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
$$
NumPy에서는 ndarray 객체의 T라는 속성을 이용하여 전치 행렬을 구한다. 이 때 T는 메서드(method)가 아닌 속성(attribute)에 유의한다.
Step5: 행렬의 행 표기법과 열 표기법
전치 연산과 행 벡터, 열 벡터를 이용하면 행렬을 다음과 같이 복수의 열 벡터들 $c_i$, 또는 복수의 열 벡터들 $r_j^T$ 을 합친(concatenated) 형태로 표기할 수도 있다.
$$
X
=
\begin{bmatrix}
c_1 & c_2 & \cdots & c_M
\end{bmatrix}
=
\begin{bmatrix}
r_1^T \ r_2^T \ \vdots \ r_N^T
\end{bmatrix}
$$
$$ X \in \mathbb{R}^{N\times M} ,\;\;\; c_i \in R^{N \times 1} \; (i=1,\cdots,M) ,\;\;\; r_j \in R^{1 \times M} \; (j=1,\cdots,N) $$
행렬 덧셈과 뺄셈
행렬의 덧셈과 뺄셈은 같은 크기를 가진 두개의 행렬에 대해 정의되며 각각의 원소에 대해 덧셈과 뺄셈을 하면 된다. 이러한 연산을 element-wise 연산이라고 한다.
Step6: 벡터 곱셈
두 행렬의 곱셈을 정의하기 전에 우선 두 벡터의 곱셈을 알아보자. 벡터의 곱셈에는 내적(inner product)과 외적(outer product) 두 가지가 있다 여기에서는 내적에 대해서만 설명한다. 내적은 dot product라고 하기도 한다.
두 벡터의 곱(내적)이 정의되려면 우선 두 벡터의 길이가 같으며 앞의 벡터가 행 벡터이고 뒤의 벡터가 열 벡터이어야 한다. 이때 두 벡터의 곱은 다음과 같이 각 원소들을 element-by-element로 곱한 다음에 그 값들을 다시 모두 합해서 하나의 스칼라값으로 계산된다.
$$
x^T y =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{N} \
\end{bmatrix}
= x_1 y_1 + \cdots + x_N y_N
= \sum_{i=1}^N x_i y_i
$$
$$ x \in \mathbb{R}^{N \times 1} , \; y \in \mathbb{R}^{N \times 1} \; \rightarrow \; x^T y \in \mathbb{R} $$
벡터의 곱은 왜 이렇게 복잡하게 정의된 것일까. 벡터의 곱을 사용한 예를 몇가지 살펴보자
가중합
가중합(weighted sum)이란 복수의 데이터를 단순히 합하는 것이 아니라 각각의 수에 중요도에 따른 어떤 가중치를 곱한 후 이 값을 합하는 것을 말한다. 만약 데이터가 $x_1, \cdots, x_N$ 이고 가중치가 $w_1, \cdots, w_N$ 이면 가중합은 다음과 같다.
$$ w_1 x_1 + \cdots + w_N x_N = \sum_{i=1}^N w_i x_i $$
이를 벡터의 곱으로 나타내면 다음과 같이 $w^Tx$ 또는 $x^Tw$ 라는 간단한 수식으로 표시할 수 있다.
$$ w_1 x_1 + \cdots + w_N x_N = \sum_{i=1}^N w_i x_i =
\begin{bmatrix}
w_{1} && w_{2} && \cdots && w_{N}
\end{bmatrix}
\begin{bmatrix}
x_1 \ x_2 \ \vdots \ x_N
\end{bmatrix}
= w^Tx =
\begin{bmatrix}
x_{1} && x_{2} && \cdots && x_{N}
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
= x^Tw $$
NumPy에서 벡터 혹은 이후에 설명할 행렬의 곱은 dot이라는 명령으로 계산한다. 2차원 행렬로 표시한 벡터의 경우에는 결과값이 스칼라가 아닌 2차원 행렬값임에 유의한다.
Step7: 제곱합
데이터 분석시에 분산(variance), 표준 편차(standard deviation)을 구하는 경우에는 각각의 데이터를 제곱한 값을 모두 더하는 계산 즉 제곱합(sum of squares)을 계산하게 된다. 이 경우에도 벡터의 곱을 사용하여 $x^Tx$로 쓸 수 있다.
$$
x^T x =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix} = \sum_{i=1}^{N} x_i^2
$$
행렬의 곱셈
벡터의 곱셈을 정의한 후에는 다음과 같이 행렬의 곱셈을 정의할 수 있다.
$A$ 행렬과 $B$ 행렬을 곱한 결과인 $C$ 행렬의 $i$번째 행, $j$번째 열의 원소의 값은 $A$ 행렬의 $i$번째 행 벡터 $a_i^T$와 $B$ 행렬의 $j$번째 열 벡터 $b_j$의 곱으로 계산된 숫자이다.
$$ C = AB \; \rightarrow \; [c_{ij}] = a_i^T b_j $$
이 정의가 성립하려면 앞의 행렬 $A$의 열의 수가 뒤의 행렬 $B$의 행의 수와 일치해야만 한다.
$$ A \in \mathbb{R}^{N \times L} , \; B \in \mathbb{R}^{L \times M} \; \rightarrow \; AB \in \mathbb{R}^{N \times M} $$
Step8: 그럼 이러한 행렬의 곱셈은 데이터 분석에서 어떤 경우에 사용될까. 몇가지 예를 살펴본다.
가중 벡터합
어떤 데이터 레코드 즉, 벡터의 가중합은 $w^Tx$ 또는 $x^Tw$로 표시할 수 있다는 것을 배웠다. 그런데 만약 이렇게 $w$ 가중치를 사용한 가중합을 하나의 벡터 $x$가 아니라 여러개의 벡터 $x_1, \cdots, x_M$개에 대해서 모두 계산해야 한다면 이 계산을 다음과 같이 $Xw$라는 기호로 간단하게 표시할 수 있다.
$$
\begin{bmatrix}
w_1 x_{1,1} + w_2 x_{1,2} + \cdots + w_N x_{1,N} \
w_1 x_{2,1} + w_2 x_{2,2} + \cdots + w_N x_{2,N} \
\vdots \
w_1 x_{M,1} + w_2 x_{M,2} + \cdots + w_N x_{M,N} \
\end{bmatrix}
=
\begin{bmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,N} \
x_{2,1} & x_{2,2} & \cdots & x_{2,N} \
\vdots & \vdots & \vdots & \vdots \
x_{M,1} & x_{M,2} & \cdots & x_{M,N} \
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
=
\begin{bmatrix}
x_1^T \
x_2^T \
\vdots \
x_N^T \
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
= X w
$$
잔차
선형 회귀 분석(linear regression)을 한 결과는 가중치 벡터 $w$라는 형태로 나타나고 예측치는 이 가중치 벡터를 사용한 독립 변수 데이터 레코드 즉, 벡터 $x_i$의 가중합 $w^Tx_i$이 된다. 이 예측치와 실제 값 $y_i$의 차이를 오차(error) 혹은 잔차(residual) $e_i$ 이라고 한다. 이러한 잔차 값을 모든 독립 변수 벡터에 대해 구하면 잔차 벡터 $e$가 된다.
$$ e_i = y_i - w^Tx_i $$
잔차 벡터는 다음과 같이 $y-Xw$로 간단하게 표기할 수 있다.
$$
e =
\begin{bmatrix}
e_{1} \
e_{2} \
\vdots \
e_{M} \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
w^T x_{1} \
w^T x_{2} \
\vdots \
w^T x_{M} \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
x^T_{1}w \
x^T_{2}w \
\vdots \
x^T_{M}w \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
x^T_{1} \
x^T_{2} \
\vdots \
x^T_{M} \
\end{bmatrix}
w
= y - Xw
$$
$$
e = y - Xw
$$
Step9: 잔차 제곱합
잔차의 크기는 잔차 벡터의 각 원소를 제곱한 후 더한 잔차 제곱합(RSS
Step10: 이차 형식
벡터의 이차 형식(Quadratic Form) 이란 어떤 벡터의 각 원소에 대해 가능한 모든 쌍의 조합 $(x_i, x_j)$을 구한 다음 그 곱셈$x_ix_j$을 더한 것을 말한다. 이 때 각 쌍에 대해 서로 다른 가중치 $a_{i,j}$를 적용하여 $a_{i,j}x_ix_j$의 합을 구한다면 다음과 같이 $x^TAx$라는 간단한 식으로 쓸 수 있다.
$$
x^T A x =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,N} \
a_{2,1} & a_{2,2} & \cdots & a_{2,N} \
\vdots & \vdots & \ddots & \vdots \
a_{N,1} & a_{N,2} & \cdots & a_{N,N} \
\end{bmatrix}
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix} = \sum_{i=1}^{N} \sum_{j=1}^{N} a_{i,j} x_i x_j
$$
예를 들어 $ x = [1, 2, 3]^T $ 이고 A가 다음과 같다면
$$ A =
\begin{pmatrix}
1 & 2 & 3 \
4 & 5 & 6 \
7 & 8 & 9 \
\end{pmatrix}
$$
NumPy 에서 벡터의 이차 형식은 다음과 같이 계산한다. | Python Code:
x = np.array([1, 2, 3, 4])
x, np.shape(x)
x = np.array([[1], [2], [3], [4]])
x, np.shape(x)
Explanation: NumPy를 활용한 선형대수 입문
선형대수(linear algebra)는 데이터 분석에 필요한 각종 계산을 위한 기본적인 학문이다.
데이터 분석을 하기 위해서는 실제로 수많은 숫자의 계산이 필요하다. 하나의 데이터 레코드(record)가 수십개에서 수천개의 숫자로 이루어져 있을 수도 있고 수십개에서 수백만개의 이러한 데이터 레코드를 조합하여 계산하는 과정이 필요할 수 있다.
선형대수를 사용하는 첫번째 장점은 이러한 데이터 계산의 과정을 아주 단순한 수식으로 서술할 수 있다는 점이다. 그러기 위해서는 선형대수에서 사용되는 여러가지 기호와 개념을 익혀야 한다.
데이터의 유형
선형대수에서 다루는 데이터는 크게 스칼라(scalar), 벡터(vector), 행렬(matrix), 이 세가지 이다.
간단하게 말하자면 스칼라는 숫자 하나로 이루어진 데이터이고 벡터는 여러개의 숫자로 이루어진 데이터 레코드이며 행렬은 벡터, 즉 데이터 레코드가 여러개 있는 데이터 집합이라고 볼 수 있다.
스칼라
스칼라는 하나의 숫자를 말한다. 예를 들어 어떤 붓꽃(iris) 샘플의 꽃잎의 길이를 측정하는 하나의 숫자가 나올 것이다. 이 숫자는 스칼라이다.
스칼라는 보통 $x$와 같이 알파벳 소문자로 표기하며 실수(real number)인 숫자 중의 하나이므로 실수 집합의 원소라는 의미에서 다음과 같이 표기한다.
$$ x \in \mathbb{R} $$
벡터
벡터는 복수개의 숫자가 특정 순서대로 모여 있는 것을 말한다. 사실 대부분의 데이터 분석에서 하나의 데이터 레코드는 여러개의 숫자로 이루어진 경우가 많다. 예를 들어 붓꽃의 종을 알아내기 위해 크기를 측정하게 되면 꽃잎의 길이 $x_1$ 뿐 아니라 꽃잎의 폭 $x_2$ , 꽃받침의 길이 $x_3$ , 꽃받침의 폭 $x_4$ 이라는 4개의 숫자를 측정할 수 있다. 이렇게 측정된 4개의 숫자를 하나의 쌍(tuple) $x$ 로 생각하여 다음과 같이 표기한다.
$$
x = \begin{bmatrix}
x_{1} \
x_{2} \
x_{3} \
x_{4} \
\end{bmatrix}
$$
여기에서 주의할 점은 벡터는 복수의 행(row)을 가지고 하나의 열(column)을 가지는 형태로 위에서 아래로 표기한다는 점이다.
이 때 $x$는 4개의 실수(real number)로 이루어져 있기 때문에 4차원 벡터라고 하고 다음과 같이 4차원임을 표기한다.
$$ x \in \mathbb{R}^4 $$
만약 이 데이터를 이용하여 붓꽃의 종을 결정하는 예측 문제를 풀고 있다면 이 벡터를 feature vector라고 하기도 한다.
만약 4개가 아니라 $N$개의 숫자가 모여 있는 경우의 표기는 다음과 같다.
$$
x = \begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix}
,\;\;\;\;
x \in \mathbb{R}^N
$$
NumPy를 사용하면 벡터는 1차원 ndarray 객체 혹은 열의 갯수가 1개인 2차원 ndarray 객체로 표현한다. 벡터를 처리하는 프로그램에 따라서는 두 가지 중 특정한 형태만 원하는 경우도 있을 수 있기 때문에 주의해야 한다. 예를 들어 파이썬 scikit-learn 패키지에서는 벡터를 요구하는 경우에 열의 갯수가 1개인 2차원 ndarray 객체를 선호한다.
End of explanation
X = np.array([[11,12,13],[21,22,23]])
X
Explanation: 행렬
행렬은 복수의 차원을 가지는 데이터 레코드가 다시 여러개 있는 경우의 데이터를 합쳐서 표기한 것이다. 예를 들어 앞서 말한 붓꽃의 예에서 6개의 붓꽃에 대해 크기를 측정하였다면 4차원 붓꽃 데이터가 6개가 있다. 즉, $4 \times 6 = 24$개의 실수 숫자가 있는 것이다. 이 숫자 집합을
행렬로 나타내면 다음과 같다. 행렬은 보통 $X$와 같이 알파벳 대문자로 표기한다.
$$X =
\begin{bmatrix}
x_{1, 1} & x_{1, 2} & x_{1, 3} & x_{1, 4} \
x_{2, 1} & x_{2, 2} & x_{2, 3} & x_{2, 4} \
x_{3, 1} & x_{3, 2} & x_{3, 3} & x_{3, 4} \
x_{4, 1} & x_{4, 2} & x_{4, 3} & x_{4, 4} \
x_{5, 1} & x_{5, 2} & x_{5, 3} & x_{5, 4} \
x_{6, 1} & x_{6, 2} & x_{6, 3} & x_{6, 4} \
\end{bmatrix}
$$
행렬 안에서 원소의 위치를 표기할 때는 $x_{2, 3}$ 처럼 두 개의 숫자 쌍을 아랫 첨자(sub-script)로 붙여서 표기한다. 첫번째 숫자가 행(row)을 뜻하고 두번째 숫자가 열(column)을 뜻한다. 예를 들어 $x_{2, 3}$ 는 두번째 행(위에서 아래로 두번째), 세번째 열(왼쪽에서 오른쪽으로 세번째)의 숫자를 뜻한다.
붓꽃의 예에서는 하나의 데이터 레코드가 4차원이였다는 점을 기억하자. 따라서 이 행렬 표기에서는 하나의 행(row)이 붓꽃 하나에 대한 데이터 레코드가 된다.
하나의 데이터 레코드를 나타낼 때는 하나의 열(column)로 나타내고 복수의 데이터 레코드 집합을 나타낼 때는 하나의 데이터 레코드가 하나의 행(row)으로 표기하는 것은 일관성이 없어 보지만 데이터 분석에서 쓰는 일반적인 관례이므로 익히고 있어야 한다.
만약 이 데이터를 이용하여 붓꽃의 종을 결정하는 예측 문제를 풀고 있다면 이 행렬를 feature matrix라고 하기도 한다.
이 행렬의 크기를 수식으로 표시할 때는 행의 크기 곱하기 열의 크기의 형태로 다음과 같이 나타낸다.
$$ X \in \mathbb{R}^{4\times 6} $$
벡터도 열의 수가 1인 특수한 행렬이기 때문에 벡터의 크기를 표시할 때 행렬 표기에 맞추어 다음과 같이 쓰기도 한다.
$$ x \in \mathbb{R}^{4\times 1} $$
NumPy를 이용하여 행렬을 표기할 때는 2차원 ndarray 객체를 사용한다.
End of explanation
np.diag([3, 4, 1])
Explanation: 특수한 행렬
몇가지 특수한 행렬에 대해서는 별도의 이름이 붙어있다.
행렬에서 행의 숫자와 열의 숫자가 같은 위치를 대각(diagonal)이라고 하고 대각 위치에 있지 않은 것들은 비대각(off-diagonal)이라고 한다. 모든 비대각 요소가 0인 행렬을 대각 행렬(diagonal matrix)이라고 한다.
$$ D \in \mathbb{R}^{N \times N} $$
$$
D =
\begin{bmatrix}
D_{1} & 0 & \cdots & 0 \
0 & D_{2} & \cdots & 0 \
\vdots & \vdots & \ddots & \vdots \
0 & 0 & \cdots & D_{N} \
\end{bmatrix}
$$
NumPy로 대각행렬을 생성하려면 diag 명령을 사용한다.
End of explanation
np.identity(3)
np.eye(5)
Explanation: 대각 행렬 중에서도 모든 대각 성분의 값이 1인 대각 행렬을 단위 행렬(identity matrix)이라고 한다. 단위 행렬은 보통 알파벳 대문자 $I$로 표기하는 경우가 많다.
$$ I \in \mathbb{R}^{N \times N} $$
$$
I =
\begin{bmatrix}
1 & 0 & \cdots & 0 \
0 & 1 & \cdots & 0 \
\vdots & \vdots & \ddots & \vdots \
0 & 0 & \cdots & 1 \
\end{bmatrix}
$$
NumPy로 단위행렬을 생성하려면 identity 혹은 eye 명령을 사용한다.
End of explanation
X = np.array([[11,12,13],[21,22,23]])
X
X.T
Explanation: 연산
행렬의 연산을 이용하면 대량의 데이터에 대한 계산을 간단한 수식으로 나타낼 수 있다. 물론 행렬에 대한 연산은 보통의 숫자 즉, 스칼라에 대한 사칙 연산과는 다른 규칙을 적용하므로 이 규칙을 외워야 한다.
전치 연산
전치(transpose) 연산은 행렬의 행과 열을 바꾸는 연산을 말한다. 벡터 기호에 $T$라는 윗첨자(super-script)를 붙어서 표기한다. 예를 들어 앞에서 보인 $4\times 6$ 차원의 행렬을 전치 연산하면 $6\times 4$ 차원의 행렬이 된다.
$$
X =
\begin{bmatrix}
x_{1, 1} & x_{1, 2} & x_{1, 3} & x_{1, 4} \
x_{2, 1} & x_{2, 2} & x_{2, 3} & x_{2, 4} \
x_{3, 1} & x_{3, 2} & x_{3, 3} & x_{3, 4} \
x_{4, 1} & x_{4, 2} & x_{4, 3} & x_{4, 4} \
x_{5, 1} & x_{5, 2} & x_{5, 3} & x_{5, 4} \
x_{6, 1} & x_{6, 2} & x_{6, 3} & x_{6, 4} \
\end{bmatrix}
\;\;\;
\rightarrow
\;\;\;
X^T =
\begin{bmatrix}
x_{1, 1} & x_{2, 1} & x_{3, 1} & x_{4, 1} & x_{5, 1} & x_{6, 1} \
x_{1, 2} & x_{2, 2} & x_{3, 2} & x_{4, 2} & x_{5, 2} & x_{6, 2} \
x_{1, 3} & x_{2, 3} & x_{3, 3} & x_{4, 3} & x_{5, 3} & x_{6, 3} \
x_{1, 4} & x_{2, 4} & x_{3, 4} & x_{4, 4} & x_{5, 4} & x_{6, 4} \
\end{bmatrix}
$$
벡터도 열의 수가 1인 특수한 행렬이므로 벡터에 대해서도 전치 연산을 적용할 수 있다. 이 때 $x$와 같이 열의 수가 1인 행렬을 열 벡터(column vector)라고 하고 $x^T$와 같이 행의 수가 1인 행렬을 행 벡터(row vector)라고 한다.
$$
x =
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix}
\; \rightarrow \;
x^T =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
$$
NumPy에서는 ndarray 객체의 T라는 속성을 이용하여 전치 행렬을 구한다. 이 때 T는 메서드(method)가 아닌 속성(attribute)에 유의한다.
End of explanation
x = np.array([10, 11, 12, 13, 14])
x
y = np.array([0, 1, 2, 3, 4])
y
x + y
x - y
Explanation: 행렬의 행 표기법과 열 표기법
전치 연산과 행 벡터, 열 벡터를 이용하면 행렬을 다음과 같이 복수의 열 벡터들 $c_i$, 또는 복수의 열 벡터들 $r_j^T$ 을 합친(concatenated) 형태로 표기할 수도 있다.
$$
X
=
\begin{bmatrix}
c_1 & c_2 & \cdots & c_M
\end{bmatrix}
=
\begin{bmatrix}
r_1^T \ r_2^T \ \vdots \ r_N^T
\end{bmatrix}
$$
$$ X \in \mathbb{R}^{N\times M} ,\;\;\; c_i \in R^{N \times 1} \; (i=1,\cdots,M) ,\;\;\; r_j \in R^{1 \times M} \; (j=1,\cdots,N) $$
행렬 덧셈과 뺄셈
행렬의 덧셈과 뺄셈은 같은 크기를 가진 두개의 행렬에 대해 정의되며 각각의 원소에 대해 덧셈과 뺄셈을 하면 된다. 이러한 연산을 element-wise 연산이라고 한다.
End of explanation
x = np.array([1,2,3])
y = np.array([4,5,6])
np.dot(x,y)
x = np.array([[1], [2], [3]])
y = np.array([[4], [5], [6]])
np.dot(x.T, y)
x, y, x.T
Explanation: 벡터 곱셈
두 행렬의 곱셈을 정의하기 전에 우선 두 벡터의 곱셈을 알아보자. 벡터의 곱셈에는 내적(inner product)과 외적(outer product) 두 가지가 있다 여기에서는 내적에 대해서만 설명한다. 내적은 dot product라고 하기도 한다.
두 벡터의 곱(내적)이 정의되려면 우선 두 벡터의 길이가 같으며 앞의 벡터가 행 벡터이고 뒤의 벡터가 열 벡터이어야 한다. 이때 두 벡터의 곱은 다음과 같이 각 원소들을 element-by-element로 곱한 다음에 그 값들을 다시 모두 합해서 하나의 스칼라값으로 계산된다.
$$
x^T y =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{N} \
\end{bmatrix}
= x_1 y_1 + \cdots + x_N y_N
= \sum_{i=1}^N x_i y_i
$$
$$ x \in \mathbb{R}^{N \times 1} , \; y \in \mathbb{R}^{N \times 1} \; \rightarrow \; x^T y \in \mathbb{R} $$
벡터의 곱은 왜 이렇게 복잡하게 정의된 것일까. 벡터의 곱을 사용한 예를 몇가지 살펴보자
가중합
가중합(weighted sum)이란 복수의 데이터를 단순히 합하는 것이 아니라 각각의 수에 중요도에 따른 어떤 가중치를 곱한 후 이 값을 합하는 것을 말한다. 만약 데이터가 $x_1, \cdots, x_N$ 이고 가중치가 $w_1, \cdots, w_N$ 이면 가중합은 다음과 같다.
$$ w_1 x_1 + \cdots + w_N x_N = \sum_{i=1}^N w_i x_i $$
이를 벡터의 곱으로 나타내면 다음과 같이 $w^Tx$ 또는 $x^Tw$ 라는 간단한 수식으로 표시할 수 있다.
$$ w_1 x_1 + \cdots + w_N x_N = \sum_{i=1}^N w_i x_i =
\begin{bmatrix}
w_{1} && w_{2} && \cdots && w_{N}
\end{bmatrix}
\begin{bmatrix}
x_1 \ x_2 \ \vdots \ x_N
\end{bmatrix}
= w^Tx =
\begin{bmatrix}
x_{1} && x_{2} && \cdots && x_{N}
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
= x^Tw $$
NumPy에서 벡터 혹은 이후에 설명할 행렬의 곱은 dot이라는 명령으로 계산한다. 2차원 행렬로 표시한 벡터의 경우에는 결과값이 스칼라가 아닌 2차원 행렬값임에 유의한다.
End of explanation
A = np.array([[1, 2, 3], [4, 5, 6]])
B = np.array([[1, 2], [3, 4], [5, 6]])
C = np.dot(A, B)
A
B
C
Explanation: 제곱합
데이터 분석시에 분산(variance), 표준 편차(standard deviation)을 구하는 경우에는 각각의 데이터를 제곱한 값을 모두 더하는 계산 즉 제곱합(sum of squares)을 계산하게 된다. 이 경우에도 벡터의 곱을 사용하여 $x^Tx$로 쓸 수 있다.
$$
x^T x =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix} = \sum_{i=1}^{N} x_i^2
$$
행렬의 곱셈
벡터의 곱셈을 정의한 후에는 다음과 같이 행렬의 곱셈을 정의할 수 있다.
$A$ 행렬과 $B$ 행렬을 곱한 결과인 $C$ 행렬의 $i$번째 행, $j$번째 열의 원소의 값은 $A$ 행렬의 $i$번째 행 벡터 $a_i^T$와 $B$ 행렬의 $j$번째 열 벡터 $b_j$의 곱으로 계산된 숫자이다.
$$ C = AB \; \rightarrow \; [c_{ij}] = a_i^T b_j $$
이 정의가 성립하려면 앞의 행렬 $A$의 열의 수가 뒤의 행렬 $B$의 행의 수와 일치해야만 한다.
$$ A \in \mathbb{R}^{N \times L} , \; B \in \mathbb{R}^{L \times M} \; \rightarrow \; AB \in \mathbb{R}^{N \times M} $$
End of explanation
from sklearn.datasets import make_regression
X, y = make_regression(4, 3)
X
y
w = np.linalg.lstsq(X, y)[0]
w
e = y - np.dot(X, w)
e
Explanation: 그럼 이러한 행렬의 곱셈은 데이터 분석에서 어떤 경우에 사용될까. 몇가지 예를 살펴본다.
가중 벡터합
어떤 데이터 레코드 즉, 벡터의 가중합은 $w^Tx$ 또는 $x^Tw$로 표시할 수 있다는 것을 배웠다. 그런데 만약 이렇게 $w$ 가중치를 사용한 가중합을 하나의 벡터 $x$가 아니라 여러개의 벡터 $x_1, \cdots, x_M$개에 대해서 모두 계산해야 한다면 이 계산을 다음과 같이 $Xw$라는 기호로 간단하게 표시할 수 있다.
$$
\begin{bmatrix}
w_1 x_{1,1} + w_2 x_{1,2} + \cdots + w_N x_{1,N} \
w_1 x_{2,1} + w_2 x_{2,2} + \cdots + w_N x_{2,N} \
\vdots \
w_1 x_{M,1} + w_2 x_{M,2} + \cdots + w_N x_{M,N} \
\end{bmatrix}
=
\begin{bmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,N} \
x_{2,1} & x_{2,2} & \cdots & x_{2,N} \
\vdots & \vdots & \vdots & \vdots \
x_{M,1} & x_{M,2} & \cdots & x_{M,N} \
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
=
\begin{bmatrix}
x_1^T \
x_2^T \
\vdots \
x_N^T \
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
= X w
$$
잔차
선형 회귀 분석(linear regression)을 한 결과는 가중치 벡터 $w$라는 형태로 나타나고 예측치는 이 가중치 벡터를 사용한 독립 변수 데이터 레코드 즉, 벡터 $x_i$의 가중합 $w^Tx_i$이 된다. 이 예측치와 실제 값 $y_i$의 차이를 오차(error) 혹은 잔차(residual) $e_i$ 이라고 한다. 이러한 잔차 값을 모든 독립 변수 벡터에 대해 구하면 잔차 벡터 $e$가 된다.
$$ e_i = y_i - w^Tx_i $$
잔차 벡터는 다음과 같이 $y-Xw$로 간단하게 표기할 수 있다.
$$
e =
\begin{bmatrix}
e_{1} \
e_{2} \
\vdots \
e_{M} \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
w^T x_{1} \
w^T x_{2} \
\vdots \
w^T x_{M} \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
x^T_{1}w \
x^T_{2}w \
\vdots \
x^T_{M}w \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
x^T_{1} \
x^T_{2} \
\vdots \
x^T_{M} \
\end{bmatrix}
w
= y - Xw
$$
$$
e = y - Xw
$$
End of explanation
np.dot(e.T,e)
Explanation: 잔차 제곱합
잔차의 크기는 잔차 벡터의 각 원소를 제곱한 후 더한 잔차 제곱합(RSS: Residual Sum of Squares)를 이용하여 구한다. 이 값은 $e^Te$로 간단하게 쓸 수 있으며 그 값은 다음과 같이 계산한다.
$$
e^Te = \sum_{i=1}^{N} (y_i - w^Tx_i)^2 = (y - Xw)^T (y - Xw)
$$
End of explanation
x = np.array([1,2,3])
x
A = np.arange(1, 10).reshape(3,3)
A
np.dot(x, A)
np.dot(np.dot(x, A), x)
Explanation: 이차 형식
벡터의 이차 형식(Quadratic Form) 이란 어떤 벡터의 각 원소에 대해 가능한 모든 쌍의 조합 $(x_i, x_j)$을 구한 다음 그 곱셈$x_ix_j$을 더한 것을 말한다. 이 때 각 쌍에 대해 서로 다른 가중치 $a_{i,j}$를 적용하여 $a_{i,j}x_ix_j$의 합을 구한다면 다음과 같이 $x^TAx$라는 간단한 식으로 쓸 수 있다.
$$
x^T A x =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,N} \
a_{2,1} & a_{2,2} & \cdots & a_{2,N} \
\vdots & \vdots & \ddots & \vdots \
a_{N,1} & a_{N,2} & \cdots & a_{N,N} \
\end{bmatrix}
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix} = \sum_{i=1}^{N} \sum_{j=1}^{N} a_{i,j} x_i x_j
$$
예를 들어 $ x = [1, 2, 3]^T $ 이고 A가 다음과 같다면
$$ A =
\begin{pmatrix}
1 & 2 & 3 \
4 & 5 & 6 \
7 & 8 & 9 \
\end{pmatrix}
$$
NumPy 에서 벡터의 이차 형식은 다음과 같이 계산한다.
End of explanation |
15,520 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Gradients
In this notebook we'll introduce the TinyImageNet dataset and a deep CNN that has been pretrained on this dataset. You will use this pretrained model to compute gradients with respect to images, and use these image gradients to produce class saliency maps and fooling images.
Step1: Introducing TinyImageNet
The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images.
We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B; for this exercise you will work with TinyImageNet-100-A.
To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_a.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory.
NOTE
Step2: TinyImageNet-100-A classes
Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A
Step3: Visualize Examples
Run the following to visualize some example images from random classes in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images.
Step4: Pretrained model
We have trained a deep CNN for you on the TinyImageNet-100-A dataset that we will use for image visualization. The model has 9 convolutional layers (with spatial batch normalization) and 1 fully-connected hidden layer (with batch normalization).
To get the model, run the script get_pretrained_model.sh from the cs231n/datasets directory. After doing so, run the following to load the model from disk.
Step5: Pretrained model performance
Run the following to test the performance of the pretrained model on some random training and validation set images. You should see training accuracy around 90% and validation accuracy around 60%; this indicates a bit of overfitting, but it should work for our visualization experiments.
Step7: Saliency Maps
Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [1].
As mentioned in Section 2 of the paper, you should compute the gradient of the image with respect to the unnormalized class score, not with respect to the normalized class probability.
You will need to use the forward and backward methods of the PretrainedCNN class to compute gradients with respect to the image. Open the file cs231n/classifiers/pretrained_cnn.py and read the documentation for these methods to make sure you know how they work. For example usage, you can see the loss method. Make sure to run the model in test mode when computing saliency maps.
[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks
Step8: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A.
Step10: Fooling Images
We can also use image gradients to generate "fooling images" as discussed in [2]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.
[2] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014
Step11: Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image. | Python Code:
# As usual, a bit of setup
import time, os, json
import numpy as np
import skimage.io
import matplotlib.pyplot as plt
from cs231n.classifiers.pretrained_cnn import PretrainedCNN
from cs231n.data_utils import load_tiny_imagenet
from cs231n.image_utils import blur_image, deprocess_image
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
Explanation: Image Gradients
In this notebook we'll introduce the TinyImageNet dataset and a deep CNN that has been pretrained on this dataset. You will use this pretrained model to compute gradients with respect to images, and use these image gradients to produce class saliency maps and fooling images.
End of explanation
data = load_tiny_imagenet('cs231n/datasets/tiny-imagenet-100-A', subtract_mean=True)
Explanation: Introducing TinyImageNet
The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images.
We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B; for this exercise you will work with TinyImageNet-100-A.
To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_a.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory.
NOTE: The full TinyImageNet-100-A dataset will take up about 250MB of disk space, and loading the full TinyImageNet-100-A dataset into memory will use about 2.8GB of memory.
End of explanation
for i, names in enumerate(data['class_names']):
print i, ' '.join('"%s"' % name for name in names)
Explanation: TinyImageNet-100-A classes
Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A:
End of explanation
# Visualize some examples of the training data
classes_to_show = 7
examples_per_class = 5
class_idxs = np.random.choice(len(data['class_names']), size=classes_to_show, replace=False)
for i, class_idx in enumerate(class_idxs):
train_idxs, = np.nonzero(data['y_train'] == class_idx)
train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False)
for j, train_idx in enumerate(train_idxs):
img = deprocess_image(data['X_train'][train_idx], data['mean_image'])
plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j)
if j == 0:
plt.title(data['class_names'][class_idx][0])
plt.imshow(img)
plt.gca().axis('off')
plt.show()
Explanation: Visualize Examples
Run the following to visualize some example images from random classes in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images.
End of explanation
model = PretrainedCNN(h5_file='cs231n/datasets/pretrained_model.h5')
Explanation: Pretrained model
We have trained a deep CNN for you on the TinyImageNet-100-A dataset that we will use for image visualization. The model has 9 convolutional layers (with spatial batch normalization) and 1 fully-connected hidden layer (with batch normalization).
To get the model, run the script get_pretrained_model.sh from the cs231n/datasets directory. After doing so, run the following to load the model from disk.
End of explanation
batch_size = 100
# Test the model on training data
mask = np.random.randint(data['X_train'].shape[0], size=batch_size)
X, y = data['X_train'][mask], data['y_train'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Training accuracy: ', (y_pred == y).mean()
# Test the model on validation data
mask = np.random.randint(data['X_val'].shape[0], size=batch_size)
X, y = data['X_val'][mask], data['y_val'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Validation accuracy: ', (y_pred == y).mean()
Explanation: Pretrained model performance
Run the following to test the performance of the pretrained model on some random training and validation set images. You should see training accuracy around 90% and validation accuracy around 60%; this indicates a bit of overfitting, but it should work for our visualization experiments.
End of explanation
def compute_saliency_maps(X, y, model):
Compute a class saliency map using the model for images X and labels y.
Input:
- X: Input images, of shape (N, 3, H, W)
- y: Labels for X, of shape (N,)
- model: A PretrainedCNN that will be used to compute the saliency map.
Returns:
- saliency: An array of shape (N, H, W) giving the saliency maps for the input
images.
saliency = None
##############################################################################
# TODO: Implement this function. You should use the forward and backward #
# methods of the PretrainedCNN class, and compute gradients with respect to #
# the unnormalized class score of the ground-truth classes in y. #
##############################################################################
    scores, cache = model.forward(X, mode='test')
    # seed the backward pass only at the ground-truth class scores, so dX is the
    # gradient of the unnormalized correct-class score with respect to each image
    dscores = np.zeros(scores.shape)
    dscores[np.arange(X.shape[0]), y] = 1
    dX, grads = model.backward(dscores, cache)
    saliency = np.max(np.abs(dX), axis=1)
##############################################################################
# END OF YOUR CODE #
##############################################################################
return saliency
Explanation: Saliency Maps
Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [1].
As mentioned in Section 2 of the paper, you should compute the gradient of the image with respect to the unnormalized class score, not with respect to the normalized class probability.
You will need to use the forward and backward methods of the PretrainedCNN class to compute gradients with respect to the image. Open the file cs231n/classifiers/pretrained_cnn.py and read the documentation for these methods to make sure you know how they work. For example usage, you can see the loss method. Make sure to run the model in test mode when computing saliency maps.
[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: Visualising
Image Classification Models and Saliency Maps", ICLR Workshop 2014.
End of explanation
def show_saliency_maps(mask):
mask = np.asarray(mask)
X = data['X_val'][mask]
y = data['y_val'][mask]
saliency = compute_saliency_maps(X, y, model)
for i in xrange(mask.size):
plt.subplot(2, mask.size, i + 1)
plt.imshow(deprocess_image(X[i], data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y[i]][0])
plt.subplot(2, mask.size, mask.size + i + 1)
plt.title(mask[i])
plt.imshow(saliency[i])
plt.axis('off')
plt.gcf().set_size_inches(10, 4)
plt.show()
# Show some random images
mask = np.random.randint(data['X_val'].shape[0], size=5)
show_saliency_maps(mask)
# These are some cherry-picked images that should give good results
show_saliency_maps([128, 3225, 2417, 1640, 4619])
Explanation: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A.
End of explanation
def make_fooling_image(X, target_y, model):
Generate a fooling image that is close to X, but that the model classifies
as target_y.
Inputs:
- X: Input image, of shape (1, 3, 64, 64)
- target_y: An integer in the range [0, 100)
- model: A PretrainedCNN
Returns:
- X_fooling: An image that is close to X, but that is classifed as target_y
by the model.
X_fooling = X.copy()
##############################################################################
# TODO: Generate a fooling image X_fooling that the model will classify as #
# the class target_y. Use gradient ascent on the target class score, using #
# the model.forward method to compute scores and the model.backward method #
# to compute image gradients. #
# #
# HINT: For most examples, you should be able to generate a fooling image #
# in fewer than 100 iterations of gradient ascent. #
##############################################################################
    # gradient ascent on the image: keep updating X_fooling until the model predicts target_y
    while True:
        scores, cache = model.forward(X_fooling, mode='test')
        label = np.argmax(scores)
        print label, target_y
        if label == target_y:
            break
        # seed the backward pass so the target class score is pushed up and the
        # currently predicted class score is pushed down, scaled by the score magnitudes
        dscores = np.zeros(scores.shape)
        dscores[np.arange(0, X.shape[0]), target_y] = scores[np.arange(0, X.shape[0]), target_y]
        dscores[np.arange(0, X.shape[0]), label] -= scores[np.arange(0, X.shape[0]), label]
        dX, grads = model.backward(dscores, cache)
        X_fooling += dX * 1e2
##############################################################################
# END OF YOUR CODE #
##############################################################################
return X_fooling
Explanation: Fooling Images
We can also use image gradients to generate "fooling images" as discussed in [2]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.
[2] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014
End of explanation
# Find a correctly classified validation image
while True:
i = np.random.randint(data['X_val'].shape[0])
X = data['X_val'][i:i+1]
y = data['y_val'][i:i+1]
y_pred = model.loss(X)[0].argmax()
if y_pred == y: break
target_y = 67
X_fooling = make_fooling_image(X, target_y, model)
# Make sure that X_fooling is classified as y_target
scores = model.loss(X_fooling)
assert scores[0].argmax() == target_y, 'The network is not fooled!'
# Show original image, fooling image, and difference
plt.subplot(1, 3, 1)
plt.imshow(deprocess_image(X, data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y][0])
plt.subplot(1, 3, 2)
plt.imshow(deprocess_image(X_fooling, data['mean_image'], renorm=True))
plt.title(data['class_names'][target_y][0])
plt.axis('off')
plt.subplot(1, 3, 3)
plt.title('Difference')
plt.imshow(deprocess_image(X - X_fooling, data['mean_image']))
plt.axis('off')
plt.show()
Explanation: Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image.
End of explanation |
15,521 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iterative Construction of a Penalised Vine Structure
This notebook iteratively estimates the quantile.
Libraries
Step1: Model function
This example considers the simple additive function.
Step2: Dimension and margins
We first define the problem dimension and the margins
Step3: We chose the coefficients of the variables throught the additive function.
Step4: Estimations
We create an instance of the main class for conservative estimate, and we define a q_func object for the quantile as a quantity of interest | Python Code:
import openturns as ot
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
random_state = 123
np.random.seed(random_state)
Explanation: Iterative Construction of a Penalised Vine Structure
This notebook iteratively estimates the quantile.
Libraries
End of explanation
from depimpact.tests import func_overflow, margins_overflow, var_names_overflow, func_sum
test_func = func_overflow
Explanation: Model function
This example considers the simple additive function.
End of explanation
margins = margins_overflow
dim = len(margins)
families = np.zeros((dim, dim), dtype=int)
for i in range(1, dim):
for j in range(i):
families[i, j] = 1
from depimpact import ConservativeEstimate
ot.RandomGenerator.SetSeed(0)
np.random.seed(0)
K = 100
q_estimate = ConservativeEstimate(test_func, margins, families)
Explanation: Dimension and margins
We first define the problem dimension and the margins
End of explanation
from dependence import iterative_vine_minimize
algorithm_parameters = {
"n_input_sample": 10000,
"n_dep_param_init": 10,
"max_n_pairs": 1,
"grid_type": 'lhs',
"q_func": q_func,
"n_add_pairs": 1,
"n_remove_pairs": 0,
"adapt_vine_structure": True,
"with_bootstrap": False,
"verbose": True,
"iterative_save": False,
"iterative_load": False,
"load_input_samples": False,
"keep_input_samples": False
}
quant_estimate = ConservativeEstimate(model_func=test_func, margins=margins, families=families)
iterative_results = iterative_vine_minimize(estimate_object=quant_estimate, **algorithm_parameters)
from depimpact.dependence_plot import matrix_plot_quantities
matrix_plot_quantities(iterative_results[0], figsize=(18, 15))
# plt.savefig('output/matrix_plot.png')
Explanation: We choose the coefficients of the variables through the additive function.
End of explanation
from depimpact import ConservativeEstimate, quantile_func
alpha = 0.95
if alpha > 0.5: # Maximizing the quantile
def q_func(x, axis=1):
return - quantile_func(alpha)(x, axis=axis)
else: # Minimizing
q_func = quantile_func(alpha)
from depimpact.utils import get_grid_sample, to_copula_params
from depimpact.dependence_plot import plot_variation, compute_influence
K = 12
n = int(1E6)
pair = [1, 0]
copulas = {'Normal': [1, 1],
'Clayton': [13, 33],
'Gumbel': [4, 24],
'Joe': [6, 26]}
families = np.zeros((dim, dim))
quant_estimate = ConservativeEstimate(model_func=test_func, margins=margins, families=families)
kendalls, output_samples = compute_influence(quant_estimate, K, n, copulas, pair=pair)
ylabel = 'Output quantile at $\\alpha=%.2f$' % (alpha)
plot_area = 'left'
plt_lib = 'seaborn'
plot_variation(output_samples, kendalls, q_func, plot_area, ylabel=ylabel, plt_lib=plt_lib)
plt.savefig('./output/flood_example_variation_quantile_%s_K%d_n_%d_%s.pdf' % (plt_lib,
K, n, plot_area))
plt_lib = 'matplotlib'
plot_variation(output_samples, kendalls, q_func, plot_area, ylabel=ylabel, plt_lib=plt_lib)
plt.savefig('./output/flood_example_variation_quantile_%s_K%d_n_%d_%s.pdf' % (plt_lib,
K, n, plot_area))
plot_area = 'full'
plt_lib = 'seaborn'
plot_variation(output_samples, kendalls, q_func, plot_area, ylabel=ylabel, plt_lib=plt_lib)
plt.savefig('./output/flood_example_variation_quantile_%s_K%d_n_%d_%s.pdf' % (plt_lib,
K, n, plot_area))
plt_lib = 'matplotlib'
plot_variation(output_samples, kendalls, q_func, plot_area, ylabel=ylabel, plt_lib=plt_lib)
plt.savefig('./output/flood_example_variation_quantile_%s_K%d_n_%d_%s.pdf' % (plt_lib,
K, n, plot_area))
Explanation: Estimations
We create an instance of the main class for conservative estimate, and we define a q_func object for the quantile as a quantity of interest
End of explanation |
15,522 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
Get the Data
You'll be using two datasets in this project
Step3: Explore the Data
MNIST
As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.
Step5: CelebA
The CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.
Step7: Preprocess the Data
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https
Step10: Input
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step13: Discriminator
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
Step16: Generator
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
Step19: Loss
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented
Step22: Optimization
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
Step25: Neural Network Training
Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GANs is training.
Step27: Train
Implement train to build and train the GANs. Use the following functions you implemented
Step29: MNIST
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
Step31: CelebA
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces. | Python Code:
data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'
DON'T MODIFY ANYTHING IN THIS CELL
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Explanation: Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
Get the Data
You'll be using two datasets in this project:
- MNIST
- CelebA
Since the celebA dataset is complex and you're doing GANs in a project for the first time, we want you to test your neural network on MNIST before CelebA. Running the GANs on MNIST will allow you to see how well your model trains sooner.
If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".
End of explanation
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Explanation: Explore the Data
MNIST
As you're aware, the MNIST dataset contains images of handwritten digits. You can view a sample of the images by changing show_n_images.
End of explanation
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))
Explanation: CelebA
The CelebFaces Attributes (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view a sample of the images by changing show_n_images.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Preprocess the Data
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https://en.wikipedia.org/wiki/Channel_(digital_image%29) while the CelebA images have [3 color channels (RGB color channel)](https://en.wikipedia.org/wiki/Channel_(digital_image%29#RGB_Images).
Build the Neural Network
You'll build the components necessary to build a GANs by implementing the following functions below:
- model_inputs
- discriminator
- generator
- model_loss
- model_opt
- train
Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
# TODO: Implement Function
return None, None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Input
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
- Z input placeholder with rank 2 using z_dim.
- Learning rate placeholder with rank 0.
Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate)
End of explanation
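For reference, here is one possible way to fill in the TODO above. This is a sketch only, not the project's reference solution; it assumes TensorFlow is already imported as tf (as in the version-check cell above) and uses the tf.placeholder API from the TensorFlow 1.x line that this notebook targets. The placeholder names are illustrative.
def model_inputs(image_width, image_height, image_channels, z_dim):
    # Rank-4 placeholder for real images: (batch, width, height, channels)
    inputs_real = tf.placeholder(
        tf.float32, (None, image_width, image_height, image_channels), name='input_real')
    # Rank-2 placeholder for the latent vectors z: (batch, z_dim)
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
    # Rank-0 (scalar) placeholder for the learning rate
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    return inputs_real, inputs_z, learning_rate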
def discriminator(images, reuse=False):
Create the discriminator network
:param images: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_discriminator(discriminator, tf)
Explanation: Discriminator
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
End of explanation
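A possible discriminator sketch follows; it is not the reference solution. The layer widths, kernel sizes, and leaky-ReLU slope are illustrative choices, and it relies on the tf.layers API available in TensorFlow 1.x.
def discriminator(images, reuse=False):
    alpha = 0.2  # leaky ReLU slope (illustrative)
    with tf.variable_scope('discriminator', reuse=reuse):
        # 28x28xC -> 14x14x64
        x1 = tf.layers.conv2d(images, 64, 5, strides=2, padding='same')
        x1 = tf.maximum(alpha * x1, x1)
        # 14x14x64 -> 7x7x128
        x2 = tf.layers.conv2d(x1, 128, 5, strides=2, padding='same')
        x2 = tf.layers.batch_normalization(x2, training=True)
        x2 = tf.maximum(alpha * x2, x2)
        # Flatten and reduce to a single real/fake logit
        flat = tf.reshape(x2, (-1, 7 * 7 * 128))
        logits = tf.layers.dense(flat, 1)
        out = tf.sigmoid(logits)
    return out, logits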
def generator(z, out_channel_dim, is_train=True):
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_generator(generator, tf)
Explanation: Generator
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
End of explanation
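A possible generator sketch under the same assumptions (illustrative layer sizes, tf.layers API). Setting reuse=not is_train lets show_generator_output below reuse the trained variables by calling the function with is_train=False.
def generator(z, out_channel_dim, is_train=True):
    alpha = 0.2
    with tf.variable_scope('generator', reuse=not is_train):
        # Project z and reshape to a 7x7x256 feature map
        x1 = tf.layers.dense(z, 7 * 7 * 256)
        x1 = tf.reshape(x1, (-1, 7, 7, 256))
        x1 = tf.layers.batch_normalization(x1, training=is_train)
        x1 = tf.maximum(alpha * x1, x1)
        # 7x7x256 -> 14x14x128
        x2 = tf.layers.conv2d_transpose(x1, 128, 5, strides=2, padding='same')
        x2 = tf.layers.batch_normalization(x2, training=is_train)
        x2 = tf.maximum(alpha * x2, x2)
        # 14x14x128 -> 28x28xout_channel_dim, squashed to [-1, 1]
        logits = tf.layers.conv2d_transpose(x2, out_channel_dim, 5, strides=2, padding='same')
        out = tf.tanh(logits)
    return out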
def model_loss(input_real, input_z, out_channel_dim):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_loss(model_loss)
Explanation: Loss
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:
- discriminator(images, reuse=False)
- generator(z, out_channel_dim, is_train=True)
End of explanation
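A possible model_loss sketch that builds on the two functions above. The 0.9 label smoothing on the real labels is an optional stabilisation trick, not a requirement of the project.
def model_loss(input_real, input_z, out_channel_dim):
    g_model = generator(input_z, out_channel_dim, is_train=True)
    d_model_real, d_logits_real = discriminator(input_real, reuse=False)
    d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)

    d_loss_real = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            logits=d_logits_real, labels=tf.ones_like(d_model_real) * 0.9))
    d_loss_fake = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
    g_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))

    return d_loss_real + d_loss_fake, g_loss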
def model_opt(d_loss, g_loss, learning_rate, beta1):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_opt(model_opt, tf)
Explanation: Optimization
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
End of explanation
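A possible model_opt sketch. Filtering tf.trainable_variables() by scope name is what ties this to the variable scopes used above; the UPDATE_OPS dependency is needed only because the sketches above use batch normalization.
def model_opt(d_loss, g_loss, learning_rate, beta1):
    t_vars = tf.trainable_variables()
    d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
    g_vars = [var for var in t_vars if var.name.startswith('generator')]

    # Run the batch-norm moving-statistics updates together with the train ops
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
        g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)

    return d_train_opt, g_train_opt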
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
Explanation: Neural Network Training
Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GAN is training.
End of explanation
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
# TODO: Build Model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
# TODO: Train Model
Explanation: Train
Implement train to build and train the GANs. Use the following functions you implemented:
- model_inputs(image_width, image_height, image_channels, z_dim)
- model_loss(input_real, input_z, out_channel_dim)
- model_opt(d_loss, g_loss, learning_rate, beta1)
Use the show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and increase the size of the notebook. It's recommended to print the generator output every 100 batches.
End of explanation
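A possible way to complete the training loop, again as a sketch. It assumes the helper's get_batches yields images scaled to [-0.5, 0.5] (so they are doubled to match the generator's tanh range) and that data_shape is (count, width, height, channels); both are assumptions about the provided helper, the logging and preview intervals are arbitrary, and numpy is already imported as np in the cell above.
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches,
          data_shape, data_image_mode):
    _, image_width, image_height, image_channels = data_shape
    input_real, input_z, lr = model_inputs(image_width, image_height, image_channels, z_dim)
    d_loss, g_loss = model_loss(input_real, input_z, image_channels)
    d_opt, g_opt = model_opt(d_loss, g_loss, lr, beta1)

    steps = 0
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch_i in range(epoch_count):
            for batch_images in get_batches(batch_size):
                steps += 1
                batch_images = batch_images * 2  # assumed [-0.5, 0.5] -> [-1, 1]
                batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))

                sess.run(d_opt, feed_dict={input_real: batch_images, input_z: batch_z, lr: learning_rate})
                sess.run(g_opt, feed_dict={input_real: batch_images, input_z: batch_z, lr: learning_rate})

                if steps % 10 == 0:
                    train_loss_d = d_loss.eval({input_real: batch_images, input_z: batch_z})
                    train_loss_g = g_loss.eval({input_real: batch_images, input_z: batch_z})
                    print('Epoch {}/{} - d_loss: {:.4f}, g_loss: {:.4f}'.format(
                        epoch_i + 1, epoch_count, train_loss_d, train_loss_g))
                if steps % 100 == 0:
                    show_generator_output(sess, 25, input_z, image_channels, data_image_mode)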
batch_size = None
z_dim = None
learning_rate = None
beta1 = None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
Explanation: MNIST
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
End of explanation
batch_size = None
z_dim = None
learning_rate = None
beta1 = None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
Explanation: CelebA
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.
End of explanation |
15,523 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solution of Jiang et al. 2013
Write a function that takes as input the desired Taxon, and returns the mean value of r.
First, we're going to import the csv module, and read the data. We store the taxon name in the list Taxa, and the corresponding r value in the list r_values. Note that we need to convert the values to float (we need numbers, and they are read as strings).
Step1: Make sure that everything went well
Step2: Now we write a function that, given a list of taxa names and corresponding r values, calculates the mean r for a given category of taxa
Step3: Testing using Fish
Step4: Let's try to run this on all taxa. We can write a little function that returns the set of unique taxa in the database
Step5: Calculate the mean r for each taxon
Step6: You should see that fish have a positive value of r, but that this is also true for other taxa. Is the mean value of r especially high for fish? To test this, compute a p-value by repeatedly sampling 37 values of r (37 experiments on fish are reported in the database) at random, and calculating the probability of observing a higher mean value of r. To get an accurate estimate of the p-value, use 50,000 randomizations.
Are these values of assortative mating high, compared to what is expected at random? We can try associating a p-value to each r value by repeatedly computing the mean r value for the taxa, once we have scrambled the taxa names! (There are many other ways of doing the same thing, for example counting how many times a certain taxon is represented, and sampling the values at random).
Step7: Let's try the function on Fish
Step8: A very small p-value | Python Code:
import csv
with open('../data/Jiang2013_data.csv') as csvfile:
reader = csv.DictReader(csvfile, delimiter = '\t')
taxa = []
r_values = []
for row in reader:
taxa.append(row['Taxon'])
r_values.append(float(row['r']))
Explanation: Solution of Jiang et al. 2013
Write a function that takes as input the desired Taxon, and returns the mean value of r.
First, we're going to import the csv module, and read the data. We store the taxon name in the list Taxa, and the corresponding r value in the list r_values. Note that we need to convert the values to float (we need numbers, and they are read as strings).
End of explanation
taxa[:5]
r_values[:5]
Explanation: Make sure that everything went well:
End of explanation
def get_mean_r(names, values, target_taxon = 'Fish'):
n = len(names)
mean_r = 0.0
sample_size = 0
for i in range(n):
if names[i] == target_taxon:
mean_r = mean_r + values[i]
sample_size = sample_size + 1
return mean_r / sample_size
Explanation: Now we write a function that, given a list of taxa names and corresponding r values, calculates the mean r for a given category of taxa:
End of explanation
get_mean_r(taxa, r_values, target_taxon = 'Fish')
Explanation: Testing using Fish:
End of explanation
def get_taxa_list(names):
return(set(names))
get_taxa_list(taxa)
Explanation: Let's try to run this on all taxa. We can write a little function that returns the set of unique taxa in the database:
End of explanation
for t in get_taxa_list(taxa):
print(t, get_mean_r(taxa, r_values, target_taxon = t))
Explanation: Calculate the mean r for each taxon:
End of explanation
import scipy
def get_p_value_for_mean_r(names,
values,
target_taxon = 'Fish',
num_simulations = 1000):
# first, compute the observed mean
observed = get_mean_r(names, values, target_taxon)
# now create a copy of the names, to be randomized
rnd_names = names[:]
p_value = 0.0
for i in range(num_simulations):
# shuffle the fake names
scipy.random.shuffle(rnd_names)
tmp = get_mean_r(rnd_names, values, target_taxon)
if tmp >= observed:
p_value = p_value + 1.0
p_value = p_value / num_simulations
return [target_taxon, round(observed, 3), round(p_value, 5)]
Explanation: You should see that fish have a positive value of r, but that this is also true for other taxa. Is the mean value of r especially high for fish? To test this, compute a p-value by repeatedly sampling 37 values of r (37 experiments on fish are reported in the database) at random, and calculating the probability of observing a higher mean value of r. To get an accurate estimate of the p-value, use 50,000 randomizations.
Are these values of assortative mating high, compared to what is expected at random? We can try associating a p-value to each r value by repeatedly computing the mean r value for the taxa, once we have scrambled the taxa names! (There are many other ways of doing the same thing, for example counting how many times a certain taxon is represented, and sampling the values at random).
End of explanation
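For comparison, here is a sketch of the alternative mentioned above (count how many r values belong to the taxon, then repeatedly sample that many values at random). It samples with replacement for brevity and uses numpy, which is not otherwise imported in this solution.
import numpy as np

def get_p_value_by_sampling(names, values, target_taxon='Fish', num_simulations=50000):
    observed = get_mean_r(names, values, target_taxon)
    sample_size = sum(1 for name in names if name == target_taxon)  # e.g. 37 for Fish
    values = np.asarray(values)
    # one mean of `sample_size` randomly drawn r values per simulation
    random_means = np.random.choice(values, size=(num_simulations, sample_size)).mean(axis=1)
    return (random_means >= observed).mean()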
get_p_value_for_mean_r(taxa, r_values, 'Fish', 50000)
Explanation: Let's try the function on Fish:
End of explanation
for t in get_taxa_list(taxa):
print(get_p_value_for_mean_r(taxa, r_values, t, 50000))
Explanation: A very small p-value: this means that the observed value (0.397) is larger than what we would expect by chance.
Repeat the procedure for all taxa.
End of explanation |
15,524 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AutoEncoder and Deep Neural Networks
This script reads in the 20 newsgroups corpus from SKLearn. Each document is converted to a BoW vector over the 2,000 most common words.
1) Computes a baseline using Naive Bayes and SVM
2) Deep Feed Forward Network
3) AutoEncoder (should work in principle, but it does not yet converge. Check the parameters and the training method).
Step1: Baseline
We use Scikit-learn to derive some baselines for the above setting
Step2: Deep Feed Forward Network
Step3: Autoencoder
This is the code how the autoencoder should work in principle. However, the pretraining seems not to work as the loss stays approx. identical for all epochs. If someone finds the problem, please send me an email. | Python Code:
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np
from sklearn import metrics
import random
random.seed(1)
np.random.seed(1)
max_words = 2000
examples_per_labels = 1000
newsgroups_train = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'))
newsgroups_test = fetch_20newsgroups(subset='test', remove=('headers', 'footers', 'quotes'))
#count_vect = CountVectorizer(stop_words='english', max_features=max_words)
count_vect = TfidfVectorizer(stop_words='english', max_features=max_words)
train_x = count_vect.fit_transform(newsgroups_train.data).toarray()
test_x = count_vect.transform(newsgroups_test.data).toarray()
train_y = newsgroups_train.target
test_y = newsgroups_test.target
nb_labels = max(train_y)+1
print "Train: ",train_x.shape
print "Test: ",test_x.shape
print "%d labels" % nb_labels
examples = []
examples_labels = []
examples_count = {}
for idx in xrange(train_x.shape[0]):
label = train_y[idx]
if label not in examples_count:
examples_count[label] = 0
if examples_count[label] < examples_per_labels:
arr = train_x[idx]
examples.append(arr)
examples_labels.append(label)
examples_count[label]+=1
train_subset_x = np.asarray(examples)
train_subset_y = np.asarray(examples_labels)
print "Train Subset: ",train_subset_x.shape
Explanation: AutoEncoder and Deep Neural Networks
This script reads in the 20 newsgroups corpus from SKLearn. Each document is converted to a BoW vector over the 2,000 most common words.
1) Computes a baseline using Naive Bayes and SVM
2) Deep Feed Forward Network
3) AutoEncoder (should work in principle, but it does not yet converge. Check the parameters and the training method).
End of explanation
#Naive Bayes
from sklearn.naive_bayes import BernoulliNB
clf = BernoulliNB(alpha=.01)
clf.fit(train_subset_x, train_subset_y)
pred = clf.predict(test_x)
acc = metrics.accuracy_score(test_y, pred)
print "Naive Bayes: %f%%" % (acc*100)
#Gaussian Naive Bayes
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
clf.fit(train_subset_x, train_subset_y)
pred = clf.predict(test_x)
acc = metrics.accuracy_score(test_y, pred)
print "Gaussian Naive Bayes: %f%%" % (acc*100)
#MultinomialNB
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB()
clf.fit(train_subset_x, train_subset_y)
pred = clf.predict(test_x)
acc = metrics.accuracy_score(test_y, pred)
print "Multinomial Naive Bayes: %f%%" % (acc*100)
#LinearSVM
from sklearn import svm
clf = svm.LinearSVC()
clf.fit(train_subset_x, train_subset_y)
pred = clf.predict(test_x)
acc = metrics.accuracy_score(test_y, pred)
print "LinearSVM: %f%%" % (acc*100)
Explanation: Baseline
We use Scikit-learn to derive some baselines for the above setting
End of explanation
from keras.layers import containers
import keras
from keras.models import Sequential
from keras.layers.core import Dense, Flatten, AutoEncoder, Dropout
from keras.optimizers import SGD
from keras.utils import np_utils
from keras.callbacks import EarlyStopping
random.seed(2)
np.random.seed(2)
nb_epoch = 30
batch_size = 200
model = Sequential()
model.add(Dense(500, input_dim=max_words, activation='relu'))
model.add(Dropout(0.5))
#model.add(Dense(500, activation='relu'))
#model.add(Dropout(0.5))
model.add(Dense(nb_labels, activation='softmax'))
train_subset_y_cat = np_utils.to_categorical(train_subset_y, nb_labels)
test_y_cat = np_utils.to_categorical(test_y, nb_labels)
model.compile(loss='categorical_crossentropy', optimizer='adam')
print('Start training')
model.fit(train_subset_x, train_subset_y_cat, batch_size=batch_size, nb_epoch=nb_epoch,
show_accuracy=True, verbose=True, validation_data=(test_x, test_y_cat))
score = model.evaluate(test_x, test_y_cat, show_accuracy=True, verbose=False)
print('Test accuracy:', score[1])
Explanation: Deep Feed Forward Network
End of explanation
# Train the autoencoder
# Source: https://github.com/fchollet/keras/issues/358
from keras.layers import containers
import keras
from keras.models import Sequential
from keras.layers.core import Dense, Flatten, AutoEncoder, Dropout
from keras.optimizers import SGD
from keras.utils import np_utils
random.seed(3)
np.random.seed(3)
nb_epoch = 30
batch_size = 200
nb_epoch_pretraining = 10
batch_size_pretraining = 1000
# Layer-wise pretraining
encoders = []
decoders = []
nb_hidden_layers = [max_words, 1000]
X_train_tmp = np.copy(train_x)
for i, (n_in, n_out) in enumerate(zip(nb_hidden_layers[:-1], nb_hidden_layers[1:]), start=1):
print('Training the layer {}: Input {} -> Output {}'.format(i, n_in, n_out))
# Create AE and training
ae = Sequential()
encoder = containers.Sequential([Dense(output_dim=n_out, input_dim=n_in, activation='tanh'), Dropout(0.3)])
decoder = containers.Sequential([Dense(output_dim=n_in, input_dim=n_out, activation='tanh')])
ae.add(AutoEncoder(encoder=encoder, decoder=decoder, output_reconstruction=False))
sgd = SGD(lr=2, decay=1e-6, momentum=0.0, nesterov=True)
ae.compile(loss='mean_squared_error', optimizer='Adam')
ae.fit(X_train_tmp, X_train_tmp, batch_size=batch_size_pretraining, nb_epoch=nb_epoch_pretraining, verbose = True, shuffle=True)
# Store trainined weight and update training data
encoders.append(ae.layers[0].encoder)
decoders.append(ae.layers[0].decoder)
X_train_tmp = ae.predict(X_train_tmp)
#End to End Autoencoder training
if len(nb_hidden_layers) > 2:
full_encoder = containers.Sequential()
for encoder in encoders:
full_encoder.add(encoder)
full_decoder = containers.Sequential()
for decoder in reversed(decoders):
full_decoder.add(decoder)
full_ae = Sequential()
full_ae.add(AutoEncoder(encoder=full_encoder, decoder=full_decoder, output_reconstruction=False))
full_ae.compile(loss='mean_squared_error', optimizer='Adam')
print "Pretraining of full AE"
full_ae.fit(train_x, train_x, batch_size=batch_size_pretraining, nb_epoch=nb_epoch_pretraining, verbose = True, shuffle=True)
# Fine-turning
model = Sequential()
for encoder in encoders:
model.add(encoder)
model.add(Dense(output_dim=nb_labels, input_dim=nb_hidden_layers[-1], activation='softmax'))
train_subset_y_cat = np_utils.to_categorical(train_subset_y, nb_labels)
test_y_cat = np_utils.to_categorical(test_y, nb_labels)
model.compile(loss='categorical_crossentropy', optimizer='Adam')
score = model.evaluate(test_x, test_y_cat, show_accuracy=True, verbose=0)
print('Test score before fine turning:', score[0])
print('Test accuracy before fine turning:', score[1])
model.fit(train_subset_x, train_subset_y_cat, batch_size=batch_size, nb_epoch=nb_epoch,
show_accuracy=True, validation_data=(test_x, test_y_cat), shuffle=True)
score = model.evaluate(test_x, test_y_cat, show_accuracy=True, verbose=0)
print('Test score after fine turning:', score[0])
print('Test accuracy after fine turning:', score[1])
Explanation: Autoencoder
This is the code how the autoencoder should work in principle. However, the pretraining seems not to work as the loss stays approx. identical for all epochs. If someone finds the problem, please send me an email.
End of explanation |
15,525 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, I made a mistake naming the data set! It's 2015 data, not 2014 data. But yes, still use 311-2014.csv. You can rename it.
Importing and preparing your data
Import your data, but only the first 200,000 rows. You'll also want to change the index to be a datetime based on the Created Date column - you'll want to check if it's already a datetime, and parse it if not.
Step1: What was the most popular type of complaint, and how many times was it filed?
Step2: Make a horizontal bar graph of the top 5 most frequent complaint types.
Step3: Which borough has the most complaints per capita? Since it's only 5 boroughs, you can do the math manually.
Step4: According to your selection of data, how many cases were filed in March? How about May?
Step5: I'd like to see all of the 311 complaints called in on April 1st.
Surprise! We couldn't do this in class, but it was just a limitation of our data set
Step6: What was the most popular type of complaint on April 1st?
Step7: What were the most popular three types of complaint on April 1st
Step8: What month has the most reports filed? How many? Graph it.
Step9: What week of the year has the most reports filed? How many? Graph the weekly complaints.
Step10: Noise complaints are a big deal. Use .str.contains to select noise complaints, and make an chart of when they show up annually. Then make a chart about when they show up every day (cyclic).
Step11: Which were the top five days of the year for filing complaints? How many on each of those days? Graph it.
Step12: What hour of the day are the most complaints? Graph a day of complaints.
Step13: One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after?
Step14: So odd. What's the per-minute breakdown of complaints between 12am and 1am? You don't need to include 1am.
Step15: Looks like midnight is a little bit of an outlier. Why might that be? Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight).
Step16: Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints?
Step17: Maybe the NYPD deals with different issues at different times? Check the most popular complaints in July and August vs the month of May. Also check the most common complaints for the Housing Preservation Bureau (HPD) in winter vs. summer. | Python Code:
import pandas as pd
import dateutil.parser

df=pd.read_csv("311-2014.csv", nrows=200000)
dateutil.parser.parse(df['Created Date'][0])
def parse_date(str_date):
return dateutil.parser.parse(str_date)
df['created_datetime']=df['Created Date'].apply(parse_date)
df.index=df['created_datetime']
Explanation: First, I made a mistake naming the data set! It's 2015 data, not 2014 data. But yes, still use 311-2014.csv. You can rename it.
Importing and preparing your data
Import your data, but only the first 200,000 rows. You'll also want to change the index to be a datetime based on the Created Date column - you'll want to check if it's already a datetime, and parse it if not.
End of explanation
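An equivalent and usually faster variant of the cell above, shown for comparison only (the rest of the notebook continues from the cell above): let pandas parse the dates while reading the CSV instead of applying dateutil row by row. It assumes the same file and column name.
import pandas as pd

df_alt = pd.read_csv("311-2014.csv", nrows=200000, parse_dates=['Created Date'])
df_alt = df_alt.set_index('Created Date', drop=False)
print(df_alt.index.dtype)  # expect datetime64[ns] if parsing succeeded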
df['Complaint Type'].describe()
Explanation: What was the most popular type of complaint, and how many times was it filed?
End of explanation
df.groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5).plot(kind='barh').invert_yaxis()
Explanation: Make a horizontal bar graph of the top 5 most frequent complaint types.
End of explanation
df.groupby(by='Borough')['Borough'].count()
boro_pop={
'BRONX': 1438159,
'BROOKLYN': 2621793,
'MANHATTAN': 1636268,
'QUEENS': 2321580,
'STATEN ISLAND': 473279}
boro_df=pd.Series.to_frame(df.groupby(by='Borough')['Borough'].count())
boro_df['Population']=pd.DataFrame.from_dict(boro_pop, orient='index')
boro_df['Complaints']=boro_df['Borough']
boro_df.drop('Borough', axis=1, inplace=True)
boro_df['Per Capita']=boro_df['Complaints']/boro_df['Population']
boro_df['Per Capita'].plot(kind='bar')
Explanation: Which borough has the most complaints per capita? Since it's only 5 boroughs, you can do the math manually.
End of explanation
df['2015-03']['Created Date'].count()
df['2015-05']['Created Date'].count()
Explanation: According to your selection of data, how many cases were filed in March? How about May?
End of explanation
df['2015-04-01']
Explanation: I'd like to see all of the 311 complaints called in on April 1st.
Surprise! We couldn't do this in class, but it was just a limitation of our data set
End of explanation
df['2015-04-01'].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(1)
Explanation: What was the most popular type of complaint on April 1st?
End of explanation
df['2015-04-01'].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(3)
Explanation: What were the most popular three types of complaint on April 1st
End of explanation
df.resample('M')['Unique Key'].count().sort_values(ascending=False)
df.resample('M').count().plot(y='Unique Key')
Explanation: What month has the most reports filed? How many? Graph it.
End of explanation
df.resample('W')['Unique Key'].count().sort_values(ascending=False).head(5)
df.resample('W').count().plot(y='Unique Key')
Explanation: What week of the year has the most reports filed? How many? Graph the weekly complaints.
End of explanation
noise_df=df[df['Complaint Type'].str.contains('Noise')]
noise_df.resample('M').count().plot(y='Unique Key')
noise_df.groupby(by=noise_df.index.hour).count().plot(y='Unique Key')
Explanation: Noise complaints are a big deal. Use .str.contains to select noise complaints, and make a chart of when they show up annually. Then make a chart about when they show up every day (cyclic).
End of explanation
df.resample('D')['Unique Key'].count().sort_values(ascending=False).head(5)
df.resample('D')['Unique Key'].count().sort_values().tail(5).plot(kind='barh')
Explanation: Which were the top five days of the year for filing complaints? How many on each of those days? Graph it.
End of explanation
df['Unique Key'].groupby(by=df.index.hour).count().sort_values(ascending=False)
df['Unique Key'].groupby(df.index.hour).count().plot()
Explanation: What hour of the day are the most complaints? Graph a day of complaints.
End of explanation
df[df.index.hour==0].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
df[df.index.hour==1].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
df[df.index.hour==11].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
Explanation: One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after?
End of explanation
midnight_df = df[df.index.hour==0]
midnight_df.groupby(midnight_df.index.minute)['Unique Key'].count().sort_values(ascending=False)
Explanation: So odd. What's the per-minute breakdown of complaints between 12am and 1am? You don't need to include 1am.
End of explanation
df.groupby('Agency')['Unique Key'].count().sort_values(ascending=False).head(5)
ax=df[df['Agency']=='NYPD'].groupby(df[df['Agency']=='NYPD'].index.hour)['Unique Key'].count().plot(legend=True, label='NYPD')
df[df['Agency']=='HPD'].groupby(df[df['Agency']=='HPD'].index.hour)['Unique Key'].count().plot(ax=ax, legend=True, label='HPD')
df[df['Agency']=='DOT'].groupby(df[df['Agency']=='DOT'].index.hour)['Unique Key'].count().plot(ax=ax, legend=True, label='DOT')
df[df['Agency']=='DPR'].groupby(df[df['Agency']=='DPR'].index.hour)['Unique Key'].count().plot(ax=ax, legend=True, label='DPR')
df[df['Agency']=='DOHMH'].groupby(df[df['Agency']=='DOHMH'].index.hour)['Unique Key'].count().plot(ax=ax, legend=True, label='DOHMH')
Explanation: Looks like midnight is a little bit of an outlier. Why might that be? Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight).
End of explanation
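The five per-agency lines in the cell above can also be written as a loop. This is a sketch, with the agency list copied from the output of the groupby in that cell:
top_agencies = ['NYPD', 'HPD', 'DOT', 'DPR', 'DOHMH']
ax = None
for agency in top_agencies:
    sub = df[df['Agency'] == agency]
    ax = sub.groupby(sub.index.hour)['Unique Key'].count().plot(ax=ax, legend=True, label=agency)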
ax=df[df['Agency']=='NYPD'].groupby(df[df['Agency']=='NYPD'].index.week)['Unique Key'].count().plot(legend=True, label='NYPD')
df[df['Agency']=='HPD'].groupby(df[df['Agency']=='HPD'].index.week)['Unique Key'].count().plot(ax=ax, legend=True, label='HPD')
df[df['Agency']=='DOT'].groupby(df[df['Agency']=='DOT'].index.week)['Unique Key'].count().plot(ax=ax, legend=True, label='DOT')
df[df['Agency']=='DPR'].groupby(df[df['Agency']=='DPR'].index.week)['Unique Key'].count().plot(ax=ax, legend=True, label='DPR')
df[df['Agency']=='DOHMH'].groupby(df[df['Agency']=='DOHMH'].index.week)['Unique Key'].count().plot(ax=ax, legend=True, label='DOHMH')
Explanation: Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints?
End of explanation
nypd=df[df['Agency']=='NYPD']
nypd[(nypd.index.month==7) | (nypd.index.month==8)].groupby('Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
nypd[nypd.index.month==5].groupby('Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
# seems like mostly noise complaints and bad parking to me
hpd=df[df['Agency']=='HPD']
hpd[(hpd.index.month>=6) & (hpd.index.month<=8)].groupby('Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
# i would consider summer to be june to august.
hpd[(hpd.index.month==12) | (hpd.index.month<=2)].groupby('Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
# pretty similar list, but people probably notice a draft from their bad window or door in the winter more easily than summer
Explanation: Maybe the NYPD deals with different issues at different times? Check the most popular complaints in July and August vs the month of May. Also check the most common complaints for the Housing Preservation Bureau (HPD) in winter vs. summer.
End of explanation |
15,526 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multi-worker training with Keras
Learning Objectives
Multi-worker Configuration
Choose the right strategy
Train the model
Multi worker training in depth
Introduction
This notebook demonstrates multi-worker distributed training with Keras model using tf.distribute.Strategy API, specifically tf.distribute.MultiWorkerMirroredStrategy. With the help of this strategy, a Keras model that was designed to run on single-worker can seamlessly work on multiple workers with minimal code change.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Setup
First, some necessary imports.
Step1: Before importing TensorFlow, make a few changes to the environment.
Disable all GPUs. This prevents errors caused by the workers all trying to use the same GPU. For a real application each worker would be on a different machine.
Step2: Reset the TF_CONFIG environment variable, you'll see more about this later.
Step3: Be sure that the current directory is on python's path. This allows the notebook to import the files written by %%writefile later.
Step4: Now import TensorFlow.
Step5: Dataset and model definition
Next create an mnist.py file with a simple model and dataset setup. This python file will be used by the worker-processes in this tutorial
Step6: Try training the model for a small number of epochs and observe the results of a single worker to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
Step7: Multi-worker Configuration
Now let's enter the world of multi-worker training. In TensorFlow, the TF_CONFIG environment variable is required for training on multiple machines, each of which possibly has a different role. TF_CONFIG is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.
Here is an example configuration
Step8: Here is the same TF_CONFIG serialized as a JSON string
Step9: There are two components of TF_CONFIG
Step10: You can access the environment variable from a subprocess
Step11: In the next section, you'll use this to pass the TF_CONFIG to the worker subprocesses. You would never really launch your jobs this way, but it's sufficient for the purposes of this tutorial
Step12: Note
Step13: Note
Step14: In the code snippet above note that the global_batch_size, which gets passed to Dataset.batch, is set to per_worker_batch_size * num_workers. This ensures that each worker processes batches of per_worker_batch_size examples regardless of the number of workers.
The current directory now contains both Python files
Step15: So json-serialize the TF_CONFIG and add it to the environment variables
Step16: Now, you can launch a worker process that will run the main.py and use the TF_CONFIG
Step17: There are a few things to note about the above command
Step18: Now look what's been output to the worker's logfile so far
Step19: The last line of the log file should say
Step20: Now launch the second worker. This will start the training since all the workers are active (so there's no need to background this process)
Step21: Now if you recheck the logs written by the first worker you'll see that it participated in training that model
Step22: Unsurprisingly this ran slower than the test run at the beginning of this tutorial. Running multiple workers on a single machine only adds overhead. The goal here was not to improve the training time, but only to give an example of multi-worker training.
Step23: Multi worker training in depth
So far this tutorial has demonstrated a basic multi-worker setup. The rest of this document looks in detail other factors which may be useful or important for real use cases.
Dataset sharding
In multi-worker training, dataset sharding is needed to ensure convergence and performance.
The example in the previous section relies on the default autosharding provided by the tf.distribute.Strategy API. You can control the sharding by setting the tf.data.experimental.AutoShardPolicy of the tf.data.experimental.DistributeOptions. To learn more about auto-sharding see the Distributed input guide.
Here is a quick example of how to turn OFF the auto sharding, so each replica processes every example (not recommended)
Step24: Evaluation
If you pass validation_data into model.fit, it will alternate between training and evaluation for each epoch. The evaluation taking validation_data is distributed across the same set of workers and the evaluation results are aggregated and available for all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set validation_steps. A repeated dataset is also recommended for evaluation.
Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted.
Performance
You now have a Keras model that is all set up to run in multiple workers with MultiWorkerMirroredStrategy. You can try the following techniques to tweak performance of multi-worker training with MultiWorkerMirroredStrategy.
MultiWorkerMirroredStrategy provides multiple collective communication implementations. RING implements ring-based collectives using gRPC as the cross-host communication layer. NCCL uses Nvidia's NCCL to implement collectives. AUTO defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify communication_options parameter of MultiWorkerMirroredStrategy's constructor, e.g. communication_options=tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CollectiveCommunication.NCCL).
Cast the variables to tf.float if possible. The official ResNet model includes an example of how this can be done.
Fault tolerance
In synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with tf.distribute.Strategy comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.
When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed.
Note
Step25: With that, you're now ready to save
Step26: As described above, later on the model should only be loaded from the path chief saved to, so let's remove the temporary ones the non-chief workers saved
Step27: Now, when it's time to load, let's use convenient tf.keras.models.load_model API, and continue with further work. Here, assume only using single worker to load and continue training, in which case you do not call tf.keras.models.load_model within another strategy.scope().
Step28: Checkpoint saving and restoring
On the other hand, checkpointing allows you to save model's weights and restore them without having to save the whole model. Here, you'll create one tf.train.Checkpoint that tracks the model, which is managed by a tf.train.CheckpointManager so that only the latest checkpoint is preserved.
Step29: Once the CheckpointManager is set up, you're now ready to save, and remove the checkpoints non-chief workers saved.
Step30: Now, when you need to restore, you can find the latest checkpoint saved using the convenient tf.train.latest_checkpoint function. After restoring the checkpoint, you can continue with training.
Step31: BackupAndRestore callback
BackupAndRestore callback provides fault tolerance functionality, by backing up the model and current epoch number in a temporary checkpoint file under backup_dir argument to BackupAndRestore. This is done at the end of each epoch.
Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.
To use it, provide an instance of tf.keras.callbacks.experimental.BackupAndRestore at the tf.keras.Model.fit() call.
With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.
BackupAndRestore callback uses CheckpointManager to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, backup_dir should not be re-used to store other checkpoints in order to avoid name collision.
Currently, BackupAndRestore callback supports single worker with no strategy, MirroredStrategy, and multi-worker with MultiWorkerMirroredStrategy.
Below are two examples for both multi-worker training and single worker training. | Python Code:
import json
import os
import sys
Explanation: Multi-worker training with Keras
Learning Objectives
Multi-worker Configuration
Choose the right strategy
Train the model
Multi worker training in depth
Introduction
This notebook demonstrates multi-worker distributed training with Keras model using tf.distribute.Strategy API, specifically tf.distribute.MultiWorkerMirroredStrategy. With the help of this strategy, a Keras model that was designed to run on single-worker can seamlessly work on multiple workers with minimal code change.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Setup
First, some necessary imports.
End of explanation
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
Explanation: Before importing TensorFlow, make a few changes to the environment.
Disable all GPUs. This prevents errors caused by the workers all trying to use the same GPU. For a real application each worker would be on a different machine.
End of explanation
os.environ.pop('TF_CONFIG', None)
Explanation: Reset the TF_CONFIG environment variable, you'll see more about this later.
End of explanation
if '.' not in sys.path:
sys.path.insert(0, '.')
Explanation: Be sure that the current directory is on python's path. This allows the notebook to import the files written by %%writefile later.
End of explanation
import tensorflow as tf
print(tf.__version__)
Explanation: Now import TensorFlow.
End of explanation
%%writefile mnist.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# You need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
Explanation: Dataset and model definition
Next create an mnist.py file with a simple model and dataset setup. This python file will be used by the worker-processes in this tutorial:
End of explanation
import mnist
batch_size = 64
single_worker_dataset = mnist.mnist_dataset(batch_size)
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
Explanation: Try training the model for a small number of epochs and observe the results of a single worker to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
End of explanation
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
Explanation: Multi-worker Configuration
Now let's enter the world of multi-worker training. In TensorFlow, the TF_CONFIG environment variable is required for training on multiple machines, each of which possibly has a different role. TF_CONFIG is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.
Here is an example configuration:
End of explanation
# converts a Python object into a json string.
# TODO
json.dumps(tf_config)
Explanation: Here is the same TF_CONFIG serialized as a JSON string:
End of explanation
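As a quick sanity check, the serialized string can be decoded again exactly as each worker will later do when it reads TF_CONFIG. This is a sketch; the values mirror the tf_config defined above, and the printed key order may differ.
serialized = json.dumps(tf_config)
print(serialized)  # e.g. {"cluster": {"worker": ["localhost:12345", "localhost:23456"]}, "task": {"type": "worker", "index": 0}}
recovered = json.loads(serialized)
assert recovered['task']['index'] == 0           # worker 0 is treated as the chief
assert len(recovered['cluster']['worker']) == 2  # two workers in the cluster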
os.environ['GREETINGS'] = 'Hello TensorFlow!'
Explanation: There are two components of TF_CONFIG: cluster and task.
cluster is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs such as worker. In multi-worker training with MultiWorkerMirroredStrategy, there is usually one worker that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular worker does. Such a worker is referred to as the chief worker, and it is customary that the worker with index 0 is appointed as the chief worker (in fact this is how tf.distribute.Strategy is implemented).
task provides information of the current task and is different on each worker. It specifies the type and index of that worker.
In this example, you set the task type to "worker" and the task index to 0. This machine is the first worker and will be appointed as the chief worker and do more work than the others. Note that other machines will need to have the TF_CONFIG environment variable set as well, and it should have the same cluster dict, but different task type or task index depending on what the roles of those machines are.
For illustration purposes, this tutorial shows how one may set a TF_CONFIG with 2 workers on localhost. In practice, users would create multiple workers on external IP addresses/ports, and set TF_CONFIG on each worker appropriately.
In this example you will use 2 workers, the first worker's TF_CONFIG is shown above. For the second worker you would set tf_config['task']['index']=1
Above, tf_config is just a local variable in python. To actually use it to configure training, this dictionary needs to be serialized as JSON, and placed in the TF_CONFIG environment variable.
Environment variables and subprocesses in notebooks
Subprocesses inherit environment variables from their parent. So if you set an environment variable in this jupyter notebook process:
End of explanation
%%bash
echo ${GREETINGS}
Explanation: You can access the environment variable from a subprocess:
End of explanation
# A distribution strategy for synchronous training on multiple workers.
# TODO
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
Explanation: In the next section, you'll use this to pass the TF_CONFIG to the worker subprocesses. You would never really launch your jobs this way, but it's sufficient for the purposes of this tutorial: To demonstrate a minimal multi-worker example.
Choose the right strategy
In TensorFlow there are two main forms of distributed training:
Synchronous training, where the steps of training are synced across the workers and replicas, and
Asynchronous training, where the training steps are not strictly synced.
MultiWorkerMirroredStrategy, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.
To train the model, use an instance of tf.distribute.MultiWorkerMirroredStrategy.
MultiWorkerMirroredStrategy creates copies of all variables in the model's layers on each device across all workers. It uses CollectiveOps, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The tf.distribute.Strategy guide has more details about this strategy.
End of explanation
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
# TODO
multi_worker_model = mnist.build_and_compile_cnn_model()
Explanation: Note: TF_CONFIG is parsed and TensorFlow's GRPC servers are started at the time MultiWorkerMirroredStrategy() is called, so the TF_CONFIG environment variable must be set before a tf.distribute.Strategy instance is created. Since TF_CONFIG is not set yet the above strategy is effectively single-worker training.
MultiWorkerMirroredStrategy provides multiple implementations via the CommunicationOptions parameter. RING implements ring-based collectives using gRPC as the cross-host communication layer. NCCL uses Nvidia's NCCL to implement collectives. AUTO defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster.
Train the model
With the integration of tf.distribute.Strategy API into tf.keras, the only change you will make to distribute the training to multiple-workers is enclosing the model building and model.compile() call inside strategy.scope(). The distribution strategy's scope dictates how and where the variables are created, and in the case of MultiWorkerMirroredStrategy, the variables created are MirroredVariables, and they are replicated on each of the workers.
End of explanation
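To make the CommunicationOptions note above concrete, the override would look roughly like the sketch below. It assumes a TensorFlow version in which tf.distribute.experimental.CommunicationOptions is available, and, as the note says, a strategy must be created at program startup, so this belongs in a worker script such as main.py rather than after the strategy already created above.
communication_options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CollectiveCommunication.NCCL)
strategy = tf.distribute.MultiWorkerMirroredStrategy(
    communication_options=communication_options)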
%%writefile main.py
import os
import json
import tensorflow as tf
import mnist
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist.mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
Explanation: Note: Currently there is a limitation in MultiWorkerMirroredStrategy where TensorFlow ops need to be created after the instance of strategy is created. If you see RuntimeError: Collective ops must be configured at program startup, try creating the instance of MultiWorkerMirroredStrategy at the beginning of the program and put the code that may create ops after the strategy is instantiated.
To actually run with MultiWorkerMirroredStrategy you'll need to run worker processes and pass a TF_CONFIG to them.
Like the mnist.py file written earlier, here is the main.py that each of the workers will run:
End of explanation
%%bash
ls *.py
Explanation: In the code snippet above note that the global_batch_size, which gets passed to Dataset.batch, is set to per_worker_batch_size * num_workers. This ensures that each worker processes batches of per_worker_batch_size examples regardless of the number of workers.
The current directory now contains both Python files:
End of explanation
os.environ['TF_CONFIG'] = json.dumps(tf_config)
Explanation: So json-serialize the TF_CONFIG and add it to the environment variables:
End of explanation
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
Explanation: Now, you can launch a worker process that will run the main.py and use the TF_CONFIG:
End of explanation
import time
time.sleep(10)
Explanation: There are a few things to note about the above command:
It uses the %%bash which is a notebook "magic" to run some bash commands.
It uses the --bg flag to run the bash process in the background, because this worker will not terminate. It waits for all the workers before it starts.
The backgrounded worker process won't print output to this notebook, so the &> redirects its output to a file, so you can see what happened.
So, wait a few seconds for the process to start up:
End of explanation
%%bash
cat job_0.log
Explanation: Now look what's been output to the worker's logfile so far:
End of explanation
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
Explanation: The last line of the log file should say: Started server with target: grpc://localhost:12345. The first worker is now ready, and is waiting for all the other worker(s) to be ready to proceed.
So update the tf_config for the second worker's process to pick up:
End of explanation
%%bash
python main.py
Explanation: Now launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
End of explanation
%%bash
cat job_0.log
Explanation: Now if you recheck the logs written by the first worker you'll see that it participated in training that model:
End of explanation
# Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
Explanation: Unsurprisingly this ran slower than the test run at the beginning of this tutorial. Running multiple workers on a single machine only adds overhead. The goal here was not to improve the training time, but only to give an example of multi-worker training.
End of explanation
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
global_batch_size = 64
multi_worker_dataset = mnist.mnist_dataset(batch_size=64)
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
Explanation: Multi worker training in depth
So far this tutorial has demonstrated a basic multi-worker setup. The rest of this document looks in detail other factors which may be useful or important for real use cases.
Dataset sharding
In multi-worker training, dataset sharding is needed to ensure convergence and performance.
The example in the previous section relies on the default autosharding provided by the tf.distribute.Strategy API. You can control the sharding by setting the tf.data.experimental.AutoShardPolicy of the tf.data.experimental.DistributeOptions. To learn more about auto-sharding see the Distributed input guide.
Here is a quick example of how to turn OFF the auto sharding, so each replica processes every example (not recommended):
End of explanation
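For contrast, a sketch of the recommended alternative: the policy can also be set to DATA, which shards by example rather than by file and is the usual choice for in-memory datasets such as the MNIST pipeline used here.
options_by_data = tf.data.Options()
options_by_data.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA
dataset_sharded_by_example = mnist.mnist_dataset(64).with_options(options_by_data)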
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
# Note: there are two possible `TF_CONFIG` configuration.
# 1) In addition to `worker` tasks, a `chief` task type is use;
# in this case, this function should be modified to
# `return task_type == 'chief'`.
# 2) Only `worker` task type is used; in this case, worker 0 is
# regarded as the chief. The implementation demonstrated here
# is for this case.
# For the purpose of this colab section, we also add `task_type is None`
# case because it is effectively run with only single worker.
return (task_type == 'worker' and task_id == 0) or task_type is None
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
Explanation: Evaluation
If you pass validation_data into model.fit, it will alternate between training and evaluation for each epoch. The evaluation taking validation_data is distributed across the same set of workers and the evaluation results are aggregated and available for all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set validation_steps. A repeated dataset is also recommended for evaluation.
Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted.
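As a rough sketch of that setup (here the validation dataset simply reuses this tutorial's mnist_dataset helper as a stand-in, and the batch size and step counts are placeholders rather than tuned values):
python
global_batch_size = 64
validation_dataset = mnist.mnist_dataset(batch_size=global_batch_size).repeat()
multi_worker_model.fit(multi_worker_dataset,
                       epochs=3,
                       steps_per_epoch=70,
                       validation_data=validation_dataset,
                       validation_steps=10)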
Performance
You now have a Keras model that is all set up to run in multiple workers with MultiWorkerMirroredStrategy. You can try the following techniques to tweak performance of multi-worker training with MultiWorkerMirroredStrategy.
MultiWorkerMirroredStrategy provides multiple collective communication implementations. RING implements ring-based collectives using gRPC as the cross-host communication layer. NCCL uses Nvidia's NCCL to implement collectives. AUTO defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify communication_options parameter of MultiWorkerMirroredStrategy's constructor, e.g. communication_options=tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CollectiveCommunication.NCCL).
Cast the variables to tf.float if possible. The official ResNet model includes an example of how this can be done.
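For instance, a strategy pinned to NCCL collectives could be constructed roughly like this (a sketch only: depending on your TensorFlow version the strategy may live under tf.distribute or tf.distribute.experimental, and whether NCCL is the right choice depends on your GPUs and interconnect):
python
communication_options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CollectiveCommunication.NCCL)
strategy = tf.distribute.MultiWorkerMirroredStrategy(
    communication_options=communication_options)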
Fault tolerance
In synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with tf.distribute.Strategy comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.
When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed.
Note:
Previously, the ModelCheckpoint callback provided a mechanism to restore training state upon restart from job failure for multi-worker training. The TensorFlow team is introducing a new BackupAndRestore callback, which also adds support for single-worker training for a consistent experience, and has removed the fault tolerance functionality from the existing ModelCheckpoint callback. From now on, applications that rely on this behavior should migrate to the new callback.
ModelCheckpoint callback
ModelCheckpoint callback no longer provides fault tolerance functionality, please use BackupAndRestore callback instead.
The ModelCheckpoint callback can still be used to save checkpoints. But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, the user is responsible to load the model manually.
Optionally the user can choose to save and restore model/weights outside ModelCheckpoint callback.
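A minimal sketch of that manual workflow (the checkpoint directory below is only an illustration, and in a real multi-worker run you would combine this with the chief/non-chief directory logic described in the model saving section):
python
mc_checkpoint_dir = '/tmp/keras-model-ckpt'
multi_worker_model.fit(multi_worker_dataset,
                       epochs=3,
                       steps_per_epoch=70,
                       callbacks=[tf.keras.callbacks.ModelCheckpoint(
                           filepath=mc_checkpoint_dir + '/weights-{epoch:02d}',
                           save_weights_only=True)])
# After an interruption, you are responsible for restoring the weights manually.
multi_worker_model.load_weights(tf.train.latest_checkpoint(mc_checkpoint_dir))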
Model saving and loading
To save your model using model.save or tf.saved_model.save, the destination for saving needs to be different for each worker. On the non-chief workers, you will need to save the model to a temporary directory, and on the chief, you will need to save to the provided model directory. The temporary directories on the workers need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The models saved in all the directories are identical, and typically only the model saved by the chief should be referenced for restoring or serving. You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.
The reason you need to save on the chief and workers at the same time is because you might be aggregating variables during checkpointing which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting chief and workers save to the same model directory will result in errors due to contention.
With MultiWorkerMirroredStrategy, the program is run on every worker, and in order to know whether the current worker is chief, it takes advantage of the cluster resolver object that has attributes task_type and task_id. task_type tells you what the current job is (e.g. 'worker'), and task_id tells you the identifier of the worker. The worker with id 0 is designated as the chief worker.
In the code snippet below, write_filepath provides the file path to write, which depends on the worker id. In the case of chief (worker with id 0), it writes to the original file path; for others, it creates a temporary directory (with id in the directory path) to write in:
End of explanation
multi_worker_model.save(write_model_path)
Explanation: With that, you're now ready to save:
End of explanation
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
Explanation: As described above, later on the model should only be loaded from the path chief saved to, so let's remove the temporary ones the non-chief workers saved:
End of explanation
# load a model saved via model.save()
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, and can continue with the training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
Explanation: Now, when it's time to load, let's use the convenient tf.keras.models.load_model API and continue with further work. Here, assume you are only using a single worker to load and continue training, in which case you do not call tf.keras.models.load_model within another strategy.scope().
End of explanation
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
Explanation: Checkpoint saving and restoring
On the other hand, checkpointing allows you to save model's weights and restore them without having to save the whole model. Here, you'll create one tf.train.Checkpoint that tracks the model, which is managed by a tf.train.CheckpointManager so that only the latest checkpoint is preserved.
End of explanation
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
Explanation: Once the CheckpointManager is set up, you're now ready to save, and remove the checkpoints non-chief workers saved.
End of explanation
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
Explanation: Now, when you need to restore, you can find the latest checkpoint saved using the convenient tf.train.latest_checkpoint function. After restoring the checkpoint, you can continue with training.
End of explanation
# Multi-worker training with MultiWorkerMirroredStrategy.
callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
Explanation: BackupAndRestore callback
The BackupAndRestore callback provides fault tolerance functionality by backing up the model and the current epoch number in a temporary checkpoint file under the backup_dir argument to BackupAndRestore. This is done at the end of each epoch.
Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.
To use it, provide an instance of tf.keras.callbacks.experimental.BackupAndRestore at the tf.keras.Model.fit() call.
With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.
BackupAndRestore callback uses CheckpointManager to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, backup_dir should not be re-used to store other checkpoints in order to avoid name collision.
Currently, BackupAndRestore callback supports single worker with no strategy, MirroredStrategy, and multi-worker with MultiWorkerMirroredStrategy.
Below are two examples for both multi-worker training and single worker training.
End of explanation |
15,527 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'mri-agcm3-2', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MRI
Source ID: MRI-AGCM3-2
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:18
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
15,528 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Golden Spirals
Steve loves nature. He likes looking at the pretty patterns in nature like the one below.
The spiral in the picture above has a special name - it is called a Golden Spiral. The Golden Spiral is found everywhere in nature - in flowers, in pinecones, in hurricanes, in galaxies, in DNA molecules and many more.
The Golden Spiral is related to a special number in math called the Golden Ratio = 1.618. In this adventure we will build natural-looking spirals using the Golden Ratio that make Steve happy as he explores the Minecraft world.
Fibonacci series
Let's look at a few number series
Step1: Verify that your Fibonacci series is correct by using the function fib_list
python
fib_list(10)
Step2: Task 3
Step3: Task 4
Step4: Task 5 | Python Code:
# generate a list of Fibonacci series starting with 1
import itertools
import math
import numpy as np
def fib_function():
a, b = 0, 1
while True:
yield a
a, b = b, a + b
def fib_list(n):
fib = fib_function()
# skip the leading 0 so the list starts 1, 1, 2, 3, 5, ...
return list(itertools.islice(fib, n + 1))[1:]
Explanation: Golden Spirals
Steve loves nature. He likes looking at the pretty patterns in nature like the one below.
The spiral in the picture above has a special name - it is called a Golden Spiral. The Golden Spiral is found everywhere in nature - in flowers, in pinecones, in hurricanes, in galaxies, in DNA molecules and many more.
The Golden Spiral is related to a special number in math called the Golden Ratio = 1.618. In this adventure we will build natural-looking spirals using the Golden Ratio that make Steve happy as he explores the Minecraft world.
Fibonacci series
Let's look at a few number series:
The series of natural numbers is - 1,2,3,4,5,...
The series of even numbers is - 2,4,6,8,...
The series of odd numbers is - 1,3,5,7,...
There is a special sequence of numbers called the Fibonacci Series that helps us calculate the golden ratio.
The Fibonacci series = 1,1,2,3,5,8,...
It turns out that the ratios of neighboring Fibonacci numbers get closer and closer to the Golden Ratio.
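You can check this with a little division - each Fibonacci number divided by the one before it gets closer and closer to 1.618 (this snippet is just for illustration):
python
fibs = [1, 1, 2, 3, 5, 8]
for a, b in zip(fibs, fibs[1:]):
    print(b / float(a))  # 1.0, 2.0, 1.5, 1.666..., 1.6 - the ratios keep closing in on 1.618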
Task 1: Calculating Fibonacci series
Your first task is to calculate the first 10 numbers in the Fibonacci series
Fibonacci series = 1,1,2,3,5,8,...
Fill in the rest of the series below:
1, 1, 2, __ , __ , __ , __ , __ , __ , __ , __
Task 2: Verify the Fibonacci series in Task 1
Run the code block below, which defines a function fib_list for the Fibonacci series.
End of explanation
# Task 2 program
Explanation: Verify that your Fibonacci series is correct by using the function fib_list
python
fib_list(10)
End of explanation
# return a range if the numbers are different
def ordered_range(a,b):
if (a == b):
return [a]
elif (a < b):
return range(a,b+1)
else:
return range(a,b-1,-1)
# replace None with previous value of coordinate
def replace_null(acc,p):
null_replacer = lambda (a,b): b if (a is None) else a
if not acc:
return [p]
else:
previous_point = acc[-1]
next_point = map(null_replacer, zip(p,previous_point))
return acc + [next_point]
# a point p has (x,y,z) coordinates
def line_coordinates(p1,p2):
x1,y1,z1 = p1[0],p1[1],p1[2]
x2,y2,z2 = p2[0],p2[1],p2[2]
points = reduce(replace_null,itertools.izip_longest(ordered_range(x1,x2),ordered_range(y1,y2),ordered_range(z1,z2)),[])
return points
# generate the sequence [(1,0),(0,1),(-1,0),(0,-1),(1,0),...]
def next_mult((m,n)):
return (-1*n,m)
# return a new sequence = previous sequence + new segment
# and new start position for next segment new_start
def fib_segment((prev_sequence,(xm,zm)),segment_length):
start = prev_sequence[-1]
x1,y1,z1 = start[0],start[1],start[2]
x2,y2,z2 = x1 + (xm * segment_length), y1, z1 + (zm * segment_length)
new_segment = line_coordinates((x1,y1,z1),(x2,y2,z2))[1:]
new_sequence = prev_sequence + new_segment
return (new_sequence,next_mult((xm,zm)))
# fibonacci coordinates
# alternating x,z using next_mult
def fib_coords(start, n):
fib_series = fib_list(n)
fib_series.reverse()
fib_points = reduce(fib_segment,fib_series,([start],(1,0)))
return fib_points[0]
#logarithmic spiral functions
def log_spiral_xy(theta, theta_transform):
a = 1
b = 0.3
x = a*math.exp(b*theta)*math.cos(theta_transform(theta))
z = a*math.exp(b*theta)*math.sin(theta_transform(theta))
return (x,z)
def theta_to_xyz((x,y,z),t, theta_transform):
x1,z1 = log_spiral_xy(t, theta_transform)
return (x1,y,z1)
# log spiral coordinates
def log_sequence((x,y,z),rads, theta_transform = lambda t: t):
logseq = map(lambda t: tuple(map(lambda x, y: int(round(x + y)), (x,y,z), theta_to_xyz((x,y,z),t, theta_transform))) ,rads)
loglines = reduce(lambda acc,s: acc + line_coordinates(s[0],s[1]), zip(logseq,logseq[1:]),[])
return loglines
# Minecraft initialization
import sys
#sys.path.append('/Users/esumitra/workspaces/mc/mcpipy')
import mcpi.minecraft as minecraft
import mcpi.block as block
import time
mc = minecraft.Minecraft.create()
# build in Minecraft
# common minecraft builder function
def mc_builder(blockid, point_function):
pos = mc.player.getTilePos()
start_point = (pos.x,pos.y,pos.z)
mc.postToChat("building in 5 seconds ...")
time.sleep(4)
mc.postToChat("building in 1 seconds ...")
time.sleep(1)
points = point_function(start_point)
for p in points:
mc.setBlock(p[0], p[1], p[2], blockid)
# builder for fibonacci
def build_fib(blockid,n=4):
mc_builder(blockid, lambda (x,y,z): fib_coords((x,y,z),n))
# builder for spiral with 1 arm
def build_spiral_arm(blockid, rads = np.arange(0,2*math.pi,0.25)):
mc_builder(blockid, lambda (x,y,z): log_sequence((x,y,z),rads))
# builder for spiral with 7 arms
def build_spiral(blockid, rads = np.arange(0,2*math.pi*2,0.1)):
spiral_points = (lambda p0:
reduce(lambda acc,x: acc + log_sequence(p0,rads,lambda t: t + x),
np.arange(0,2*math.pi,math.pi/3.0),
[]))
mc_builder(blockid, spiral_points)
def build_reverse_spiral(blockid, rads = np.arange(0,2*math.pi*2,0.1)):
spiral_points = (lambda p0:
reduce(lambda acc,x: acc + log_sequence(p0,rads,lambda t: -1.0 * (t + x)),
np.arange(0,2*math.pi,math.pi/3.0),
[]))
mc_builder(blockid, spiral_points)
Explanation: Task 3: Functions to build Fibonacci and Log Spirals
Nice job on calculating the Fibonacci series. Let's define some more functions to build spirals using the Fibonacci series and the spirals from the picture.
Just run all the cells below until you reach Task 4.
End of explanation
# Task 4
Explanation: Task 4: Fibonacci and Log Spirals
Ready for some fun? Find a location in Minecraft where you want to generate a spiral and run the programs below. Use the program build_fib to build a Fibonacci spiral arm and build_spiral_arm to build a golden spiral arm.
python
build_fib(block.DIAMOND_BLOCK.id)
build_spiral_arm(block.STONE.id)
Try using different numbers and building blocks in the builder functions.
End of explanation
# Task 5
Explanation: Task 5: Spiral Landscaping
For your last task, let's shape the landscape using the build_spiral and build_reverse_spiral functions with different blocks. Try using natural blocks like block.FLOWER_YELLOW.id or block.TORCH.id at night in your program. Move to different locations in Minecraft and run the function to build a spiral at that location.
python
build_spiral(block.FLOWER_YELLOW.id)
Have fun!
End of explanation |
15,529 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inference Wrappers use cases
This is an example of the PySAL segregation framework to perform inference on a single value and comparative inference using simulations under the null hypothesis. Once the segregation classes are fitted, the user can perform inference to shed light on statistical significance in regional analysis. Currently, it is possible to make inference for a single measure or for two values of the same measure.
The summary of the inference wrappers is presented in the following Table
Step1: Then it's time to load some data to estimate segregation. We use 2000 Census Tract data for the metropolitan area of Sacramento, CA, USA.
We use a geopandas dataframe available in the PySAL examples repository.
For more information about the data
Step2: We also can plot the spatial distribution of the composition of the Hispanic population over the tracts of Sacramento
Step3: Single Value
Dissimilarity
The SingleValueTest function expects to receive a pre-fitted segregation class and then uses the underlying data to iterate over the null hypothesis, comparing the results with the point estimate of the index. Thus, we first need to estimate some measure. We can fit the classic Dissimilarity index
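A minimal sketch of that fit is shown below (assuming the Dissim class is exposed under the aspatial module in this PySAL release; the exact import path may differ between versions):
python
from pysal.explore.segregation.aspatial import Dissim
D = Dissim(gdf, 'HISP_', 'TOT_POP')
D.statistic  # roughly 0.32 for the Sacramento data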
Step4: The question that may arise is "Is this value of 0.32 statistically significant under some pre-specified circumstance?". To answer this, it is possible to rely on the SingleValueTest function to generate several values of the same index (in this case the Dissimilarity Index) under the null hypothesis and compare them with the one estimated from the Sacramento dataset. To generate 1000 values assuming evenness, you can run
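Such a call could look roughly like this (the variable name infer_D_eve is only illustrative; the arguments follow the summary table above):
python
infer_D_eve = SingleValueTest(D, iterations_under_null=1000,
                              null_approach="evenness", two_tailed=True)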
Step5: This class has a quick plotting method to inspect the generated distribution with the estimated value from the sample (vertical red line)
Step6: It is possible to see that the value of 0.3218 is clearly in the far right of the distribution, indicating that the Hispanic group is, indeed, significantly segregated in terms of the Dissimilarity index under evenness. You can also check the mean value of the distribution using the est_sim attribute, which holds all the D draws from the simulations
Step7: The two-tailed p-value of the following hypothesis test
Step8: Therefore, we can conclude that Sacramento is statistically segregated at the 5% significance level (p-value < 5%) in terms of D.
You can also test under different approaches for the null hypothesis
Step9: The conclusions are analogous to the evenness approach.
Relative Concentration
The SingleValueTest wrapper can handle any class of the PySAL segregation module. It is possible to use it with the Relative Concentration (RCO) segregation index
Step10: Since RCO is a spatial index (i.e. it depends on the spatial context), it makes sense to use the permutation null approach. This approach relies on randomly allocating the sample values over the spatial units and recalculating the chosen index in each iteration.
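A sketch of that workflow, assuming the RelativeConcentration class is available in the spatial module of this release:
python
from pysal.explore.segregation.spatial import RelativeConcentration
RCO = RelativeConcentration(gdf, 'HISP_', 'TOT_POP')
infer_RCO_per = SingleValueTest(RCO, iterations_under_null=1000,
                                null_approach="permutation", two_tailed=True)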
Step11: Analogously, the conclusion for the Relative Concentration index is that Sacramento is not significantly concentrated for the Hispanic group (at the 5% significance level, since the p-value > 5%).
Additionally, it is possible to combine the null approaches, establishing, for example, a permutation along with evenness of the frequency of the Sacramento Hispanic group. With this, the conclusion of the Relative Concentration changes.
Step12: Relative Centralization
Using the same permutation approach for the Relative Centralization (RCE) segregation index
Step13: The conclusion is that the Hispanic group is significantly negatively centralized (the point estimate falls on the left side of the distribution). This behavior can, to some extent, be inspected in the map, as the composition tends to be more concentrated outside of the center of the overall region.
Comparative Inference
To compare two different values, the user can rely on the TwoValueTest function. Similar to the previous function, the user needs to pass two fitted segregation classes to be compared, establish the number of iterations under the null hypothesis with iterations_under_null, specify which type of null hypothesis the inference will iterate over with the null_approach argument and, also, can pass additional parameters for each segregation estimation.
Obs.
Step14: In this example, we are interested in assessing the comparative segregation of the non-Hispanic black population in the census tracts of Riverside County, CA, between 2000 and 2010. Therefore, we extract the desired columns and add some auxiliary variables
Step15: Filtering Riverside County and desired years of the analysis
Step16: Merging it with desired map.
Step17: Map of 2000
Step18: Map of 2010
Step19: A question that may rise is "Was it more or less segregated than 2000?". To answer this, we rely on simulations to test the following hypothesis
Step20: We can see that Riverside was more segregated in 2000 than in 2010. But, was this point difference statistically significant? We use the random_label approach which consists in random labelling the data between the two periods and recalculating the Dissimilarity statistic (D) in each iteration and comparing it to the original value.
Step21: The TwoValueTest class also has a plotting method
Step22: To access the two-tailed p-value of the test
Step23: The conclusion is that, for the Dissimilarity index and 5% of significance, segregation in Riverside was not different between 2000 and 2010 (since p-value > 5%).
Comparative Gini
Analogously, the same steps can be made for the Gini segregation index.
Step24: The absence of significance is also present as the point estimation of the difference (vertical red line) is located in the middle of the distribution of the null hypothesis simulated.
Comparative Spatial Dissimilarity
As an example of a spatial index, comparative inference can be performed for the Spatial Dissimilarity Index (SD). For this, we use the counterfactual_composition approach as an example.
In this framework, the population of the group of interest in each unit is randomized with a constraint that depends on both cumulative density functions (cdf) of the group of interest composition to the group of interest frequency of each unit. In each unit of each iteration, there is a probability of 50\% of keeping its original value or swapping to its corresponding value according of the other composition distribution cdf that it is been compared against. | Python Code:
%matplotlib inline
import geopandas as gpd
from pysal.explore import segregation
import pysal.lib
import pandas as pd
import numpy as np
from pysal.explore.segregation.inference import SingleValueTest, TwoValueTest
Explanation: Inference Wrappers use cases
This is an example of the PySAL segregation framework to perform inference on a single value and comparative inference using simulations under the null hypothesis. Once the segregation classes are fitted, the user can perform inference to shed light for statistical significance in regional analysis. Currently, it is possible to make inference for a single measure or for two values of the same measure.
The summary of the inference wrappers is presented in the following Table:
| Inference Type | Class/Function | Function main Inputs | Function Outputs |
| :----------------- | :------------------- | :------------------------------------------------------: | :----------------------------------: |
| Single Value | SingleValueTest | seg_class, iterations_under_null, null_approach, two_tailed | p_value, est_sim, statistic |
| Two Value | TwoValueTest | seg_class_1, seg_class_2, iterations_under_null, null_approach | p_value, est_sim, est_point_diff |
Firstly let's import the module/functions for the use case:
End of explanation
s_map = gpd.read_file(pysal.lib.examples.get_path("sacramentot2.shp"))
s_map.columns
gdf = s_map[['geometry', 'HISP_', 'TOT_POP']]
Explanation: Then it's time to load some data to estimate segregation. We use the data of 2000 Census Tract Data for the metropolitan area of Sacramento, CA, USA.
We use a geopandas dataframe available in PySAL examples repository.
For more information about the data: https://github.com/pysal/pysal.lib/tree/master/pysal.lib/examples/sacramento2
End of explanation
gdf['composition'] = gdf['HISP_'] / gdf['TOT_POP']
gdf.plot(column = 'composition',
cmap = 'OrRd',
figsize=(20,10),
legend = True)
Explanation: We also can plot the spatial distribution of the composition of the Hispanic population over the tracts of Sacramento:
End of explanation
from pysal.explore.segregation.aspatial import Dissim
D = Dissim(gdf, 'HISP_', 'TOT_POP')
D.statistic
Explanation: Single Value
Dissimilarity
The SingleValueTest function expect to receive a pre-fitted segregation class and then it uses the underlying data to iterate over the null hypothesis and comparing the results with point estimation of the index. Thus, we need to firstly estimate some measure. We can fit the classic Dissimilarity index:
End of explanation
infer_D_eve = SingleValueTest(D, iterations_under_null = 1000, null_approach = "evenness", two_tailed = True)
Explanation: The question that may rise is "Is this value of 0.32 statistically significant under some pre-specified circumstance?". To answer this, it is possible to rely on the Infer_Segregation function to generate several values of the same index (in this case the Dissimilarity Index) under the hypothesis and compare them with the one estimated by the dataset of Sacramento. To generate 1000 values assuming evenness, you can run:
End of explanation
infer_D_eve.plot()
Explanation: This class has a quick plotting method to inspect the generated distribution with the estimated value from the sample (vertical red line):
End of explanation
infer_D_eve.est_sim.mean()
Explanation: It is possible to see that clearly the value of 0.3218 is far-right in the distribution indicating that the hispanic group is, indeed, significantly segregated in terms of the Dissimilarity index under evenness. You can also check the mean value of the distribution using the est_sim attribute which represents all the D draw from the simulations:
End of explanation
infer_D_eve.p_value
Explanation: The two-tailed p-value of the following hypothesis test:
$$H_0: under \ evenness, \ Sacramento \ IS \ NOT \ segregated \ in \ terms \ of \ the \ Dissimilarity \ index \ (D)$$
$$H_1: under \ evenness, \ Sacramento \ IS \ segregated \ in \ terms \ of \ the \ Dissimilarity \ index \ (D)$$
can be accessed with the p_value attribute:
End of explanation
infer_D_sys = SingleValueTest(D, iterations_under_null = 5000, null_approach = "systematic", two_tailed = True)
infer_D_sys.plot()
Explanation: Therefore, we can conclude that Sacramento is statistically segregated at 5% of significance level (p.value < 5%) in terms of D.
You can also test under different approaches for the null hypothesis:
End of explanation
from pysal.explore.segregation.spatial import RelativeConcentration
RCO = RelativeConcentration(gdf, 'HISP_', 'TOT_POP')
Explanation: The conclusions are analogous as the evenness approach.
Relative Concentration
The Infer_Segregation wrapper can handle any class of the PySAL segregation module. It is possible to use it in the Relative Concentration (RCO) segregation index:
End of explanation
infer_RCO_per = SingleValueTest(RCO, iterations_under_null = 1000, null_approach = "permutation", two_tailed = True)
infer_RCO_per.plot()
infer_RCO_per.p_value
Explanation: Since RCO is an spatial index (i.e. depends on the spatial context), it makes sense to use the permutation null approach. This approach relies on randomly allocating the sample values over the spatial units and recalculating the chosen index to all iterations.
End of explanation
infer_RCO_eve_per = SingleValueTest(RCO, iterations_under_null = 1000, null_approach = "even_permutation", two_tailed = True)
infer_RCO_eve_per.plot()
Explanation: Analogously, the conclusion for the Relative Concentration index is that Sacramento is not significantly (under 5% of significance, because p-value > 5%) concentrated for the hispanic people.
Additionaly, it is possible to combine the null approaches establishing, for example, a permutation along with evenness of the frequency of the Sacramento hispanic group. With this, the conclusion of the Relative Concentration changes.
End of explanation
from pysal.explore.segregation.spatial import RelativeCentralization
RCE = RelativeCentralization(gdf, 'HISP_', 'TOT_POP')
infer_RCE_per = SingleValueTest(RCE, iterations_under_null = 1000, null_approach = "permutation", two_tailed = True)
infer_RCE_per.plot()
Explanation: Relative Centralization
Using the same permutation approach for the Relative Centralization (RCE) segregation index:
End of explanation
import os
#os.chdir('path_to_zipfiles')
import geosnap
from geosnap.data.data import read_ltdb
sample = "LTDB_Std_All_Sample.zip"
full = "LTDB_Std_All_fullcount.zip"
read_ltdb(sample = sample, fullcount = full)
df_pre = geosnap.data.db.ltdb
df_pre.head()
Explanation: The conclusion is that the hispanic group is negatively significantly (as the point estimation is in the left side of the distribution) in terms of centralization. This behavior can be, somehow, inspected in the map as the composition tends to be more concentraded outside of the center of the overall region.
Comparative Inference
To compare two different values, the user can rely on the TwoValueTest function. Similar to the previous function, the user needs to pass two segregation SM classes to be compared, establish the number of iterations under null hypothesis with iterations_under_null, specify which type of null hypothesis the inference will iterate with null_approach argument and, also, can pass additional parameters for each segregation estimation.
Obs.: in this case, each measure has to be the same class as it would not make much sense to compare, for example, a Gini index with a Delta index
This example uses all census data that the user must provide your own copy of the external database.
A step-by-step procedure for downloading the data can be found here: https://github.com/spatialucr/geosnap/tree/master/geosnap/data.
After the user download the zip files, you must provide the path to these files.
End of explanation
df = df_pre[['n_nonhisp_black_persons', 'n_total_pop', 'year']]
df['geoid'] = df.index
df['state'] = df['geoid'].str[0:2]
df['county'] = df['geoid'].str[2:5]
df.head()
Explanation: In this example, we are interested to assess the comparative segregation of the non-hispanic black people in the census tracts of the Riverside, CA, county between 2000 and 2010. Therefore, we extract the desired columns and add some auxiliary variables:
End of explanation
df_riv = df[(df['state'] == '06') & (df['county'] == '065') & (df['year'].isin(['2000', '2010']))]
df_riv.head()
Explanation: Filtering Riverside County and desired years of the analysis:
End of explanation
map_url = 'https://raw.githubusercontent.com/renanxcortes/inequality-segregation-supplementary-files/master/Tracts_grouped_by_County/06065.json'
map_gpd = gpd.read_file(map_url)
gdf = map_gpd.merge(df_riv,
left_on = 'GEOID10',
right_on = 'geoid')[['geometry', 'n_nonhisp_black_persons', 'n_total_pop', 'year']]
gdf['composition'] = np.where(gdf['n_total_pop'] == 0, 0, gdf['n_nonhisp_black_persons'] / gdf['n_total_pop'])
gdf.head()
gdf_2000 = gdf[gdf.year == 2000]
gdf_2010 = gdf[gdf.year == 2010]
Explanation: Merging it with desired map.
End of explanation
gdf_2000.plot(column = 'composition',
cmap = 'OrRd',
figsize = (30,5),
legend = True)
Explanation: Map of 2000:
End of explanation
gdf_2010.plot(column = 'composition',
cmap = 'OrRd',
figsize = (30,5),
legend = True)
Explanation: Map of 2010:
End of explanation
D_2000 = Dissim(gdf_2000, 'n_nonhisp_black_persons', 'n_total_pop')
D_2010 = Dissim(gdf_2010, 'n_nonhisp_black_persons', 'n_total_pop')
D_2000.statistic - D_2010.statistic
Explanation: A question that may rise is "Was it more or less segregated than 2000?". To answer this, we rely on simulations to test the following hypothesis:
$$H_0: Segregation\ Measure_{2000} - Segregation\ Measure_{2010} = 0$$
Comparative Dissimilarity
End of explanation
compare_D_fit = TwoValueTest(D_2000, D_2010, iterations_under_null = 1000, null_approach = "random_label")
Explanation: We can see that Riverside was more segregated in 2000 than in 2010. But, was this point difference statistically significant? We use the random_label approach which consists in random labelling the data between the two periods and recalculating the Dissimilarity statistic (D) in each iteration and comparing it to the original value.
End of explanation
compare_D_fit.plot()
Explanation: The TwoValueTest class also has a plotting method:
End of explanation
compare_D_fit.p_value
Explanation: To access the two-tailed p-value of the test:
End of explanation
from pysal.explore.segregation.aspatial import GiniSeg
G_2000 = GiniSeg(gdf_2000, 'n_nonhisp_black_persons', 'n_total_pop')
G_2010 = GiniSeg(gdf_2010, 'n_nonhisp_black_persons', 'n_total_pop')
compare_G_fit = TwoValueTest(G_2000, G_2010, iterations_under_null = 1000, null_approach = "random_label")
compare_G_fit.plot()
Explanation: The conclusion is that, for the Dissimilarity index and 5% of significance, segregation in Riverside was not different between 2000 and 2010 (since p-value > 5%).
Comparative Gini
Analogously, the same steps can be made for the Gini segregation index.
End of explanation
from pysal.explore.segregation.spatial import SpatialDissim
SD_2000 = SpatialDissim(gdf_2000, 'n_nonhisp_black_persons', 'n_total_pop')
SD_2010 = SpatialDissim(gdf_2010, 'n_nonhisp_black_persons', 'n_total_pop')
compare_SD_fit = TwoValueTest(SD_2000, SD_2010, iterations_under_null = 500, null_approach = "counterfactual_composition")
compare_SD_fit.plot()
Explanation: The absence of significance is also present as the point estimation of the difference (vertical red line) is located in the middle of the distribution of the null hypothesis simulated.
Comparative Spatial Dissimilarity
As an example of a spatial index, comparative inference can be performed for the Spatial Dissimilarity Index (SD). For this, we use the counterfactual_composition approach as an example.
In this framework, the population of the group of interest in each unit is randomized with a constraint that depends on both cumulative density functions (cdf) of the group of interest composition to the group of interest frequency of each unit. In each unit of each iteration, there is a probability of 50\% of keeping its original value or swapping to its corresponding value according of the other composition distribution cdf that it is been compared against.
End of explanation |
15,530 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
.. currentmodule
Step1: The learner doesn't do any heavy lifting itself, it manages the creation a sub-graph
of auxiliary
Step2: Fitting the learner puts three copies of the OLS estimator in the path
Step3: The main estimator, fitted on all data, gets stored into the
learner_ attribute, while the others are stored in the
sublearners_. These attributes are generators that create
new sub-learners with fitted estimators when called upon.
To generate predictions, we can either use the sublearners_
generator create cross-validated predictions, or learner_
generator to generate predictions for the whole input set.
Similarly to above, we predict by specifying the job and the data to use.
Now however, we must also specify the output array to populate.
In particular, the learner will populate the columns given in the
output_columns attribute, which is set with the setup call. If you
don't want it to start populating from the first column, you can pass the
n_left_concats argument to setup. Here, we use the transform task,
which uses the sublearners_ generator to produce cross-validated
predictions.
Step4: In the above loop, a sub-segment of P is updated by each sublearner
spawned by the learner. To instead produce predictions for the full
dataset using the estimator fitted on all training data,
task the learner to predict.
To streamline job generation across tasks and different classes, ML-Ensemble
features a
Step5: ML-Ensemble follows the Scikit-learn API, so if you wish to update any
hyper-parameters of the estimator, use the get_params and set_params
API
Step6: <div class="alert alert-info"><h4>Note</h4><p>Updating the indexer on one learner updates the indexer on all</p></div>
learners that where initiated with the same instance.
Partitioning
We can create several other types of learners by
varying the estimation strategy. An especially interesting strategy is to
partition the training set and create several learners fitted on a given
partition. This will create one prediction feature per partition.
In the following example we fit the OLS model using two partitions and
three fold CV on each partition. Note that by passing the output array
as an argument during 'fit', we perform a fit and transform operation.
Step7: Each sub-learner records fit and predict times during fitting, and if
a scorer is passed scores the predictions as well. The learner aggregates
this data into a raw_data attribute in the form of a list.
More conveniently, the data attribute returns a dict with a specialized
representation that gives a tabular output directly
Step8: Preprocessing
We can easily create a preprocessing pipeline before fitting the estimator.
In general, several estimators will share the same preprocessing pipeline,
so we don't want to pipeline the transformations in the estimator itself–
this will result in duplicate transformers.
As with estimators, transformers too define a computational sub-graph given
a cross-validation strategy. Preprocessing pipelines are therefore wrapped
by the
Step9: To build the learner we pass the name of the transformer as
the preprocess argument
Step10: We now repeat the above process to fit the learner, starting with fitting
the transformer. By using the
Step11: Note that the cache now contains the transformers as well
Step12: And estimation data is collected on a partition basis
Step13: Parallel estimation
Since the learner and transformer class do not perform estimations themselves,
we are free to modify the estimation behavior. For instance, to parallelize
estimation with several learners, we don't want a nested loop over each learner,
but instead flatten the for loops for maximal concurrency.
This is the topic of our next walk through. Here we show how to parallelize
estimation with a single learner using multiple threads
Step14: For a slightly more high-level API for parallel computation on a single
instance (of any accepted class), we can turn to the | Python Code:
from mlens.utils.dummy import OLS
from mlens.parallel import Learner, Job
from mlens.index import FoldIndex
indexer = FoldIndex(folds=2)
learner = Learner(estimator=OLS(),
indexer=indexer,
name='ols')
Explanation: .. currentmodule:: mlens.parallel
Learner Mechanics
ML-Ensemble is designed to provide an easy user interface. But it is also designed
to be extremely flexible, all the wile providing maximum concurrency at minimal
memory consumption. The lower-level API that builds the ensemble and manages the
computations is constructed in as modular a fashion as possible.
The low-level API introduces a computational graph-like environment that you can
directly exploit to gain further control over your ensemble. In fact, building
your ensemble through the low-level API is almost as straight forward as using the
high-level API. In this tutorial, we will walk through the basics :class:Learner
and :class:Transformer class.
The Learner API
^^^^^^^^^^^^^^^
Basics
The base estimator of ML-Ensemble is the :class:Learner instance. A learner is a
wrapper around a generic estimator along with a cross-validation strategy. The job
of the learner is to manage all sub-computations required for fitting and prediction.
In fact, it's public methods are generators from sub-learners, that do the actual
computation. A learner is the parent node of an estimator's computational sub-graph
induced by the cross-validation strategy.
A learner is created by specifying an estimator and an indexer, along with a
set of optional arguments, most notably the name of the learner. Naming is important,
is it is used for cache referencing. If setting it manually, ensure you give the learner
a unique name.
End of explanation
import os, tempfile
import numpy as np
X = np.arange(20).reshape(10, 2)
y = np.random.rand(10)
# Specify a cache directory
path = []
# Run the setup routine
learner.setup(X, y, 'fit')
# Run
for sub_learner in learner.gen_fit(X, y):
sub_learner.fit(path)
print("Cached items:\n%r" % path)
Explanation: The learner doesn't do any heavy lifting itself, it manages the creation a sub-graph
of auxiliary :class:SubLearner nodes for each fold during estimation.
This process is dynamic: the sub-learners are temporary instances created for each
estimation.
To fit a learner, we need a cache reference. When fitting all estimators from the
main process, this reference can be a list. If not (e.g. multiprocessing), the
reference should instead be a str pointing to the path of the cache directory.
Prior to running a job (fit, predict, transform), the learner must be
configured on the given data by calling the setup method. This takes cares of
indexing the training set for cross-validation, assigning output columns et.c.
End of explanation
learner.collect(path)
Explanation: Fitting the learner puts three copies of the OLS estimator in the path:
one for each fold and one for the full dataset.
These are named as [name].[col_id].[fold_id]. To load these into the
learner, we need to call collect.
End of explanation
path = []
P = np.zeros((y.shape[0], 2))
learner.setup(X, y, 'transform', n_left_concats=1)
for sub_learner in learner.gen_transform(X, P):
sub_learner.transform(path)
print('Output:')
print(P)
print()
Explanation: The main estimator, fitted on all data, gets stored into the
learner_ attribute, while the others are stored in the
sublearners_. These attributes are generators that create
new sub-learners with fitted estimators when called upon.
To generate predictions, we can either use the sublearners_
generator create cross-validated predictions, or learner_
generator to generate predictions for the whole input set.
Similarly to above, we predict by specifying the job and the data to use.
Now however, we must also specify the output array to populate.
In particular, the learner will populate the columns given in the
output_columns attribute, which is set with the setup call. If you
don't want it to start populating from the first column, you can pass the
n_left_concats argument to setup. Here, we use the transform task,
which uses the sublearners_ generator to produce cross-validated
predictions.
End of explanation
job = Job(
job='predict',
stack=False,
split=True,
dir={},
targets=y,
predict_in=X,
predict_out=np.zeros((y.shape[0], 1))
)
learner.setup(job.predict_in, job.targets, job.job)
for sub_learner in learner(job.args(), 'main'):
sub_learner()
print('Output:')
print(job.predict_out)
print()
Explanation: In the above loop, a sub-segment of P is updated by each sublearner
spawned by the learner. To instead produce predictions for the full
dataset using the estimator fitted on all training data,
task the learner to predict.
To streamline job generation across tasks and different classes, ML-Ensemble
features a :class:Job class that manages job parameters.
The job class prevents code repetition and allows us to treat the learner
as a callable, enabling task-agnostic code:
End of explanation
print("Params before:")
print(learner.get_params())
learner.set_params(estimator__offset=1, indexer__folds=3)
print("Params after:")
print(learner.get_params())
Explanation: ML-Ensemble follows the Scikit-learn API, so if you wish to update any
hyper-parameters of the estimator, use the get_params and set_params
API:
End of explanation
from mlens.index import SubsetIndex
def mse(y, p): return np.mean((y - p) ** 2)
indexer = SubsetIndex(partitions=2, folds=2, X=X)
learner = Learner(estimator=OLS(),
indexer=indexer,
name='subsemble-ols',
scorer=mse,
verbose=True)
job.job = 'fit'
job.predict_out = np.zeros((y.shape[0], 2))
learner.setup(job.predict_in, job.targets, job.job)
for sub_learner in learner(job.args(), 'main'):
sub_learner.fit()
print('Output:')
print(job.predict_out)
print()
learner.collect()
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Updating the indexer on one learner updates the indexer on all</p></div>
learners that where initiated with the same instance.
Partitioning
We can create several other types of learners by
varying the estimation strategy. An especially interesting strategy is to
partition the training set and create several learners fitted on a given
partition. This will create one prediction feature per partition.
In the following example we fit the OLS model using two partitions and
three fold CV on each partition. Note that by passing the output array
as an argument during 'fit', we perform a fit and transform operation.
End of explanation
print("Data:\n%s" % learner.data)
Explanation: Each sub-learner records fit and predict times during fitting, and if
a scorer is passed scores the predictions as well. The learner aggregates
this data into a raw_data attribute in the form of a list.
More conveniently, the data attribute returns a dict with a specialized
representation that gives a tabular output directly:
Standard data is fit time (ft), predict time (pr).
If a scorer was passed to the learner, cross-validated test set prediction
scores are computed. For brevity, -m denotes the mean and -s
denotes standard deviation.
End of explanation
from mlens.utils.dummy import Scale
from mlens.parallel import Transformer, Pipeline
pipeline = Pipeline([('trans', Scale())], return_y=True)
transformer = Transformer(estimator=pipeline,
indexer=indexer,
name='sc',
verbose=True)
Explanation: Preprocessing
We can easily create a preprocessing pipeline before fitting the estimator.
In general, several estimators will share the same preprocessing pipeline,
so we don't want to pipeline the transformations in the estimator itself–
this will result in duplicate transformers.
As with estimators, transformers too define a computational sub-graph given
a cross-validation strategy. Preprocessing pipelines are therefore wrapped
by the :class:Transformer class, which is similar to the :class:Learner
class. The input to the Transformer is a :class:Pipeline instance that holds the
preprocessing pipeline.
<div class="alert alert-info"><h4>Note</h4><p>When constructing a :class:`Pipeline` for use with the :class:`Transformer`,
the ``return_y`` argument must be ``True``.</p></div>
To link the transformer's sub-graph with the learner's sub-graph,
we set the preprocess argument of the learner equal to the name
of the :class:Transformer. Note that any number of learners can share
the same transformer and in fact should when the same preprocessing is desired.
End of explanation
learner = Learner(estimator=OLS(),
preprocess='sc',
indexer=indexer,
scorer=mse,
verbose=True)
Explanation: To build the learner we pass the name of the transformer as
the preprocess argument:
End of explanation
# Reset the prediction output array
job.predict_out = np.zeros((y.shape[0], 2))
transformer.setup(job.predict_in, job.targets, job.job)
learner.setup(job.predict_in, job.targets, job.job)
# Turn split off when you don't want the args() call to spawn a new sub-cache
job.split = False
for subtransformer in transformer(job.args(), 'auxiliary'):
subtransformer()
for sublearner in learner(job.args(), 'main'):
sublearner()
transformer.collect()
learner.collect()
Explanation: We now repeat the above process to fit the learner, starting with fitting
the transformer. By using the :class:Job class, we can write task-agnostic
boiler-plate code. Note that the transformer is called as an
'auxiliary' task, while the learner is called as the 'main' task.
End of explanation
print("Cache:")
for item in job.dir['task_%i' % job._n_dir]:
print('{:20}{}'.format(*item))
Explanation: Note that the cache now contains the transformers as well:
End of explanation
print("Data:\n%s" % learner.data)
Explanation: And estimation data is collected on a partition basis:
End of explanation
from multiprocessing.dummy import Pool
def run(est): est()
args = job.args()
job.predict_out = np.zeros((y.shape[0], 2))
job.job = 'predict'
Pool(4).map(run, list(learner(args, 'main')))
Explanation: Parallel estimation
Since the learner and transformer class do not perform estimations themselves,
we are free to modify the estimation behavior. For instance, to parallelize
estimation with several learners, we don't want a nested loop over each learner,
but instead flatten the for loops for maximal concurrency.
This is the topic of our next walk through. Here we show how to parallelize
estimation with a single learner using multiple threads:
End of explanation
from mlens.parallel import run
print(
run(transformer, 'predict', X)
)
Explanation: For a slightly more high-level API for parallel computation on a single
instance (of any accepted class), we can turn to the :func:run function.
This function takes care of argument specification, array creation and all
details we would otherwise need to attend to. For instance, to transform
a dataset using the preprocessing pipeline fitted on the full training set,
use :func:run to call predict:
End of explanation |
15,531 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random notes while re-studying CS and DS while job hunting.
xrange vs range looping
For long for loops with no need to track iteration use
Step1: This will loop through 10 times, but the iteration variable won't be unused as it was never assigned. Also, xrange returns a type of iterator, whereas range returns a full list that can take a lot of memory for large loops.
Automating variable names
To assign a variable name and value in a loop fasion, use vars()[variable name as a string] = variable value. Such as
Step2: You can see the variables in memory with
Step3: Binary numbers and Python operators
A good review of Python operators can be found here
The wiki reviewing bitwise operations here OR here
Note that binary numbers follow
Step4: Ensuring two binary numbers are the same length
Step5: For bitwise or
Step6: bitwise or is |, xor is ^, and is &, complement (switch 0's to 1's, and 1's to 0's) is ~, binary shift left (move binary number two digits to left by adding zeros to its right) is <<, right >>
Convert the resulting binary number to base 10
Step7: Building a 'stack' in Python | Python Code:
for _ in xrange(10):
print "Do something"
Explanation: Random notes while re-studying CS and DS while job hunting.
xrange vs range looping
For long for loops with no need to track iteration use:
End of explanation
for i in range(1,10):
vars()['x'+str(i)] = i
Explanation: This will loop through 10 times, but the iteration variable won't be unused as it was never assigned. Also, xrange returns a type of iterator, whereas range returns a full list that can take a lot of memory for large loops.
Automating variable names
To assign a variable name and value in a loop fasion, use vars()[variable name as a string] = variable value. Such as:
End of explanation
print repr(dir())
print repr(x1)
print repr(x5)
Explanation: You can see the variables in memory with:
End of explanation
bin(21)[2:]
Explanation: Binary numbers and Python operators
A good review of Python operators can be found here
The wiki reviewing bitwise operations here OR here
Note that binary numbers follow:
2^4| 2^3| 2^2| 2^1| 2^0
1 0 -> 2+0 = 2
1 1 1 -> 4+2+1 = 7
1 0 1 0 1 -> 16+0+4+0+1 = 21
1 1 1 1 0 -> 16+8+4+2+0 = 30
Convert numbers from base 10 to binary with bin()
End of explanation
a = 123
b = 234
a, b = bin(a)[2:], bin(b)[2:]
print "Before evening their lengths:\n{}\n{}".format(a,b)
diff = len(a)-len(b)
if diff > 0:
b = '0' * diff + b
elif diff < 0:
a = '0' * abs(diff) + a
print "After evening their lengths:\n{}\n{}".format(a,b)
Explanation: Ensuring two binary numbers are the same length
End of explanation
s = ''
for i in range(len(a)):
s += str(int(a[i]) | int(b[i]))
print "{}\n{}\n{}\n{}".format(a, b, '-'*len(a), s)
Explanation: For bitwise or:
End of explanation
sum(map(lambda x: 2**x[0] if int(x[1]) else 0, enumerate(reversed(s))))
Explanation: bitwise or is |, xor is ^, and is &, complement (switch 0's to 1's, and 1's to 0's) is ~, binary shift left (move binary number two digits to left by adding zeros to its right) is <<, right >>
Convert the resulting binary number to base 10:
End of explanation
class Stack:
def __init__(self):
self.items = []
def isEmpty(self):
return self.items == []
def push(self, item):
self.items.append(item)
def pop(self):
return self.items.pop()
def peek(self):
return self.items[len(self.items)-1]
def size(self):
return len(self.items)
s=Stack()
print repr(s.isEmpty())+'\n'
s.push(4)
s.push('dog')
print repr(s.peek())+'\n'
s.push(True)
print repr(s.size())+'\n'
print repr(s.isEmpty())+'\n'
s.push(8.4)
print repr(s.pop())+'\n'
print repr(s.pop())+'\n'
print repr(s.size())+'\n'
Explanation: Building a 'stack' in Python
End of explanation |
15,532 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Content
Overviw
Accessing Rows
Element Access
Lab
Overview
Step2: One of the most effective use of pandas is the ease at which we can select rows and coloumns in different ways, here's how we do it
Step4: Accessing Rows
Step6: CSV to DataFrame2
Preface
Step8: Square Brackets
Preface
Selecting coloumns can be done in two way.
variable_containing_CSV_file['coloumn-name']
variable_containing_CSV_file[['coloumn-name']]
The former gives a pandas series, whereas the latter gives a pandas dataframe.
Instructions
Step10: Loc1
With loc we can do practically any data selection operation on DataFrames you can think of.
loc is label-based, which means that you have to specify rows and coloumns based on their row and coloumn labels.
Instructions | Python Code:
# import the pandas package
import pandas as pd
# load in the dataset and save it to brics var.
brics = pd.read_csv("C:/Users/pySag/Documents/GitHub/Computer-Science/Courses/DAT-208x/Datasets/BRICS_cummulative.csv")
brics
# we can make the table look more better, by adding a parameter index_col = 0
brics = pd.read_csv("C:/Users/pySag/Documents/GitHub/Computer-Science/Courses/DAT-208x/Datasets/BRICS_cummulative.csv", index_col=0)
brics #notice how the indexes assigned to row observation are now deprecated.
Explanation: Table of Content
Overviw
Accessing Rows
Element Access
Lab
Overview:
Being a data scientist means, we gotta have to work with "big" data with different types.
We've seen how 2D Numpy arrays gives us power to compute data in a much efficient way, but the only downside to it is, they must be of the same type.
To solve this issue, ther's where the Pandas package comes in. So what's in Pandas?
High-level data manupalation.
The concept of "Data Frames" objects.
Data is stored in such data frames.
More specifically, they are tables,
with "rows" represented as "observations".
"Coloumns" represented by "variables".
Each row has a unique label, same goes for coloumns as well.
Coloumns can have different types.
We typically don't make data frames manually.
We convert .csv (Comma seperated values) files to data frames.
We do this importing the pandas package:
import pandas as pd, again pd is an "alias".
Now we can use a built-in function that comes packaged with pandas called as:
read_csv(<path to .csv file)>
Example:
We will be using pandas package to import, read in the "brics dataset" into python, let's look how the dataframes look like:
End of explanation
# Add a new coloumn
brics["on_earth"] = [ True, True, True, True, True ]
# Print them
brics
# Manupalating Coloumns
Coloumns can be manipulated using arithematic operations
on other coloumns
Explanation: One of the most effective use of pandas is the ease at which we can select rows and coloumns in different ways, here's how we do it:
To access the coloumns, there are three different ways we can do it, these are:
data_set_var[ "coloumn-name" ]
< data_set_var >.< coloumn-name >
We can add coloumns too, say we rank them:
<data_set_var>["new-coloumn-name"] = < list of values >
End of explanation
# Import pandas as pd
import pandas as pd
# Import the cars.csv data: cars
cars = pd.read_csv("cars.csv")
# Print out cars
print(cars)
Explanation: Accessing Rows:
Syntax: dataframe.loc[ <"row name"> ]
Go to top:TOC
Element access
To get just one element in the table, we can specify both coloumn and row label in the loc().
Syntax:
dataframe.loc[ <"row-name, coloumn name"> ]
dataframe[ <"row-name"> ].loc[ <"coloumn-name"> ]
dataframe.loc[ <"rowName'> ][< "coloumnName" >]
Lab:
Objective:
Practice importing data into python as Pandas DataFrame.
Practise accessig Row and Coloumns
Lab content:
CSV to DataFrame1
CSV to DataFrame2
Square Brackets
Loc1
Loc2
Go to:TOC
CSV to DataFrame1
Preface:
The DataFrame is one of Pandas' most important data structures. It's basically a way to store tabular data, where you can label the rows and the columns.
In the exercises that follow, you will be working wit vehicle data in different countries. Each observation corresponds to a country, and the columns give information about the number of vehicles per capita, whether people drive left or right, and so on. This data is available in a CSV file, named cars.csv. It is available in your current working directory, so the path to the file is simply 'cars.csv'.
To import CSV data into Python as a Pandas DataFrame, you can use read_csv().
Instructions:
To import CSV files, you still need the pandas package: import it as pd.
Use pd.read_csv() to import cars.csv data as a DataFrame. Store this dataframe as cars.
Print out cars. Does everything look OK?
End of explanation
# Import pandas as pd
import pandas as pd
# Import the cars.csv data: cars
cars = pd.read_csv("cars.csv", index_col=0)
# Print out cars
print(cars)
Explanation: CSV to DataFrame2
Preface:
We have a slight of a problem, the row labels are imported as another coloumn, that has no name.
To fix this issue, we are goint to pass an argument index_col = 0 to read_csv(). This is used to specify which coloumn in the CSV file should be used as row label?
Instructions:
Run the code with Submit Answer and assert that the first column should actually be used as row labels.
Specify the index_col argument inside pd.read_csv(): set it to 0, so that the first column is used as row labels.
Has the printout of cars improved now?
Go to top:TOC
End of explanation
# Import cars data
import pandas as pd
cars = pd.read_csv('cars.csv', index_col = 0)
# Print out country column as Pandas Series
print( cars['country'])
# Print out country column as Pandas DataFrame
print( cars[['country']])
Explanation: Square Brackets
Preface
Selecting coloumns can be done in two way.
variable_containing_CSV_file['coloumn-name']
variable_containing_CSV_file[['coloumn-name']]
The former gives a pandas series, whereas the latter gives a pandas dataframe.
Instructions:
Use single square brackets to print out the country column of cars as a Pandas Series.
Use double square brackets to print out the country column of cars as a Pandas DataFrame. Do this by putting country in two square brackets this time.
End of explanation
# Import cars data
import pandas as pd
cars = pd.read_csv('cars.csv', index_col = 0)
# Print out observation for Japan
print( cars.loc['JAP'] )
# Print out observations for Australia and Egypt
print( cars.loc[ ['AUS', 'EG'] ])
Explanation: Loc1
With loc we can do practically any data selection operation on DataFrames you can think of.
loc is label-based, which means that you have to specify rows and coloumns based on their row and coloumn labels.
Instructions:
Use loc to select the observation corresponding to Japan as a Series. The label of this row is JAP. Make sure to print the resulting Series.
Use loc to select the observations for Australia and Egypt as a DataFrame.
End of explanation |
15,533 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Code Search on Kubeflow
This notebook implements an end-to-end Semantic Code Search on top of Kubeflow - given an input query string, get a list of code snippets semantically similar to the query string.
NOTE
Step1: Install Pip Packages
Step2: Configure Variables
This involves setting up the Ksonnet application as well as utility environment variables for various CLI steps.
Set the following variables
PROJECT
Step3: Setup Authorization
In a Kubeflow cluster on GKE, we already have the Google Application Credentials mounted onto each Pod. We can simply point gcloud to activate that service account.
Step4: Additionally, to interact with the underlying cluster, we configure kubectl.
Step5: Collectively, these allow us to interact with Google Cloud Services as well as the Kubernetes Cluster directly to submit TFJobs and execute Dataflow pipelines.
Setup Ksonnet Application
We now point the Ksonnet application to the underlying Kubernetes cluster.
Step7: View Github Files
This is the query that is run as the first step of the Pre-Processing pipeline and is sent through a set of transformations. This is illustrative of the rows being processed in the pipeline we trigger next.
WARNING
Step8: Define an experiment
This solution consists of multiple jobs and servers that need to share parameters
To facilitate this we use experiments.libsonnet to define sets of parameters
Each set of parameters has a name corresponding to a key in the dictionary defined in experiments.libsonnet
You configure an experiment by defining a set of experiments in experiments.libsonnet
You then set the global parameter experiment to the name of the experiment defining the parameters you want
to use.
To get started define your experiment
Create a new entry containing a set of values to be used for your experiment
Pick a suitable name for your experiment.
Set the following values
outputDir
Step9: Submit the Dataflow Job
Step11: When completed successfully, this should create a dataset in BigQuery named target_dataset. Additionally, it also dumps CSV files into data_dir which contain training samples (pairs of function and docstrings) for our Tensorflow Model. A representative set of results can be viewed using the following query.
Step12: This pipeline also writes a set of CSV files which contain function and docstring pairs delimited by a comma. Here, we list a subset of them.
Step13: Prepare Dataset for Training
We will use t2t-datagen to convert the transformed data above into the TFRecord format.
TIP
Step14: Once this job finishes, the data directory should have a vocabulary file and a list of TFRecords prefixed by the problem name which in our case is github_function_docstring_extended. Here, we list a subset of them.
Step15: Execute Tensorflow Training
Once, the TFRecords are generated, we will use t2t-trainer to execute the training.
Step16: This will generate TensorFlow model checkpoints which is illustrated below.
Step17: Export Tensorflow Model
We now use t2t-exporter to export the TFModel.
Step18: Once completed, this will generate a TensorFlow SavedModel which we will further use for both online (via TF Serving) and offline inference (via Kubeflow Batch Prediction).
Step19: Compute Function Embeddings
In this step, we will use the exported model above to compute function embeddings via another Dataflow pipeline. A Python 2 module code_search.dataflow.cli.create_function_embeddings has been provided for this purpose. A list of all possible arguments can be seen below.
Configuration
First, select a Exported Model version from the ${WORKING_DIR}/output/export/Servo as seen above. This should be name of a folder with UNIX Seconds Timestamp like 1533685294. Below, we automatically do that by selecting the folder which represents the latest timestamp.
Step20: Modify experiments.libsonnet and set modelDir to the directory computed above
Run the Dataflow Job for Function Embeddings
Step22: When completed successfully, this should create another table in the same BigQuery dataset which contains the function embeddings for each existing data sample available from the previous Dataflow Job. Additionally, it also dumps a CSV file containing metadata for each of the function and its embeddings. A representative query result is shown below.
Step23: The pipeline also generates a set of CSV files which will be useful to generate the search index.
Step24: Create Search Index
We now create the Search Index from the computed embeddings. This facilitates k-Nearest Neighbor search to for semantically similar results.
Step25: Using the CSV files generated from the previous step, this creates an index using NMSLib. A unified CSV file containing all the code examples for a human-readable reverse lookup during the query, is also created.
Step26: Deploy the Web App
The included web app provides a simple way for users to submit queries
The web app includes two pieces
A Flask app that serves a simple UI for sending queries
The flask app also uses nmslib to provide fast lookups
A TF Serving instance to compute the embeddings for search queries
The ksonnet components for the web app are in a separate ksonnet application ks-web-app
A second web app is used so that we can optionally use ArgoCD to keep the serving components up to date.
Deploy an Inference Server
We've seen offline inference during the computation of embeddings. For online inference, we deploy the exported Tensorflow model above using Tensorflow Serving.
You need to set the parameter modelBasePath to the GCS directory where the model was exported
This will be a directory produced by the export model step
e.g gs
Step27: Deploy Search UI
We finally deploy the Search UI which allows the user to input arbitrary strings and see a list of results corresponding to semantically similar Python functions. This internally uses the inference server we just deployed.
We need to configure the index server to use the lookup file and CSV produced by the create search index step above
The values will be the values of the parameters lookupFile and indexFile that you set in experiments.libsonnet | Python Code:
%%bash
echo "Pip Version Info: " && python2 --version && python2 -m pip --version && echo
echo "Google Cloud SDK Info: " && gcloud --version && echo
echo "Ksonnet Version Info: " && ks version && echo
echo "Kubectl Version Info: " && kubectl version
Explanation: Code Search on Kubeflow
This notebook implements an end-to-end Semantic Code Search on top of Kubeflow - given an input query string, get a list of code snippets semantically similar to the query string.
NOTE: If you haven't already, see kubeflow/examples/code_search for instructions on how to get this notebook.
Install dependencies
Let us install all the Python dependencies. Note that everything must be done with Python 2. This will take a while the first time.
Verify Version Information
End of explanation
! python2 -m pip install -U pip
# Code Search dependencies
! python2 -m pip install --user https://github.com/kubeflow/batch-predict/tarball/master
! python2 -m pip install --user -r src/requirements.txt
# BigQuery Cell Dependencies
! python2 -m pip install --user pandas-gbq
# NOTE: The RuntimeWarnings (if any) are harmless. See ContinuumIO/anaconda-issues#6678.
from pandas.io import gbq
Explanation: Install Pip Packages
End of explanation
import getpass
import subprocess
# Configuration Variables. Modify as desired.
PROJECT = subprocess.check_output(["gcloud", "config", "get-value", "project"]).strip()
# Dataflow Related Variables.
TARGET_DATASET = 'code_search'
WORKING_DIR = 'gs://{0}_code_search/workingDir'.format(PROJECT)
KS_ENV=getpass.getuser()
# DO NOT MODIFY. These are environment variables to be used in a bash shell.
%env PROJECT $PROJECT
%env TARGET_DATASET $TARGET_DATASET
%env WORKING_DIR $WORKING_DIR
Explanation: Configure Variables
This involves setting up the Ksonnet application as well as utility environment variables for various CLI steps.
Set the following variables
PROJECT: Set this to the GCP project you want to use
If gcloud has a project set this will be used by default
To use a different project or if gcloud doesn't have a project set you will need to configure one explicitly
WORKING_DIR: Override this if you don't want to use the default configured below
KS_ENV: Set this to the name of the ksonnet environment you want to create
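If gcloud does not have a default project configured, or you want to use a different one, you can set PROJECT explicitly instead of reading it from gcloud. A minimal sketch with a placeholder project id:
```python
# Hypothetical override: "my-gcp-project" is a placeholder, replace it with your own
# project id, then re-derive the variables that depend on it.
PROJECT = "my-gcp-project"
WORKING_DIR = 'gs://{0}_code_search/workingDir'.format(PROJECT)
```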
End of explanation
%%bash
# Activate Service Account provided by Kubeflow.
gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}
Explanation: Setup Authorization
In a Kubeflow cluster on GKE, we already have the Google Application Credentials mounted onto each Pod. We can simply point gcloud to activate that service account.
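As an optional sanity check, you can confirm from the notebook that the credentials file is mounted and that the service account is now active:
```python
import os
import subprocess

# Both commands only read state; nothing is modified.
print(os.environ.get("GOOGLE_APPLICATION_CREDENTIALS"))
print(subprocess.check_output(["gcloud", "auth", "list"]))
```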
End of explanation
%%bash
kubectl config set-cluster kubeflow --server=https://kubernetes.default --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl config set-credentials jupyter --token "$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
kubectl config set-context kubeflow --cluster kubeflow --user jupyter
kubectl config use-context kubeflow
Explanation: Additionally, to interact with the underlying cluster, we configure kubectl.
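A quick way to verify the configuration is to list the pods in the kubeflow namespace used by the components below:
```python
import subprocess

# Optional sanity check; this should print the Kubeflow system pods.
print(subprocess.check_output(["kubectl", "get", "pods", "--namespace=kubeflow"]))
```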
End of explanation
%%bash
cd kubeflow
# Update Ksonnet to point to the Kubernetes Cluster
ks env add code-search
# Update Ksonnet to use the namespace where kubeflow is deployed. By default it's 'kubeflow'
ks env set code-search --namespace=kubeflow
# Update the Working Directory of the application
sed -i'' "s,gs://example/prefix,${WORKING_DIR}," components/params.libsonnet
# FIXME(sanyamkapoor): This command completely replaces previous configurations.
# Hence, using string replacement in file.
# ks param set t2t-code-search workingDir ${WORKING_DIR}
Explanation: Collectively, these allow us to interact with Google Cloud Services as well as the Kubernetes Cluster directly to submit TFJobs and execute Dataflow pipelines.
Setup Ksonnet Application
We now point the Ksonnet application to the underlying Kubernetes cluster.
End of explanation
query = """
SELECT
MAX(CONCAT(f.repo_name, ' ', f.path)) AS repo_path,
c.content
FROM
`bigquery-public-data.github_repos.files` AS f
JOIN
`bigquery-public-data.github_repos.contents` AS c
ON
f.id = c.id
JOIN (
--this part of the query makes sure repo is watched at least twice since 2017
SELECT
repo
FROM (
SELECT
repo.name AS repo
FROM
`githubarchive.year.2017`
WHERE
type="WatchEvent"
UNION ALL
SELECT
repo.name AS repo
FROM
`githubarchive.month.2018*`
WHERE
type="WatchEvent" )
GROUP BY
1
HAVING
COUNT(*) >= 2 ) AS r
ON
f.repo_name = r.repo
WHERE
f.path LIKE '%.py' AND --with python extension
c.size < 15000 AND --get rid of ridiculously long files
REGEXP_CONTAINS(c.content, r'def ') --contains function definition
GROUP BY
c.content
LIMIT
10
"""
gbq.read_gbq(query, dialect='standard', project_id=PROJECT)
Explanation: View Github Files
This is the query that is run as the first step of the Pre-Processing pipeline and is sent through a set of transformations. This is illustrative of the rows being processed in the pipeline we trigger next.
WARNING: The table is large and the query can take a few minutes to complete.
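Before running it, you can optionally estimate how much data the query will scan with a BigQuery dry run. This is only a sketch and assumes the google-cloud-bigquery client library is available in the notebook environment (it is not installed by the cells above):
```python
from google.cloud import bigquery

# A dry run validates the query and reports the bytes it would process
# without actually executing it.
client = bigquery.Client(project=PROJECT)
job_config = bigquery.QueryJobConfig()
job_config.dry_run = True
job_config.use_query_cache = False
dry_run_job = client.query(query, job_config=job_config)
print("This query would process {:.2f} GB".format(dry_run_job.total_bytes_processed / 1e9))
```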
End of explanation
%%bash
bq mk ${PROJECT}:${TARGET_DATASET}
Explanation: Define an experiment
This solution consists of multiple jobs and servers that need to share parameters
To facilitate this we use experiments.libsonnet to define sets of parameters
Each set of parameters has a name corresponding to a key in the dictionary defined in experiments.libsonnet
You configure an experiment by defining a set of experiments in experiments.libsonnet
You then set the global parameter experiment to the name of the experiment defining the parameters you want to use.
To get started define your experiment
Create a new entry containing a set of values to be used for your experiment
Pick a suitable name for your experiment.
Set the following values
outputDir: The GCS directory where the output should be written
train_steps: Number of training steps
eval_steps: Number of steps to be used for eval
hparams_set: The set of hyperparameters to use; see some suggestions here.
transformer_tiny can be used to train a very small model suitable for ensuring the code works.
project: Set this to the GCP project you can use
modelDir:
After training your model, set this to a GCS directory containing the exported model
e.g gs://code-search-demo/models/20181107-dist-sync-gpu/export/1541712907/
problem: set this to "kf_github_function_docstring",
model: set this to "kf_similarity_transformer",
lookupFile: set this to the GCS location of the CSV produced by the job to create the nmslib index of the embeddings for all GitHub data
indexFile: set this to the GCS location of the nmslib index for all the data in GitHub
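For illustration only, an experiment entry is simply a named set of the parameters above. The sketch below shows that shape as a Python dict; the actual file is jsonnet, the bucket paths are placeholders, and only problem, model and hparams_set use values suggested in this notebook:
```python
# Illustration of the fields an entry in experiments.libsonnet carries.
# This is not the real file format (that file is jsonnet, not Python).
my_experiment = {
    "outputDir": "gs://my-bucket/code-search/output",                    # placeholder
    "train_steps": 200000,
    "eval_steps": 100,
    "hparams_set": "transformer_tiny",
    "project": "my-gcp-project",                                         # placeholder
    "modelDir": "gs://my-bucket/code-search/output/export/1541712907/",  # placeholder
    "problem": "kf_github_function_docstring",
    "model": "kf_similarity_transformer",
    "lookupFile": "gs://my-bucket/code-search/code_search_index.csv",    # placeholder
    "indexFile": "gs://my-bucket/code-search/code_search_index.nmslib",  # placeholder
}
```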
Configure your ksonnet environment to use your experiment
Open kubeflow/environments/${ENVIRONMENT}/globals.libsonnet
Define the following global parameters
experiment: Name of your experiment; should correspond to a key defined in experiments.libsonnet
project: Set this to the GCP project you can use
dataDir: The data directory to be used by T2T
workingDir: Working directory
Here's an example of what the contents should look like:
```
workingDir: "gs://code-search-demo/20181104",
dataDir: "gs://code-search-demo/20181104/data",
project: "code-search-demo",
experiment: "demo-trainer-11-07-dist-sync-gpu",
```
Pre-Processing Github Files
In this step, we use Google Cloud Dataflow to preprocess the data.
We use a K8s Job to run a python program code_search.dataflow.cli.preprocess_github_dataset that submits the Dataflow job
Once the job has been created it can be monitored using the Dataflow console
The parameter target_dataset specifies a BigQuery dataset to write the data to
Create the BigQuery dataset
End of explanation
%%bash
cd kubeflow
ks param set --env=code-search submit-preprocess-job targetDataset ${TARGET_DATASET}
ks apply code-search -c submit-preprocess-job
Explanation: Submit the Dataflow Job
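Besides the Dataflow console, you can check the status of the submitted job directly from the notebook:
```python
import subprocess

# Lists the most recent Dataflow jobs in the project; the preprocessing job
# submitted above should appear here once it has been created.
print(subprocess.check_output(
    ["gcloud", "dataflow", "jobs", "list", "--project", PROJECT, "--limit=5"]))
```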
End of explanation
query = """
SELECT *
FROM
  {}.token_pairs
LIMIT
  10
""".format(TARGET_DATASET)
gbq.read_gbq(query, dialect='standard', project_id=PROJECT)
Explanation: When completed successfully, this should create a dataset in BigQuery named target_dataset. Additionally, it also dumps CSV files into data_dir which contain training samples (pairs of function and docstrings) for our Tensorflow Model. A representative set of results can be viewed using the following query.
End of explanation
%%bash
LIMIT=10
gsutil ls ${WORKING_DIR}/data/*.csv | head -n ${LIMIT}
Explanation: This pipeline also writes a set of CSV files which contain function and docstring pairs delimited by a comma. Here, we list a subset of them.
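To take a quick look at one of these files, you can copy it locally and load it with pandas. This is only a sketch: exact file names vary between runs and the files have no header row, so no column names are assumed:
```python
import subprocess
import pandas as pd

# Pick the first CSV produced by the preprocessing pipeline and copy it locally.
sample_file = subprocess.check_output(
    "gsutil ls {0}/data/*.csv | head -n1".format(WORKING_DIR), shell=True).strip()
subprocess.check_call(["gsutil", "cp", sample_file, "/tmp/sample_pairs.csv"])

# Each row holds a function/docstring pair; header=None because the files are headerless.
print(pd.read_csv("/tmp/sample_pairs.csv", header=None).head())
```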
End of explanation
%%bash
cd kubeflow
ks show code-search -c t2t-code-search-datagen
%%bash
cd kubeflow
ks apply code-search -c t2t-code-search-datagen
Explanation: Prepare Dataset for Training
We will use t2t-datagen to convert the transformed data above into the TFRecord format.
TIP: Use ks show to view the Resource Spec submitted.
End of explanation
%%bash
LIMIT=10
gsutil ls ${WORKING_DIR}/data/vocab*
gsutil ls ${WORKING_DIR}/data/*train* | head -n ${LIMIT}
Explanation: Once this job finishes, the data directory should have a vocabulary file and a list of TFRecords prefixed by the problem name which in our case is github_function_docstring_extended. Here, we list a subset of them.
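To spot-check the output, you can read the first few entries of the vocabulary file directly from GCS. This sketch assumes TensorFlow (with GCS support) is available in the notebook environment:
```python
import tensorflow as tf

# Glob the vocabulary file written by t2t-datagen and print its first few tokens.
vocab_files = tf.gfile.Glob("{0}/data/vocab*".format(WORKING_DIR))
with tf.gfile.GFile(vocab_files[0]) as f:
    for _ in range(10):
        print(f.readline().strip())
```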
End of explanation
%%bash
cd kubeflow
ks show code-search -c t2t-code-search-trainer
%%bash
cd kubeflow
ks apply code-search -c t2t-code-search-trainer
Explanation: Execute Tensorflow Training
Once the TFRecords are generated, we will use t2t-trainer to execute the training.
End of explanation
%%bash
gsutil ls ${WORKING_DIR}/output/*ckpt*
Explanation: This will generate TensorFlow model checkpoints, as illustrated below.
End of explanation
%%bash
cd kubeflow
ks show code-search -c t2t-code-search-exporter
%%bash
cd kubeflow
ks apply code-search -c t2t-code-search-exporter
Explanation: Export Tensorflow Model
We now use t2t-exporter to export the TFModel.
End of explanation
%%bash
gsutil ls ${WORKING_DIR}/output/export/Servo
Explanation: Once completed, this will generate a TensorFlow SavedModel which we will further use for both online (via TF Serving) and offline inference (via Kubeflow Batch Prediction).
End of explanation
%%bash --out EXPORT_DIR_LS
gsutil ls ${WORKING_DIR}/output/export/Servo | grep -oE "([0-9]+)/$"
# WARNING: This routine will fail if no export has been completed successfully.
MODEL_VERSION = max([int(ts[:-1]) for ts in EXPORT_DIR_LS.split('\n') if ts])
# DO NOT MODIFY. These are environment variables to be used in a bash shell.
%env MODEL_VERSION $MODEL_VERSION
Explanation: Compute Function Embeddings
In this step, we will use the exported model above to compute function embeddings via another Dataflow pipeline. A Python 2 module code_search.dataflow.cli.create_function_embeddings has been provided for this purpose. A list of all possible arguments can be seen below.
Configuration
First, select an exported model version from ${WORKING_DIR}/output/export/Servo as seen above. This should be the name of a folder named with a UNIX seconds timestamp, such as 1533685294. Below, we automatically do that by selecting the folder representing the latest timestamp.
End of explanation
%%bash
cd kubeflow
ks apply code-search -c submit-code-embeddings-job
Explanation: Modify experiments.libsonnet and set modelDir to the directory computed above
Run the Dataflow Job for Function Embeddings
End of explanation
query = """
SELECT *
FROM
  {}.function_embeddings
LIMIT
  10
""".format(TARGET_DATASET)
gbq.read_gbq(query, dialect='standard', project_id=PROJECT)
Explanation: When completed successfully, this should create another table in the same BigQuery dataset which contains the function embeddings for each existing data sample available from the previous Dataflow Job. Additionally, it also dumps a CSV file containing metadata for each of the function and its embeddings. A representative query result is shown below.
End of explanation
%%bash
LIMIT=10
gsutil ls ${WORKING_DIR}/data/*index*.csv | head -n ${LIMIT}
Explanation: The pipeline also generates a set of CSV files which will be useful to generate the search index.
End of explanation
%%bash
cd kubeflow
ks show code-search -c search-index-creator
%%bash
cd kubeflow
ks apply code-search -c search-index-creator
Explanation: Create Search Index
We now create the Search Index from the computed embeddings. This facilitates k-Nearest Neighbor search for semantically similar results.
End of explanation
%%bash
gsutil ls ${WORKING_DIR}/code_search_index*
Explanation: Using the CSV files generated from the previous step, this creates an index using NMSLib. A unified CSV file containing all the code examples, used for a human-readable reverse lookup at query time, is also created.
End of explanation
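For reference, here is a hedged sketch of how an index of this kind is typically built and queried with the NMSLib Python bindings. The random matrix is only a stand-in for the real function-embedding vectors written by the pipeline:
import nmslib
import numpy as np

embeddings = np.random.rand(1000, 128).astype(np.float32)  # placeholder for the real embeddings
index = nmslib.init(method='hnsw', space='cosinesimil')
index.addDataPointBatch(embeddings)
index.createIndex({'post': 2}, print_progress=False)
query_vector = embeddings[0]                      # placeholder for an embedded search query
neighbour_ids, distances = index.knnQuery(query_vector, k=10)
list(zip(neighbour_ids, distances))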
%%bash
cd ks-web-app
ks param set --env=code-search modelBasePath ${MODEL_BASE_PATH}
ks show code-search -c query-embed-server
%%bash
cd ks-web-app
ks apply code-search -c query-embed-server
Explanation: Deploy the Web App
The included web app provides a simple way for users to submit queries.
The web app includes two pieces:
A Flask app that serves a simple UI for sending queries; the Flask app also uses nmslib to provide fast lookups.
A TF Serving instance to compute the embeddings for search queries.
The ksonnet components for the web app are in a separate ksonnet application, ks-web-app.
A second application is used so that we can optionally use ArgoCD to keep the serving components up to date.
Deploy an Inference Server
We've seen offline inference during the computation of embeddings. For online inference, we deploy the exported Tensorflow model above using Tensorflow Serving.
You need to set the parameter modelBasePath to the GCS directory where the model was exported
This will be a directory produced by the export model step
e.g gs://code-search-demo/models/20181107-dist-sync-gpu/export/
Here are sample contents
```
gs://code-search-demo/models/20181107-dist-sync-gpu/export/:
gs://code-search-demo/models/20181107-dist-sync-gpu/export/
gs://code-search-demo/models/20181107-dist-sync-gpu/export/1541712907/:
gs://code-search-demo/models/20181107-dist-sync-gpu/export/1541712907/
gs://code-search-demo/models/20181107-dist-sync-gpu/export/1541712907/saved_model.pbtxt
gs://code-search-demo/models/20181107-dist-sync-gpu/export/1541712907/variables/:
gs://code-search-demo/models/20181107-dist-sync-gpu/export/1541712907/variables/
gs://code-search-demo/models/20181107-dist-sync-gpu/export/1541712907/variables/variables.data-00000-of-00001
gs://code-search-demo/models/20181107-dist-sync-gpu/export/1541712907/variables/variables.index
```
TFServing expects the modelBasePath to consist of numeric subdirectories corresponding to different versions of the model.
Each subdirectory will contain the saved model in protocol buffer along with the weights.
End of explanation
%%bash
cd ks-web-app
ks param set --env=code-search search-index-server lookupFile ${LOOKUP_FILE}
ks param set --env=code-search search-index-server indexFile ${INDEX_FILE}
ks show code-search -c search-index-server
%%bash
cd ks-web-app
ks apply code-search -c search-index-server
Explanation: Deploy Search UI
We finally deploy the Search UI which allows the user to input arbitrary strings and see a list of results corresponding to semantically similar Python functions. This internally uses the inference server we just deployed.
We need to configure the index server to use the index and the lookup CSV produced by the create search index step above.
Set LOOKUP_FILE and INDEX_FILE to the values of the lookupFile and indexFile parameters that you set in experiments.libsonnet.
End of explanation |
15,534 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Automatic alignment on labels
Step1: Series alignment
Let's define a table with natural increase rates in 2013 (data from World Bank)
Step2: Now we calculate the natural increae by subtracting death rate from birth rate
Step3: Note that the rows where the two series did not overlap contain missing values (NaN = Not a Number) and that the data were properly aligned on the index.
Step4: Missing values
We can remove the missing data using dropna method
Step5: <div class="alert alert-success">
<b>EXERCISE</b>
Step6: There are four different ways we can handle the missing rows
Step7: <div class="alert alert-success">
<b>EXERCISE</b>
Step8: You can also simply count members of each split
Step9: Movie database
These exercises are based on the PyCon tutorial of Brandon Rhodes (so all credit to him!) and the datasets he prepared for that. You can download these data from here
Step10: <div class="alert alert-success">
<b>EXERCISE</b>
Step11: The values for the grouping array can be also computed from values in the data frame. For example, to count odd and even number in the data column we could simply
Step12: <div class="alert alert-success">
<b>EXERCISE</b>
Step13: Note that it creates a hierarchical index. More on that later.
<div class="alert alert-success">
<b>EXERCISE</b>
Step14: <div class="alert alert-success">
<b>EXERCISE</b> | Python Code:
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data).set_index('country')
countries
Explanation: Automatic alignment on labels
End of explanation
death_rate = pd.Series([10, 9, 11, 8, 9],
index=['Poland','United Kingdom', 'Germany', 'Netherlands', 'France'])
print(death_rate)
birth_rate = pd.Series([10, 9, 10, 12],
index=['Netherlands', 'Germany', 'Poland', 'France'])
print(birth_rate)
Explanation: Series alignment
Let's define a table with natural increase rates in 2013 (data from World Bank):
End of explanation
natural_increase = birth_rate - death_rate
print(natural_increase)
Explanation: Now we calculate the natural increase by subtracting death rate from birth rate:
End of explanation
pop_change = pd.DataFrame({'death rate' : death_rate,
'birth rate' : birth_rate,
'natural increase' : natural_increase})
Explanation: Note that the rows where the two series did not overlap contain missing values (NaN = Not a Number) and that the data were properly aligned on the index.
End of explanation
pop_change.dropna(inplace=True)
pop_change
Explanation: Missing values
We can remove the missing data using dropna method:
End of explanation
countries.join(pop_change)
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Calculate estimated population in 2014 by summing the population and natural increase (remember that the natural increase is given per 1000 people).
</div>
<div class="alert alert-success">
<b>EXERCISE</b>: Calculate ratio of the highest and lowest estimated population in 2014.
</div>
Joining two data frames
Let's now try to add the data to the country data:
End of explanation
countries.join(pop_change, how='right')
Explanation: There are four different ways we can handle the missing rows:
left (default) — take all the rows of the left data frame
right — take all the rows of the right data frame
inner — take the common rows of both data frames
outer — take all the rows present in either or both data frames
Note that the methods are similar to SQL JOIN clause.
End of explanation
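As a quick illustration of the four strategies on the frames defined above (a sketch, not a substitute for the exercises that follow):
for how in ['left', 'right', 'inner', 'outer']:
    print('{}: {} rows'.format(how, countries.join(pop_change, how=how).shape[0]))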
df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],
'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
df
df.groupby('key').aggregate('sum') # np.sum
df.groupby('key').sum()
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Try inner and outer join. What's the difference?
</div>
Groupby operations
Some 'theory': the groupby operation (split-apply-combine)
By "group by" we are referring to a process involving one or more of the following steps
Splitting the data into groups based on some criteria
Applying a function to each group independently
Combining the results into a data structure
<img src="img/splitApplyCombine.png">
Similar to SQL GROUP BY
The example of the image in pandas syntax:
End of explanation
df.groupby('key').size()
Explanation: You can also simply count members of each split:
End of explanation
cast = pd.read_csv('data/cast.csv')
cast[10:15]
titles = pd.read_csv('data/titles.csv')
titles.head()
Explanation: Movie database
These exercises are based on the PyCon tutorial of Brandon Rhodes (so all credit to him!) and the datasets he prepared for that. You can download these data from here: titles.csv and cast.csv and put them in the /data folder.
End of explanation
greek = ['α', 'β', 'β', 'β', 'β', 'α', 'β','α', 'α']
df.groupby(greek).max()
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Using groupby(), plot the number of films that have been released each year in the history of cinema.
</div>
<div class="alert alert-success">
<b>EXERCISE</b>: Use groupby() to determine how many roles are listed for each of The Pink Panther movies.
</div>
Custom grouping criteria
You can also group by the values of another array, provided that this array has a length equal to the number of rows:
End of explanation
df.groupby(df['data'] % 2).size()
Explanation: The values for the grouping array can also be computed from values in the data frame. For example, to count odd and even numbers in the data column we could simply:
End of explanation
df['type'] = np.where(df['data'] % 2, 'odd', 'even')
print(df)
df.groupby(['type', 'key']).sum()
df['type'] = np.where(df['data'] % 2, 'odd', 'even')
print(df)
df['data']
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Using groupby(), plot the number of films that have been released each **decade** in the history of cinema.
</div>
<div class="alert alert-success">
<b>EXERCISE</b>: Use groupby() to plot the number of "Hamlet" films made each decade.
</div>
Multiple groups
Note that you can also groupby on multiple keys:
End of explanation
titles.title.value_counts().head()
Explanation: Note that it creates a hierarchical index. More on that later.
<div class="alert alert-success">
<b>EXERCISE</b>: List each of the characters that Frank Oz has portrayed at least twice.
</div>
<div class="alert alert-success">
<b>EXERCISE</b>: List, in order by year, each of the films in which Frank Oz has played more than 1 role.
</div>
<div class="alert alert-success">
<b>EXERCISE</b>: How many leading (n=1) roles were available to actors, and how many to actresses, in each year of the 1950s?
</div>
Value counts
A useful shortcut to calculate the number of occurrences of certain values is value_counts (this is somewhat equivalent to df.groupby(key).size())
For example, what are the most frequently occurring movie titles?
End of explanation
def most_frequent(x):
return x.value_counts().index[0]
cast.loc[(cast['year'] >= 2010) & (cast['year'] < 2020), ['year', 'name']].groupby('year').agg(most_frequent)
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: What are the 11 most common character names in movie history?
</div>
Custom aggregate functions
Aggregate function could be any function accepting the Series object.
For example, let's find the most frequently appearing name in each year of the last decade:
End of explanation |
15,535 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tests for QuTiP's SME solver against analytical solution for oscillator squeezing
Denis V. Vasilyev
1 August, 16 August 2013
Minor edits by Robert Johansson
5 August, 6 August 2013
Edits by Eric Giguere to fit Pull request #815 on Stochastic code
March 2018
We solve the stochastic master equation for an oscillator coupled to a 1D field as discussed in [1]. There is a deterministic differential equation for the variances of the oscillator quadratures $\langle\delta X^2\rangle$ and $\langle\delta P^2\rangle$. This allows for a direct comparison between the numerical solution and the exact solution for a single quantum trajectory.
In this section we solve SME with a single Wiener increment
Step1: Variance $V_{\mathrm{c}}$ as a function of time
Step2: Deviation from exact solution
Step3: Norm of the density matrix
Here we calculate $|\rho|-1$ which should be zero ideally.
Step4: Milstein method with multiple Wiener increments
In this section we solve the following SME
Step5: Variance $V_{\mathrm{c}}$ as a function of time
Step6: Deviation from exact solution
Step7: Norm of the density matrix
Here we calculate $|\rho|-1$ which should be zero ideally.
Step8: Software versions | Python Code:
%pylab inline
from qutip import *
from numpy import log2, cos, sin
from scipy.integrate import odeint
from qutip.cy.spmatfuncs import cy_expect_psi, cy_expect_rho_vec, spmv
th = 0.1 # Interaction parameter
alpha = cos(th)
beta = sin(th)
gamma = 1
# Exact steady state solution for Vc
Vc = (alpha*beta - gamma + sqrt((gamma-alpha*beta)**2 + 4*gamma*alpha**2))/(4*alpha**2)
#********* Model ************
NN = 200
tlist = linspace(0,5,NN)
Nsub = 10
N = 10
Id = qeye(N)
a = destroy(N)
s = 0.5*((alpha+beta)*a + (alpha-beta)*a.dag())
x = (a + a.dag())/sqrt(2)
H = Id
c_op = [sqrt(gamma)*a]
sc_op = [s]
e_op = [x, x*x]
rho0 = fock_dm(N,0) #initial vacuum state
# Solution of the differential equation for the variance Vc
y0 = 0.5
def func(y, t):
return -(gamma - alpha*beta)*y - 2*alpha*alpha*y*y + 0.5*gamma
y = odeint(func, y0, tlist)
# Righthand side for the Milstein method for a homodyne detection scheme
def rhs_milstein(L, rho_t, t, A_ops, dt, dW, d1, d2, args):
drho_t = spmv(L.data,
L.indices,
L.indptr, rho_t) * dt
A = A_ops[0]
M = A[0] + A[3]
e1 = cy_expect_rho_vec(M, rho_t)
d1_vec = spmv(A[7].data, A[7].indices, A[7].indptr, rho_t)
d2_vec = spmv(M.data, M.indices, M.indptr, rho_t)
d2_vec2 = spmv(M.data, M.indices, M.indptr, d2_vec)
e2 = cy_expect_rho_vec(M, d2_vec)
return rho_t + drho_t + d1_vec*dt + (d2_vec - e1*rho_t)*dW[0,0] + \
0.5 * (d2_vec2 - 2*e1*d2_vec + (-e2 + 2*e1*e1)*rho_t)*(dW[0,0]*dW[0,0] - dt)
#The rhs option of smesolve,
# Solution for the expectation values
sol = smesolve(H, rho0, tlist, c_op, sc_op, e_op,
nsubsteps=Nsub, method='homodyne', solver='euler', store_measurement=True)
#sol_mil = smesolve(H, rho0, tlist, c_op, sc_op, e_op,
# nsubsteps=Nsub, method='homodyne', rhs=rhs_milstein, noise=sol.noise)
#Built-in Milstein with single jump operator
sol_mil_native = smesolve(H, rho0, tlist, c_op, sc_op, e_op,
nsubsteps=Nsub, method='homodyne', solver='milstein', noise=sol.noise)
Explanation: Tests for QuTiP's SME solver against analytical solution for oscillator squeezing
Denis V. Vasilyev
1 August, 16 August 2013
Minor edits by Robert Johansson
5 August, 6 August 2013
Edits by Eric Giguere to fit Pull request #815 on Stochastic code
March 2018
We solve the stochastic master equation for an oscillator coupled to a 1D field as discussed in [1]. There is a deterministic differential equation for the variances of the oscillator quadratures $\langle\delta X^2\rangle$ and $\langle\delta P^2\rangle$. This allows for a direct comparison between the numerical solution and the exact solution for a single quantum trajectory.
In this section we solve SME with a single Wiener increment:
$\mathrm{d}\rho = D[s]\rho\mathrm{d}t + H[s]\rho \mathrm{d}W + \gamma D[a]\rho\mathrm{d}t$
The steady state solution for the variance $V_{\mathrm{c}} = \langle X^2\rangle - \langle X\rangle^2$ reads
$V_{\mathrm{c}} = \frac1{4\alpha^{2}}\left[\alpha\beta - \gamma + \sqrt{(\gamma-\alpha\beta )^{2} + 4\gamma \alpha^2}\right]$
where $\alpha$ and $\beta$ are parametrizing the interaction between light and the oscillator such that the jump operator is given by $s = \frac{\alpha+\beta}2 a + \frac{\alpha-\beta}2 a^{\dagger}$
[1] D. V. Vasilyev, C. a. Muschik, and K. Hammerer, Physical Review A 87, 053820 (2013). <a href="http://arxiv.org/abs/1303.5888">arXiv:1303.5888</a>
Implementation of Milstein method for homodyne detection
It is easy to implement the Milstein method [2] for a single Wiener increment using the given QuTiP infrastructure. For a stochastic differential equation $\mathrm{d}\rho = a(\rho)\mathrm{d}t + b(\rho) \mathrm{d}W\quad$ the Milstein scheme gives:
$\Delta \rho = a(\rho_n) \Delta t + b(\rho_n) \Delta W_n + \frac{1}{2} b(\rho_n) b'(\rho_n) \left( (\Delta W_n)^2 - \Delta t \right)$
The derivative can be calculated explicitly which is done below for a homodyne detection stochastic term.
[2] G. N. Milstein, Teor. Veroyatnost. i Primenen. 19, 583–588 (1974).
End of explanation
fig, ax = subplots()
ax.plot(tlist,sol.expect[1] - abs(sol.expect[0])**2, label='Euler-Maruyama')
#ax.plot(tlist,sol_mil.expect[1] - abs(sol_mil.expect[0])**2, label='Milstein')
ax.plot(tlist,sol_mil_native.expect[1] - abs(sol_mil_native.expect[0])**2, label='built-in Milstein')
ax.plot(tlist,Vc*ones(NN), label='exact steady state solution')
ax.plot(tlist,y, label="exact solution")
ax.legend();
Explanation: Variance $V_{\mathrm{c}}$ as a function of time
End of explanation
fig, ax = subplots()
ax.plot(tlist, y.T[0] - (sol.expect[1] - abs(sol.expect[0])**2), label='Euler-Maruyama')
#ax.plot(tlist, y.T[0] - (sol_mil.expect[1] - abs(sol_mil.expect[0])**2), label='Milstein')
ax.plot(tlist, y.T[0] - (sol_mil_native.expect[1] - abs(sol_mil_native.expect[0])**2), label='built-in Milstein')
ax.legend();
plot_expectation_values([sol,sol_mil_native]);
ax.legend()
Explanation: Deviation from exact solution
End of explanation
#Solution for the density matrix
sol2 = smesolve(H, rho0, tlist, c_op, sc_op, [], solver="euler",
nsubsteps=Nsub, method='homodyne', noise=sol.noise, options=Odeoptions(average_states=False))
sol2_mil = smesolve(H, rho0, tlist, c_op, sc_op, [], solver="milstein",
nsubsteps=Nsub, method='homodyne', noise=sol.noise,
options=Odeoptions(average_states=False))
fig, ax = subplots()
ax.plot(tlist,array([sol2.states[0][n].norm() - 1 for n in range(NN)]), label='Euler-Maruyama')
ax.plot(tlist,array([sol2_mil.states[0][n].norm() - 1 for n in range(NN)]), label='Milstein')
ax.legend()
Explanation: Norm of the density matrix
Here we calculate $|\rho|-1$ which should be zero ideally.
End of explanation
th = 0.1
alpha = cos(th)
beta = sin(th)
gamma = 1
eps = 0.3
VcEps = ((1-2*eps)*alpha*beta - gamma + \
sqrt((gamma-alpha*beta)**2 + 4*gamma*alpha*((1-eps)*alpha + eps*beta)))/(4*(1-eps)*alpha**2)
UcEps = (-(1-2*eps)*alpha*beta - gamma + \
sqrt((gamma-alpha*beta)**2 + 4*eps*beta*gamma*(beta-alpha)))/(4*eps*beta**2)
NN = 200
tlist = linspace(0,3,NN)
Nsub = 20
N = 10
Id = qeye(N)
a = destroy(N)
s = 0.5*((alpha+beta)*a + (alpha-beta)*a.dag())
x = (a + a.dag())/sqrt(2)
H = Id
c_op = [sqrt(gamma)*a]
sc_op = [sqrt(1-eps)*s, sqrt(eps)*1j*s]
e_op = [x, x*x]
rho0 = fock_dm(N,0)
y0 = 0.5
def func(y, t):
return -(gamma - (1-2*eps)*alpha*beta)*y - 2*(1-eps)*alpha*alpha*y*y + 0.5*(gamma + eps*beta*beta)
y = odeint(func, y0, tlist)
def funcZ(z, t):
return -(gamma + (1-2*eps)*alpha*beta)*z - 2*eps*beta*beta*z*z + 0.5*(gamma + (1-eps)*alpha*alpha)
z = odeint(funcZ, y0, tlist)
#Built-in taylor for multiple stochastic increments
sol_taylor = smesolve(H, rho0, tlist, c_op, sc_op, e_op,
nsubsteps=Nsub, method='homodyne', solver='taylor1.5',
options=Odeoptions(store_states=True, average_states=False))
sol = smesolve(H, rho0, tlist, c_op, sc_op, e_op, solver="euler", noise=sol_taylor.noise,
nsubsteps=Nsub, method='homodyne', store_measurement=True,
options=Odeoptions(store_states=True, average_states=False))
#Built-in Milstein for multiple stochastic increments
sol_mil = smesolve(H, rho0, tlist, c_op, sc_op, e_op, solver="milstein",
nsubsteps=Nsub, method='homodyne', noise=sol_taylor.noise,
options=Odeoptions(store_states=True, average_states=False))
Explanation: Milstein method with multiple Wiener increments
In this section we solve the following SME:
$\mathrm{d}\rho = D[s]\rho\mathrm{d}t + \sqrt{1-\epsilon}H[s]\rho \mathrm{d}W_1 + \sqrt{\epsilon}H[is]\rho \mathrm{d}W_2 + \gamma D[a]\rho\mathrm{d}t$
Analytical results can be found in [1].
We follow [3] in implementation of the Milstein scheme.
Stochastic equation is defined as
$dX^i = a^i(X)dt + \sum_{j=1}^M b^{i,j}(X)dW^j$
It is convenient to define a differential operator as follows
$L^j = \sum_{k=1}^N b^{k,j}\frac{\partial}{\partial x^k}$
Then the numerical scheme is
$Y^i_{n+1} = Y^i_n + a^i\Delta t + \sum_{j=1}^M b^{i,j}(X)\Delta W^j_n + \sum_{j_1,j_2=1}^M L^{j_1}b^{i,j_2} I_n(j_1,j_2)$
where $I_n(j_1,j_2) = \int_{t_n}^{t_{n+1}}\int_{t_n}^{s_1}dW_{s_2}^{j_1}dW_{s_1}^{j_2}$
Commutative noise
An important case is that of commutative noise, which means $L^{j_1}b^{k,j_2} = L^{j_2}b^{k,j_1}$. For homodyne detection this means that the jump operators for different stochastic terms commute. In this case we have
$I_n(j_1,j_2) = I_n(j_2,j_1) = \frac12\Delta W^{j_1}_n \Delta W^{j_2}_n$
Evaluation of the derivatives $L^j$ for homodyne scheme provides us with the numerical scheme implemented below. We also have used the assumption of the commutative noise. The smesolve routine has to be modified. It should provide all the A_ops to the rhs function.
[1] D. V. Vasilyev, C. a. Muschik, and K. Hammerer, Physical Review A 87, 053820 (2013). <a href="http://arxiv.org/abs/1303.5888">arXiv:1303.5888</a>
[3] S. Cyganowski, L. Gruene, and P. E. Kloeden, MAPLE for Stochastic Differential Equations.
End of explanation
fig, ax = subplots()
ax.plot(tlist,sol.expect[1]-sol.expect[0]*sol.expect[0].conj(), label='Euler-Maruyama')
ax.plot(tlist,sol_mil.expect[1]-sol_mil.expect[0]*sol_mil.expect[0].conj(), label='Milstein expl.')
ax.plot(tlist,sol_taylor.expect[1]-sol_taylor.expect[0]*sol_taylor.expect[0].conj(), label='Taylor1.5')
ax.plot(tlist,VcEps*ones(NN), label='Exact steady state solution')
ax.plot(tlist,y, label='Exact solution')
ax.legend();
Explanation: Variance $V_{\mathrm{c}}$ as a function of time
End of explanation
fig, ax = subplots()
ax.plot(tlist, y.T[0] - (sol.expect[1] - abs(sol.expect[0])**2), label='Euler-Maruyama')
ax.plot(tlist, y.T[0] - (sol_mil.expect[1] - abs(sol_mil.expect[0])**2), label='Milstein expl.')
ax.plot(tlist, y.T[0] - (sol_taylor.expect[1] - abs(sol_taylor.expect[0])**2), label='Taylor1.5')
ax.legend();
Explanation: Deviation from exact solution
End of explanation
fig, ax = subplots()
ax.plot(tlist,array([sol.states[0][n].norm() - 1 for n in range(NN)]), label='Euler-Maruyama')
ax.plot(tlist,array([sol_mil.states[0][n].norm() - 1 for n in range(NN)]), label='Milstein')
ax.plot(tlist,array([sol_taylor.states[0][n].norm() - 1 for n in range(NN)]), label='Taylor1.5')
ax.legend()
Explanation: Norm of the density matrix
Here we calculate $|\rho|-1$ which should be zero ideally.
End of explanation
from qutip.ipynbtools import version_table
version_table()
Explanation: Software versions
End of explanation |
15,536 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Futures Trading Considerations
by Maxwell Margenot and Delaney Mackenzie
Part of the Quantopian Lecture Series
Step1: Futures Calendar
An important feature of futures markets is the calendar used to trade them. Futures markets are open long after the equity markets close, though the effective periods within which you can trade with large amounts of liquidity tend to overlap. The specific high points in volume for futures contracts vary greatly from underlying to underlying. Despite this, the majority of the volume for many contracts typically falls within normal EST market hours.
Let's have a look at a day in the life of the S&P 500 Index E-Mini futures contract that was deliverable in March 2017.
Step2: This is one of the most liquid futures contracts and we see this significant increase in volume traded during normal equity trading hours. These hours can be even more tight for less liquid commodities. For example, let's look at how Feeder Cattle trades during the same time period on the same day.
Step3: If we are trying to trade multiple different underlyings with futures contracts in the same algorithm, we need to be conscious of their volume relative to each other. All trading algorithms are dependent on orders being executed as determined by their calculations. Some contracts are so illiquid that entering into even the smallest position will amount to becoming a large part of the volume for a given day. This could heavily impact slippage
Unsurprisingly, volume will also vary for different expiries on the same underlying. The front month contract, the contract closest to delivery, has the largest amount of volume. As we draw closer to delivery the front month's volume is eclipsed by the next expiry date as participants in the market close out their positions and roll them forward.
Step4: Futures Positions Have Inherent Leverage
In entering a futures position, you place down a certain amount of capital in a margin account. This margin account is exposed to the fluctuating futures price of the underlying that you have chosen. This creates a levered position off the bat as the value that you are exposed to (before delivery) in the account is different from the overall value that is on the hook at delivery.
This internal leverage is determined on a contract to contract basis due to the different multipliers involved for different underlyings.
Roll-over
If we want to maintain a futures position across expiries, we need to "roll over" our contracts. This is the practice of switching to the next month's contract after closing your previous holding. The majority of futures positions are either closed or rolled over before ever reaching delivery.
The futures contract with expiry closest to the current date is known as the "front month" contract. It usually enjoys the smallest spread between futures and spot prices as well as the most liquidity. In contrast, the futures contract that has the furthest expiration date in a set of contracts is known as the "back month" contract. Contracts that are further out have significantly less liquidity, though they still may contain vague information about future prices anticipated by the market.
By rolling forward our positions, we can maintain a hedge on a particular underlying or simply maintain a position across time. Without rolling contracts over we would be required to develop trading strategies that work only on a short timescale.
This graph illustrates the volume that results from rolling over contracts on the first date where the front month contract's volume is eclipsed by the following month on the same underlying.
Step5: In this particular instance, our goal is to ride the wave of liquidity provided by the front contract.
Continuous Futures
With futures, it is difficult to get a continuous series of historical prices. Each time that you roll forward to a new contract, the price series incurs a jump. This jump negatively impacts our analysis of prices as the discontinuity introduces shocks in our return and volatility measures that may not be representative of the actual changes in the underlying.
We use the continuous futures objects as part of the platform to get a continuous chain of historical data for futures contracts, taking these concerns into account. There are several ways to adjust for the cost of carry when looking at historical data, though people differ on what they prefer. The general consensus is that an adjustment should be done.
We can have a continuous future "roll" forward either based on calendar dates or based on the shift in volume from the front month contract to the next. The ContinuousFuture object is not a tradable asset, however. It is an API construct that abstracts the chain of consecutive contracts for the same underlying. They maintain ongoing references to the active contract in the chain and make it easier to to maintain a dynamic reference to contracts that you want to order as well as to get historical series of data, all based on your chosen method of adjustment and your desired roll method.
Step6: The above defined continuous future has an offset of $0$, indicating that we want it to reference the front month contract at each roll. Incrementing the offset causes the continuous future to instead monitor the contract that is displaced from the front month by that number.
Adjustments
We can define a continuous future to use multiplicative adjustments, additive adjustments, or no adjustments ('mul', 'add', None). The cost of carry that is realized as we shift from one contract to the next can be seen as the shock from a dividend payment. Adjustments are important to frame past prices relative to today's prices by including the cost of carry. Additive adjustments close the gaps betwen contracts by simply taking the differences and aggregating those back, while multiplicative adjustments scale previous prices using a ratio to close the gap. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from quantopian.research.experimental import continuous_future, history
Explanation: Futures Trading Considerations
by Maxwell Margenot and Delaney Mackenzie
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
In this lecture we will consider some practical implications for trading futures contracts. We will discuss the futures calendar and how it impacts trading as well as how to maintain futures positions across expiries.
End of explanation
contract = symbols('ESH17')
one_day_volume = get_pricing(contract, start_date='2017-02-01', end_date='2017-02-01', frequency='minute', fields='volume')
one_day_volume.tz_convert('EST').plot()
plt.title('Trading Volume for 3/01/2017 by Minute')
plt.xlabel('Minute')
plt.ylabel('Volume');
Explanation: Futures Calendar
An important feature of futures markets is the calendar used to trade them. Futures markets are open long after the equity markets close, though the effective periods within which you can trade with large amounts of liquidity tend to overlap. The specific high points in volume for futures contracts vary greatly from underlying to underlying. Despite this, the majority of the volume for many contracts typically falls within normal EST market hours.
Let's have a look at a day in the life of the S&P 500 Index E-Mini futures contract that was deliverable in March 2017.
End of explanation
contract = 'FCH17'
one_day_volume = get_pricing(contract, start_date='2017-02-01', end_date='2017-02-01', frequency='minute', fields='volume')
one_day_volume.tz_convert('EST').plot()
plt.title('Trading Volume for 3/01/2017 by Minute')
plt.xlabel('Minute')
plt.ylabel('Volume');
Explanation: This is one of the most liquid futures contracts and we see this significant increase in volume traded during normal equity trading hours. These hours can be even more tight for less liquid commodities. For example, let's look at how Feeder Cattle trades during the same time period on the same day.
End of explanation
contracts = symbols(['ESH16', 'ESM16', 'ESU16'])
rolling_volume = get_pricing(contracts, start_date='2015-12-15', end_date='2016-09-15', fields='volume')
rolling_volume.plot()
plt.title('Volume for Different Expiries of same Underlying')
plt.xlabel('Date')
plt.ylabel('Volume');
Explanation: If we are trying to trade multiple different underlyings with futures contracts in the same algorithm, we need to be conscious of their volume relative to each other. All trading algorithms are dependent on orders being executed as determined by their calculations. Some contracts are so illiquid that entering into even the smallest position will amount to becoming a large part of the volume for a given day. This could heavily impact slippage
Unsurprisingly, volume will also vary for different expiries on the same underlying. The front month contract, the contract closest to delivery, has the largest amount of volume. As we draw closer to delivery the front month's volume is eclipsed by the next expiry date as participants in the market close out their positions and roll them forward.
End of explanation
maximum_any_day_volume = rolling_volume.max(axis=1)
maximum_any_day_volume.name = 'Volume Roll-over'
rolling_volume.plot()
maximum_any_day_volume.plot(color='black', linestyle='--')
plt.title('Volume for Front Contract with Volume-based Rollover')
plt.xlabel('Date')
plt.ylabel('Volume')
plt.legend();
Explanation: Futures Positions Have Inherent Leverage
In entering a futures position, you place down a certain amount of capital in a margin account. This margin account is exposed to the fluctuating futures price of the underlying that you have chosen. This creates a levered position off the bat as the value that you are exposed to (before delivery) in the account is different from the overall value that is on the hook at delivery.
This internal leverage is determined on a contract to contract basis due to the different multipliers involved for different underlyings.
Roll-over
If we want to maintain a futures position across expiries, we need to "roll over" our contracts. This is the practice of switching to the next month's contract after closing your previous holding. The majority of futures positions are either closed or rolled over before ever reaching delivery.
The futures contract with expiry closest to the current date is known as the "front month" contract. It usually enjoys the smallest spread between futures and spot prices as well as the most liquidity. In contrast, the futures contract that has the furthest expiration date in a set of contracts is known as the "back month" contract. Contracts that are further out have significantly less liquidity, though they still may contain vague information about future prices anticipated by the market.
By rolling forward our positions, we can maintain a hedge on a particular underlying or simply maintain a position across time. Without rolling contracts over we would be required to develop trading strategies that work only on a short timescale.
This graph illustrates the volume that results from rolling over contracts on the first date where the front month contract's volume is eclipsed by the following month on the same underlying.
End of explanation
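As a rough illustration of that built-in leverage: the $50 multiplier below is the standard E-mini S&P 500 contract multiplier, while the price and margin figures are purely hypothetical (actual margin requirements vary by broker and over time).
es_multiplier = 50               # dollars per index point for the E-mini S&P 500
futures_price = 2300.00          # hypothetical futures price
initial_margin = 5000.00         # hypothetical margin deposit per contract
notional_exposure = es_multiplier * futures_price
implied_leverage = notional_exposure / initial_margin
notional_exposure, implied_leverage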
continuous_corn = continuous_future('CN', offset=0, roll='calendar', adjustment='mul')
Explanation: In this particular instance, our goal is to ride the wave of liquidity provided by the front contract.
Continuous Futures
With futures, it is difficult to get a continuous series of historical prices. Each time that you roll forward to a new contract, the price series incurs a jump. This jump negatively impacts our analysis of prices as the discontinuity introduces shocks in our return and volatility measures that may not be representative of the actual changes in the underlying.
We use the continuous futures objects as part of the platform to get a continuous chain of historical data for futures contracts, taking these concerns into account. There are several ways to adjust for the cost of carry when looking at historical data, though people differ on what they prefer. The general consensus is that an adjustment should be done.
We can have a continuous future "roll" forward either based on calendar dates or based on the shift in volume from the front month contract to the next. The ContinuousFuture object is not a tradable asset, however. It is an API construct that abstracts the chain of consecutive contracts for the same underlying. It maintains an ongoing reference to the active contract in the chain and makes it easier to maintain a dynamic reference to contracts that you want to order as well as to get historical series of data, all based on your chosen method of adjustment and your desired roll method.
End of explanation
continuous_corn_price = history(continuous_corn, start_date='2009-01-01', end_date='2016-01-01', fields='price')
continuous_corn_price.plot();
Explanation: The above defined continuous future has an offset of $0$, indicating that we want it to reference the front month contract at each roll. Incrementing the offset causes the continuous future to instead monitor the contract that is displaced from the front month by that number.
Adjustments
We can define a continuous future to use multiplicative adjustments, additive adjustments, or no adjustments ('mul', 'add', None). The cost of carry that is realized as we shift from one contract to the next can be seen as the shock from a dividend payment. Adjustments are important to frame past prices relative to today's prices by including the cost of carry. Additive adjustments close the gaps between contracts by simply taking the differences and aggregating those back, while multiplicative adjustments scale previous prices using a ratio to close the gap.
End of explanation |
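A toy sketch on hypothetical prices can make the difference between the two adjustment styles concrete: the additive version shifts the old history by the roll gap, while the multiplicative version scales it by the roll ratio.
import pandas as pd
old_front = pd.Series([100.0, 101.0, 102.0])    # hypothetical prices of the expiring contract up to the roll
next_front = pd.Series([105.0, 106.0, 107.0])   # hypothetical prices of the next contract from the roll onward
gap = next_front.iloc[0] - old_front.iloc[-1]
ratio = next_front.iloc[0] / old_front.iloc[-1]
adjusted = pd.DataFrame({'additive': old_front + gap,
                         'multiplicative': old_front * ratio})
adjusted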
15,537 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Figure 2 csv data generation
Figure data consolidation for Figure 2, which deals with alpha and beta diversity of samples
Figure 2a
Step1: Figure 2b
Step2: Figure 2c
Step3: Figure 2d
Step4: Write to Excel notebook | Python Code:
# Load up metadata map
metadata_fp = '../../../data/mapping-files/emp_qiime_mapping_qc_filtered.tsv'
metadata = pd.read_csv(metadata_fp, header=0, sep='\t')
metadata.head()
metadata.columns
# take just the columns we need for this figure panel
fig2a = metadata.loc[:,['#SampleID','empo_1','empo_3','adiv_observed_otus']]
fig2a.head()
Explanation: Figure 2 csv data generation
Figure data consolidation for Figure 2, which deals with alpha and beta diversity of samples
Figure 2a: alpha diversity plot
For this figure, we need to output an Excel sheet that contains the following columns parsed from the universal metadata file:
observed sequences
Empo level 3
End of explanation
# take just the columns we need for this figure panel, and drop values with more than one NaN
fig2b = metadata.loc[:,['#SampleID','temperature_deg_c','ph','adiv_observed_otus']].dropna(thresh=3)
fig2b.head()
Explanation: Figure 2b: alpha diversity by temperature and pH
For this figure, we need to output an Excel sheet that contains the following columns parsed from the universal metadata file:
observed sequences
pH
temperature
End of explanation
pcoa = pd.read_csv('./emp_90_gg_1k_unweighted_unifrac.txt.pc.first_ten',
header=None,
sep='\t',
names = ['#SampleID','PC1','PC2','PC3','PC4','PC5','PC6','PC7','PC8','PC9','PC10'])
empo = metadata.loc[:,['#SampleID','empo_0','empo_1','empo_2','empo_3']]
fig2c = pd.merge(left = empo, right = pcoa)
fig2c.head()
Explanation: Figure 2c: beta diversity PCoA
For this figure, we need to output an Excel sheet that contains PCoA coordinate values merged with EMPO information
End of explanation
# load in rRNA info
fig2d = pd.read_csv('../../../data/predicted-rrna-copy-number/emp_rrna_averagecopy_empo.csv')
fig2d.head()
Explanation: Figure 2d: Estimated rRNA operon copy number by environment
For this figure, we will output each sample as a separate row with calculated rRNA info, plus metadata
End of explanation
fig2 = pd.ExcelWriter('Figure2_data.xlsx')
fig2a.to_excel(fig2,'Fig-2a')
fig2b.to_excel(fig2,'Fig-2b')
fig2c.to_excel(fig2,'Fig-2c')
fig2d.to_excel(fig2,'Fig-2d')
fig2.save()
Explanation: Write to Excel notebook
End of explanation |
15,538 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-mh', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: MOHC
Source ID: HADGEM3-GC31-MH
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:14
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
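# Illustrative example only (hypothetical number): INTEGER properties take an
# unquoted number, here the overall land-surface time step in seconds, e.g.:
# DOC.set_value(1800)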
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
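# Illustrative example only: a single-valued (Cardinality 1.1) ENUM takes exactly
# one of the quoted choices listed above, e.g.:
# DOC.set_value("Explicit diffusion")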
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
15,539 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: <table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: FASTQ data
FASTQ is a common genomics file format that stores sequence information in addition to basic quality information.
First, let's download a sample fastq file.
Step3: Reading FASTQ data
Now, let's use tfio.genome.read_fastq to read this file (note that a tf.data API is coming soon).
Step4: As you can see, the returned fastq_data has fastq_data.sequences, which is a string tensor of all the sequences in the fastq file (each of which can be a different size), and fastq_data.raw_quality, which contains Phred-encoded quality information about the quality of each base read in the sequence.
Quality
If you are interested, you can use a helper op to convert this quality information into probabilities.
Step5: One-hot encoding
You may also want to one-hot encode the genome sequence data (which consists of the bases A T C G). There is a built-in op that can help with this. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
try:
%tensorflow_version 2.x
except Exception:
pass
!pip install tensorflow-io
import tensorflow_io as tfio
import tensorflow as tf
Explanation: <table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/io/tutorials/genome"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">在 TensorFlow.org 上查看 </a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/genome.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行 </a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/genome.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 中查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/io/docs/tutorials/genome.ipynb">{img1下载笔记本</a></td>
</table>
概述
本教程将演示 tfio.genome 软件包,其中提供了常用的基因组学 IO 功能,即读取多种基因组学文件格式,以及提供一些用于准备数据(例如,独热编码或将 Phred 质量解析为概率)的常用运算。
此软件包使用 Google Nucleus 库来提供一些核心功能。
设置
End of explanation
# Download some sample data:
!curl -OL https://raw.githubusercontent.com/tensorflow/io/master/tests/test_genome/test.fastq
Explanation: FASTQ data
FASTQ is a common genomics file format that stores sequence information in addition to basic quality information.
First, let's download a sample fastq file.
End of explanation
fastq_data = tfio.genome.read_fastq(filename="test.fastq")
print(fastq_data.sequences)
print(fastq_data.raw_quality)
Explanation: Reading FASTQ data
Now, let's use tfio.genome.read_fastq to read this file (note that a tf.data API is coming soon).
End of explanation
quality = tfio.genome.phred_sequences_to_probability(fastq_data.raw_quality)
print(quality.shape)
print(quality.row_lengths().numpy())
print(quality)
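# For reference (an assumption about the usual Phred convention, not a statement
# about this op's exact output): a Phred score Q corresponds to an error
# probability of 10 ** (-Q / 10), e.g.
print(10 ** (-30 / 10))  # Q = 30 -> 0.001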
Explanation: As you can see, the returned fastq_data has fastq_data.sequences, which is a string tensor of all the sequences in the fastq file (each of which can be a different size), and fastq_data.raw_quality, which contains Phred-encoded quality information about the quality of each base read in the sequence.
Quality
If you are interested, you can use a helper op to convert this quality information into probabilities.
End of explanation
print(tfio.genome.sequences_to_onehot.__doc__)
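# A minimal usage sketch (an illustrative assumption, not part of the original
# notebook): the op should accept the string sequences read above and return a
# one-hot encoding of the A/T/C/G bases.
onehot = tfio.genome.sequences_to_onehot(fastq_data.sequences)
print(onehot)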
Explanation: One-hot encoding
You may also want to one-hot encode the genome sequence data (which consists of the bases A T C G). There is a built-in op that can help with this.
End of explanation |
15,540 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random Sampling
Copyright 2016 Allen Downey
License
Step1: Part One
Suppose we want to estimate the average weight of men and women in the U.S.
And we want to quantify the uncertainty of the estimate.
One approach is to simulate many experiments and see how much the results vary from one experiment to the next.
I'll start with the unrealistic assumption that we know the actual distribution of weights in the population. Then I'll show how to solve the problem without that assumption.
Based on data from the BRFSS, I found that the distribution of weight in kg for women in the U.S. is well modeled by a lognormal distribution with the following parameters
Step2: Here's what that distribution looks like
Step3: make_sample draws a random sample from this distribution. The result is a NumPy array.
Step4: Here's an example with n=100. The mean and std of the sample are close to the mean and std of the population, but not exact.
Step5: We want to estimate the average weight in the population, so the "sample statistic" we'll use is the mean
Step6: One iteration of "the experiment" is to collect a sample of 100 women and compute their average weight.
We can simulate running this experiment many times, and collect a list of sample statistics. The result is a NumPy array.
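A minimal sketch of that loop (illustrative only; it assumes the make_sample and sample_stat helpers described in these steps, and the notebook's own function may differ):
def compute_sampling_distribution(n=100, iters=1000):
    stats = [sample_stat(make_sample(n=n)) for i in range(iters)]
    return numpy.array(stats)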
Step7: The next line runs the simulation 1000 times and puts the results in
sample_means
Step8: Let's look at the distribution of the sample means. This distribution shows how much the results vary from one experiment to the next.
Remember that this distribution is not the same as the distribution of weight in the population. This is the distribution of results across repeated imaginary experiments.
Step9: The mean of the sample means is close to the actual population mean, which is nice, but not actually the important part.
Step10: The standard deviation of the sample means quantifies the variability from one experiment to the next, and reflects the precision of the estimate.
This quantity is called the "standard error".
Step11: We can also use the distribution of sample means to compute a "90% confidence interval", which contains 90% of the experimental results
Step14: Now we'd like to see what happens as we vary the sample size, n. The following function takes n, runs 1000 simulated experiments, and summarizes the results.
Step15: Here's a test run with n=100
Step16: Now we can use interact to run plot_sampling_distribution with different values of n. Note
Step17: Other sample statistics
This framework works with any other quantity we want to estimate. By changing sample_stat, you can compute the SE and CI for any sample statistic.
Exercise 1
Step24: STOP HERE
We will regroup and discuss before going on.
Part Two
So far we have shown that if we know the actual distribution of the population, we can compute the sampling distribution for any sample statistic, and from that we can compute SE and CI.
But in real life we don't know the actual distribution of the population. If we did, we wouldn't be doing statistical inference in the first place!
In real life, we use the sample to build a model of the population distribution, then use the model to generate the sampling distribution. A simple and popular way to do that is "resampling," which means we use the sample itself as a model of the population distribution and draw samples from it.
Before we go on, I want to collect some of the code from Part One and organize it as a class. This class represents a framework for computing sampling distributions.
Step25: The following function instantiates a Resampler and runs it.
Step26: Here's a test run with n=100
Step27: Now we can use interact_func in an interaction
Step28: Exercise 2
Step29: Test your code using the cell below
Step30: When your StdResampler is working, you should be able to interact with it
Step31: STOP HERE
We will regroup and discuss before going on.
Part Three
We can extend this framework to compute SE and CI for a difference in means.
For example, men are heavier than women on average. Here's the women's distribution again (from BRFSS data)
Step32: And here's the men's distribution
Step33: I'll simulate a sample of 100 men and 100 women
Step34: The difference in means should be about 17 kg, but will vary from one random sample to the next
Step36: Here's the function that computes Cohen's effect size again
Step37: The difference in weight between men and women is about 1 standard deviation
Step38: Now we can write a version of the Resampler that computes the sampling distribution of $d$.
Step39: Now we can instantiate a CohenResampler and plot the sampling distribution. | Python Code:
from __future__ import print_function, division
import numpy
import scipy.stats
import matplotlib.pyplot as pyplot
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
# seed the random number generator so we all get the same results
numpy.random.seed(18)
# some nicer colors from http://colorbrewer2.org/
COLOR1 = '#7fc97f'
COLOR2 = '#beaed4'
COLOR3 = '#fdc086'
COLOR4 = '#ffff99'
COLOR5 = '#386cb0'
%matplotlib inline
Explanation: Random Sampling
Copyright 2016 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
weight = scipy.stats.lognorm(0.23, 0, 70.8)
weight.mean(), weight.std()
Explanation: Part One
Suppose we want to estimate the average weight of men and women in the U.S.
And we want to quantify the uncertainty of the estimate.
One approach is to simulate many experiments and see how much the results vary from one experiment to the next.
I'll start with the unrealistic assumption that we know the actual distribution of weights in the population. Then I'll show how to solve the problem without that assumption.
Based on data from the BRFSS, I found that the distribution of weight in kg for women in the U.S. is well modeled by a lognormal distribution with the following parameters:
End of explanation
xs = numpy.linspace(20, 160, 100)
ys = weight.pdf(xs)
pyplot.plot(xs, ys, linewidth=4, color=COLOR1)
pyplot.xlabel('weight (kg)')
pyplot.ylabel('PDF')
None
Explanation: Here's what that distribution looks like:
End of explanation
def make_sample(n=100):
sample = weight.rvs(n)
return sample
Explanation: make_sample draws a random sample from this distribution. The result is a NumPy array.
End of explanation
sample = make_sample(n=100)
sample.mean(), sample.std()
Explanation: Here's an example with n=100. The mean and std of the sample are close to the mean and std of the population, but not exact.
End of explanation
def sample_stat(sample):
return sample.mean()
Explanation: We want to estimate the average weight in the population, so the "sample statistic" we'll use is the mean:
End of explanation
def compute_sampling_distribution(n=100, iters=1000):
stats = [sample_stat(make_sample(n)) for i in range(iters)]
return numpy.array(stats)
Explanation: One iteration of "the experiment" is to collect a sample of 100 women and compute their average weight.
We can simulate running this experiment many times, and collect a list of sample statistics. The result is a NumPy array.
End of explanation
sample_means = compute_sampling_distribution(n=100, iters=1000)
Explanation: The next line runs the simulation 1000 times and puts the results in
sample_means:
End of explanation
pyplot.hist(sample_means, color=COLOR5)
pyplot.xlabel('sample mean (n=100)')
pyplot.ylabel('count')
None
Explanation: Let's look at the distribution of the sample means. This distribution shows how much the results vary from one experiment to the next.
Remember that this distribution is not the same as the distribution of weight in the population. This is the distribution of results across repeated imaginary experiments.
End of explanation
sample_means.mean()
Explanation: The mean of the sample means is close to the actual population mean, which is nice, but not actually the important part.
End of explanation
std_err = sample_means.std()
std_err
Explanation: The standard deviation of the sample means quantifies the variability from one experiment to the next, and reflects the precision of the estimate.
This quantity is called the "standard error".
End of explanation
conf_int = numpy.percentile(sample_means, [5, 95])
conf_int
Explanation: We can also use the distribution of sample means to compute a "90% confidence interval", which contains 90% of the experimental results:
End of explanation
def plot_sampling_distribution(n, xlim=None):
    """Plot the sampling distribution.

    n: sample size
    xlim: [xmin, xmax] range for the x axis
    """
sample_stats = compute_sampling_distribution(n, iters=1000)
se = numpy.std(sample_stats)
ci = numpy.percentile(sample_stats, [5, 95])
pyplot.hist(sample_stats, color=COLOR2)
pyplot.xlabel('sample statistic')
pyplot.xlim(xlim)
text(0.03, 0.95, 'CI [%0.2f %0.2f]' % tuple(ci))
text(0.03, 0.85, 'SE %0.2f' % se)
pyplot.show()
def text(x, y, s):
    """Plot a string at a given location in axis coordinates.

    x: coordinate
    y: coordinate
    s: string
    """
ax = pyplot.gca()
pyplot.text(x, y, s,
horizontalalignment='left',
verticalalignment='top',
transform=ax.transAxes)
Explanation: Now we'd like to see what happens as we vary the sample size, n. The following function takes n, runs 1000 simulated experiments, and summarizes the results.
End of explanation
plot_sampling_distribution(100)
Explanation: Here's a test run with n=100:
End of explanation
def sample_stat(sample):
return sample.mean()
slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(plot_sampling_distribution, n=slider, xlim=fixed([55, 95]))
None
Explanation: Now we can use interact to run plot_sampling_distribution with different values of n. Note: xlim sets the limits of the x-axis so the figure doesn't get rescaled as we vary n.
End of explanation
def sample_stat(sample):
# TODO: replace the following line with another sample statistic
return sample.mean()
slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(plot_sampling_distribution, n=slider, xlim=fixed([0, 100]))
None
Explanation: Other sample statistics
This framework works with any other quantity we want to estimate. By changing sample_stat, you can compute the SE and CI for any sample statistic.
Exercise 1: Fill in sample_stat below with any of these statistics:
Standard deviation of the sample.
Coefficient of variation, which is the sample standard deviation divided by the sample standard mean.
Min or Max
Median (which is the 50th percentile)
10th or 90th percentile.
Interquartile range (IQR), which is the difference between the 75th and 25th percentiles.
NumPy array methods you might find useful include std, min, max, and percentile.
Depending on the results, you might want to adjust xlim.
End of explanation
class Resampler(object):
    """Represents a framework for computing sampling distributions."""
def __init__(self, sample, xlim=None):
        """Stores the actual sample."""
self.sample = sample
self.n = len(sample)
self.xlim = xlim
def resample(self):
        """Generates a new sample by choosing from the original
        sample with replacement.
        """
new_sample = numpy.random.choice(self.sample, self.n, replace=True)
return new_sample
def sample_stat(self, sample):
        """Computes a sample statistic using the original sample or a
        simulated sample.
        """
return sample.mean()
def compute_sampling_distribution(self, iters=1000):
        """Simulates many experiments and collects the resulting sample
        statistics.
        """
stats = [self.sample_stat(self.resample()) for i in range(iters)]
return numpy.array(stats)
def plot_sampling_distribution(self):
        """Plots the sampling distribution."""
sample_stats = self.compute_sampling_distribution()
se = sample_stats.std()
ci = numpy.percentile(sample_stats, [5, 95])
pyplot.hist(sample_stats, color=COLOR2)
pyplot.xlabel('sample statistic')
pyplot.xlim(self.xlim)
text(0.03, 0.95, 'CI [%0.2f %0.2f]' % tuple(ci))
text(0.03, 0.85, 'SE %0.2f' % se)
pyplot.show()
Explanation: STOP HERE
We will regroup and discuss before going on.
Part Two
So far we have shown that if we know the actual distribution of the population, we can compute the sampling distribution for any sample statistic, and from that we can compute SE and CI.
But in real life we don't know the actual distribution of the population. If we did, we wouldn't be doing statistical inference in the first place!
In real life, we use the sample to build a model of the population distribution, then use the model to generate the sampling distribution. A simple and popular way to do that is "resampling," which means we use the sample itself as a model of the population distribution and draw samples from it.
Before we go on, I want to collect some of the code from Part One and organize it as a class. This class represents a framework for computing sampling distributions.
End of explanation
def interact_func(n, xlim):
sample = weight.rvs(n)
resampler = Resampler(sample, xlim=xlim)
resampler.plot_sampling_distribution()
Explanation: The following function instantiates a Resampler and runs it.
End of explanation
interact_func(n=100, xlim=[50, 100])
Explanation: Here's a test run with n=100
End of explanation
slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(interact_func, n=slider, xlim=fixed([50, 100]))
None
Explanation: Now we can use interact_func in an interaction:
End of explanation
# Solution goes here
Explanation: Exercise 2: write a new class called StdResampler that inherits from Resampler and overrides sample_stat so it computes the standard deviation of the resampled data.
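One possible solution sketch (only sample_stat needs to change; everything else is inherited from Resampler):

class StdResampler(Resampler):
    """Computes the sampling distribution of the standard deviation."""

    def sample_stat(self, sample):
        return sample.std()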
End of explanation
def interact_func2(n, xlim):
sample = weight.rvs(n)
resampler = StdResampler(sample, xlim=xlim)
resampler.plot_sampling_distribution()
interact_func2(n=100, xlim=[0, 100])
Explanation: Test your code using the cell below:
End of explanation
slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(interact_func2, n=slider, xlim=fixed([0, 100]))
None
Explanation: When your StdResampler is working, you should be able to interact with it:
End of explanation
female_weight = scipy.stats.lognorm(0.23, 0, 70.8)
female_weight.mean(), female_weight.std()
Explanation: STOP HERE
We will regroup and discuss before going on.
Part Three
We can extend this framework to compute SE and CI for a difference in means.
For example, men are heavier than women on average. Here's the women's distribution again (from BRFSS data):
End of explanation
male_weight = scipy.stats.lognorm(0.20, 0, 87.3)
male_weight.mean(), male_weight.std()
Explanation: And here's the men's distribution:
End of explanation
female_sample = female_weight.rvs(100)
male_sample = male_weight.rvs(100)
Explanation: I'll simulate a sample of 100 men and 100 women:
End of explanation
male_sample.mean() - female_sample.mean()
Explanation: The difference in means should be about 17 kg, but will vary from one random sample to the next:
End of explanation
def CohenEffectSize(group1, group2):
    """Compute Cohen's d.

    group1: Series or NumPy array
    group2: Series or NumPy array
    returns: float
    """
diff = group1.mean() - group2.mean()
n1, n2 = len(group1), len(group2)
var1 = group1.var()
var2 = group2.var()
pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)
d = diff / numpy.sqrt(pooled_var)
return d
Explanation: Here's the function that computes Cohen's effect size again:
End of explanation
CohenEffectSize(male_sample, female_sample)
Explanation: The difference in weight between men and women is about 1 standard deviation:
End of explanation
class CohenResampler(Resampler):
def __init__(self, group1, group2, xlim=None):
self.group1 = group1
self.group2 = group2
self.xlim = xlim
def resample(self):
n, m = len(self.group1), len(self.group2)
group1 = numpy.random.choice(self.group1, n, replace=True)
group2 = numpy.random.choice(self.group2, m, replace=True)
return group1, group2
def sample_stat(self, groups):
group1, group2 = groups
return CohenEffectSize(group1, group2)
Explanation: Now we can write a version of the Resampler that computes the sampling distribution of $d$.
End of explanation
resampler = CohenResampler(male_sample, female_sample)
resampler.plot_sampling_distribution()
Explanation: Now we can instantiate a CohenResampler and plot the sampling distribution.
End of explanation |
15,541 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification and Regression
There are two major types of supervised machine learning problems, called classification and regression.
In classification, the goal is to predict a class label, which is a choice from a predefined list of possibilities. In Intro_to_Decision_Trees.ipynb we used the example of classifying irises into one of three possible species. Classification is sometimes separated into binary classification, which is the special case of distinguishing between exactly two classes, and multiclass classification, which is classification between more than two classes. You can think of binary classification as trying to answer a yes/no question. Classifying emails as either spam or not spam is an example of a binary classification problem. In this binary classification task, the yes/no question being asked would be “Is this email spam?”
For regression tasks, the goal is to predict a continuous number, or a floating-point number in programming terms (or real number in mathematical terms). Predicting a person’s annual income from their education, their age, and where they live is an example of a regression task. When predicting income, the predicted value is an amount, and can be any number in a given range. Another example of a regression task is predicting the yield of a corn farm given attributes such as previous yields, weather, and number of employees working on the farm. The yield again can be an arbitrary number.
An easy way to distinguish between classification and regression tasks is to ask whether there is some kind of continuity in the output. If there is continuity between possible outcomes, then the problem is a regression problem. Think about predicting annual income. There is a clear continuity in the output. Whether a person makes $40,000 or $40,001 a year does not make a tangible difference, even though these are different amounts of money; if our algorithm predicts $39,999 or $40,001 when it should have predicted $40,000, we don’t mind that much.
By contrast, for the task of recognizing the language of a website (which is a classification problem), there is no matter of degree. A website is in one language, or it is in another. There is no continuity between languages, and there is no language that is between English and French.
Disclaimer
Step1: A First Application
Step2: Measuring Success
Step3: First things first
Step4: From the plots, we can see RM has a strong positive linear relationship with MEDV and LSTAT has a strong negative one. This makes sense - the housing price should go up as the number of rooms increases and the housing prices should go down as the percentage of lower class/income families in the neighborhood increases.
Step5: Building your model
Step6: The lr object encapsulates the algorithm that will be used to build the model from the training data, as well as the algorithm to make predictions on new data points. It will also hold the information that the algorithm has extracted from the training data.
To build the model on the training set, we call the fit method of the lr object, which takes as arguments the NumPy array X_train containing the training data and the NumPy array y_train of the corresponding training labels.
Step7: The “slope” parameters (w), also called weights or coefficients, are stored in the coef_ attribute, while the offset or intercept (b) is stored in the intercept_ attribute
Step8: The intercept_ attribute is always a single float number, while the coef_ attribute is a NumPy array with one entry per input feature. As we only have 13 input features in this dataset, lr.coef_ has 13 entries.
Let’s look at the training set and test set performance
Step9: An R^2 of around 0.64 on the test set is not very good, but we can see that the scores on the training and test sets are a decent distance apart. This means we are likely overfitting. With higher-dimensional datasets (meaning datasets with a large number of features), linear models become more powerful, and there is a higher chance of overfitting. More complicated linear models such as Ridge Regression and Lasso have been designed to help control this overfitting problem.
An R^2 of around 0.77 on the training set is OK, but not great. For a really good fit, we would want an R^2 of around 0.95 or so. This tells us we are missing something. One possibility is we could do some feature engineering and either include polynomial powers of some of the features and/or include products of some of the features.
Also, linear models tend to work better when all of the features exist on roughly the same scale, so we could attempt to scale our data as well.
Preprocessing and Scaling
Some algorithms, like neural networks, SVMs, and k-NearestNeighbors are very sensitive to the scaling of the data; while many others such as linear models with regularization (Ridge, Lasso, etc.) are moderately sensitive to the scaling of the data. Therefore, a common practice is to adjust the features so that the data representation is more suitable for these algorithms. Often, this is a simple per-feature rescaling and shift of the data.
Different Kinds of Preprocessing
Different algorithms benefit from different kinds of scaling and thus Scikit-Learn supports a variety of scaling methods, though they all have a similar API.
StandardScaler
Neural networks expect all input features to vary in a similar way, and ideally to have a mean of 0, and a variance of 1. When using ANN, we must rescale our data so that it fulfills these requirements. For doing this automatically, scikit-learn has the StandardScaler. The StandardScaler in scikit-learn ensures that for each feature the mean is 0 and the variance is 1, bringing all features to the same magnitude. However, this scaling does not ensure any particular minimum and maximum values for the features.
MinMaxScaler
A common rescaling method for kernel SVMs is to scale the data such that all features are between 0 and 1. We can do this in scikit-learn by using the MinMaxScaler preprocessing method. The MinMaxScaler shifts the data such that all features are exactly between 0 and 1. For a two-dimensional dataset this means all of the data is contained within the rectangle created by the x-axis between 0 and 1 and the y-axis between 0 and 1.
RobustScaler
Standard scaling does not ensure any particular minimum and maximum values for the features. The RobustScaler works similarly to the StandardScaler in that it ensures statistical properties for each feature that guarantee that they are on the same scale. However, the RobustScaler uses the median and quartiles, instead of mean and variance. This makes the RobustScaler ignore data points that are very different from the rest (like measurement errors). These odd data points are also called outliers, and can lead to trouble for other scaling techniques.
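All three scalers share the same fit/transform API. A minimal sketch, fitting on the training split only and then applying the same transformation to both splits (the variable names follow the train/test split used elsewhere in this notebook):

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()                      # MinMaxScaler() or RobustScaler() work the same way
scaler.fit(X_train)                            # learn per-feature statistics from the training data only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)       # reuse the training-set statistics on the test data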
Step10: Ordinary Least Squares (OLS) regression is not sensitive to feature scaling, but all of the regularized linear methods which help reduce the overfitting present in OLS are sensitive to feature scaling.
Feature Engineering
Feature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work. Feature engineering is fundamental to the application of machine learning, and is both difficult and expensive. The need for manual feature engineering can be obviated by automated feature learning.
In particular, linear models might benefit greatly from generating new features via techniques such as binning, and adding polynomials and interactions. However, more complex models like random forests and SVMs might be able to learn more complex tasks without explicitly expanding the feature space.
In practice, the features that are used (and the match between features and method) is often the most important piece in making a machine learning approach work well.
Interactions and Polynomials
One way to enrich a feature representation, particularly for linear models, is adding interaction features - products of individual original features. Another way to enrich a feature representation is to use polynomials of the original features - for a given feature x, we might want to consider x^2, x^3, x^4, and so on. This kind of feature engineering is often used in statistical modeling, but it’s also common in many practical machine learning applications.
Within scikit-learn, the addition of both interaction features and polynomial features is implemented in PolynomialFeatures in the preprocessing module.
In the code below, we modify the Boston housing dataset by adding all polynomial features and interactions up to a degree of 2. The data originally had 13 features, which were expanded into 105 interaction features. These new features represent all possible interactions between two different original features, as well as the square of each original feature. degree=2 here means that we look at all features that are the product of up to two original features. The exact correspondence between input and output features can be found using the get_feature_names method.
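A sketch of what that expansion looks like (the scaled feature arrays are assumed from the previous sketch; the variable names are illustrative):

from sklearn.preprocessing import PolynomialFeatures

poly = PolynomialFeatures(degree=2)            # squares and pairwise products of the 13 original features
X_train_poly = poly.fit_transform(X_train_scaled)
X_test_poly = poly.transform(X_test_scaled)
print(X_train_poly.shape)                      # (n_samples, 105)
print(poly.get_feature_names()[:10])           # get_feature_names_out() in newer scikit-learn versions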
Step11: Now the basic OLS model is doing a dramatically better job fitting the training set (R^2 of 0.95 vs 0.77).
This discrepancy between performance on the training set and the test set is a clear sign of overfitting, and therefore we should try to find a model that allows us to control complexity. One of the most commonly used alternatives to standard linear regression is ridge regression, which we will look into next.
Ridge Regression
Ridge regression is also a linear model for regression, so the formula it uses to make predictions is the same one used for ordinary least squares. In ridge regression, though, the coefficients (w) are chosen not only so that they predict well on the training data, but also to fit an additional constraint. We also want the magnitude of coefficients to be as small as possible; in other words, all entries of w should be close to zero. Intuitively, this means each feature should have as little effect on the outcome as possible (which translates to having a small slope), while still predicting well. This constraint is an example of what is called regularization. Regularization means explicitly restricting a model to avoid overfitting. The particular kind used by ridge regression is known as L2 regularization.
Ridge regression is implemented in linear_model.Ridge. Let’s see how well it does on the extended Boston Housing dataset
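A minimal sketch of the fit, assuming the expanded polynomial feature arrays from the previous sketch:

from sklearn.linear_model import Ridge

ridge = Ridge().fit(X_train_poly, y_train)     # alpha=1.0 by default
print("Training set score: {:.2f}".format(ridge.score(X_train_poly, y_train)))
print("Test set score: {:.2f}".format(ridge.score(X_test_poly, y_test)))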
Step12: As you can see, the training set score of Ridge is lower than for LinearRegression, while the test set score is higher. This is consistent with our expectation. With linear regression, we were overfitting our data. Ridge is a more restricted model, so we are less likely to overfit. A less complex model means worse performance on the training set, but better generalization. As we are only interested in generalization performance, we should choose the Ridge model over the LinearRegression model.
The Ridge model makes a trade-off between the simplicity of the model (near-zero coefficients) and its performance on the training set. How much importance the model places on simplicity versus training set performance can be specified by the user, using the alpha parameter. In the previous example, we used the default parameter alpha=1.0. There is no reason why this will give us the best trade-off, though. The optimum setting of alpha depends on the particular dataset we are using. Increasing alpha forces coefficients to move more toward zero, which decreases training set performance but might help generalization. For example
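A sketch of the same fit with a larger and a smaller value of alpha (continuing with the assumed polynomial feature arrays):

ridge10 = Ridge(alpha=10).fit(X_train_poly, y_train)     # stronger regularization
ridge01 = Ridge(alpha=0.1).fit(X_train_poly, y_train)    # weaker regularization
print("alpha=10  test score: {:.2f}".format(ridge10.score(X_test_poly, y_test)))
print("alpha=0.1 test score: {:.2f}".format(ridge01.score(X_test_poly, y_test)))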
Step13: Decreasing alpha allows the coefficients to be less restricted. For very small values of alpha, coefficients are barely restricted at all, and we end up with a model that resembles LinearRegression
Step14: Here, alpha=0.1 seems to be working well. We could try decreasing alpha even more to improve generalization. For now, notice how the parameter alpha corresponds to the model complexity.
Very shortly we need to think about systematic methods for properly selecting optimal values for parameters such as alpha.
We can also get a more qualitative insight into how the alpha parameter changes the model by inspecting the coef_ attribute of models with different values of alpha. A higher alpha means a more restricted model, so we expect the entries of coef_ to have smaller magnitude for a high value of alpha than for a low value of alpha. This is confirmed in the plot below
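A plotting sketch along those lines (assuming the three ridge models with alpha=10, 1, and 0.1 from the sketches above):

import matplotlib.pyplot as plt

plt.plot(ridge10.coef_, 's', label="Ridge alpha=10")
plt.plot(ridge.coef_, '^', label="Ridge alpha=1")
plt.plot(ridge01.coef_, 'v', label="Ridge alpha=0.1")
plt.xlabel("Coefficient index")
plt.ylabel("Coefficient magnitude")
plt.legend()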
Step15: Clearly, the interactions and polynomial features gave us a good boost in performance when using Ridge. When using a more complex model like a random forest, the story can be a bit different, though. Adding features will benefit linear models the most. For very complex models, adding features may actually slightly decrease the performance.
Machine learning is complex. Often you have to try several experiments and just see what works best.
Model Evaluation and Improvement
To evaluate our supervised models, so far we have split our dataset into a training set and a test set using the train_test_split function, built a model on the training set by calling the fit method, and evaluated it on the test set using the score method, which for classification computes the fraction of correctly classified samples and for regression computes the R^2.
Remember, the reason we split our data into training and test sets is that we are interested in measuring how well our model generalizes to new, previously unseen data. We are not interested in how well our model fit the training set, but rather in how well it can make predictions for data that was not observed during training.
As we saw when exploring Ridge regression, we need a more robust way to assess generalization performance which is capable of automatically choosing optimal values for hyper-parameters such as alpha.
Cross-Validation
Cross-validation is a statistical method of evaluating generalization performance that is more stable and thorough than using a split into a training and a test set. In cross-validation, the data is instead split repeatedly and multiple models are trained. The most commonly used version of cross-validation is k-fold cross-validation, where k is a user-specified number, usually 5 or 10. When performing five-fold cross-validation, the data is first partitioned into five parts of (approximately) equal size, called folds. Next, a sequence of models is trained. The first model is trained using the first fold as the test set, and the remaining folds (2–5) are used as the training set. The model is built using the data in folds 2–5, and then the accuracy is evaluated on fold 1. Then another model is built, this time using fold 2 as the test set and the data in folds 1, 3, 4, and 5 as the training set. This process is repeated using folds 3, 4, and 5 as test sets. For each of these five splits of the data into training and test sets, we compute the accuracy. In the end, we have collected five accuracy values.
Usually, the first fifth of the data is the first fold, the second fifth of the data is the second fold, and so on.
The whole point of cross-validation is to be more robust than a simple train/test split so that the results are not likely to be influenced by a particularly good or bad split of the data. The main disadvantage is that it requires more computation.
Cross-Validation in scikit-learn
Cross-validation is implemented in scikit-learn using the cross_val_score function from the model_selection module. The parameters of the cross_val_score function are the model we want to evaluate, the training data, and the ground-truth labels.
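A minimal sketch of the call, shown here with the Ridge model and the scaled Boston features purely as an illustration (the actual cell may use a different estimator and dataset; for a regressor the scores are R^2 values rather than accuracies):

from sklearn.model_selection import cross_val_score
from sklearn.linear_model import Ridge

scores = cross_val_score(Ridge(), X_train_scaled, y_train)
print("Cross-validation scores: {}".format(scores))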
Step16: By default, cross_val_score performs three-fold cross-validation, returning three accuracy values. We can change the number of folds used by changing the cv parameter
Step17: A common way to summarize the cross-validation accuracy is to compute the mean
Step18: Using the mean cross-validation we can conclude that we expect the model to be around 96% accurate on average. Looking at all five scores produced by the five-fold cross-validation, we can also conclude that there is a relatively high variance in the accuracy between folds, ranging from 100% accuracy to 90% accuracy. This could imply that the model is very dependent on the particular folds used for training, but it could also just be a consequence of the small size of the dataset.
Benefits of Cross-Validation
There are several benefits to using cross-validation instead of a single split into a training and a test set. First, remember that train_test_split performs a random split of the data. Imagine that we are “lucky” when randomly splitting the data, and all examples that are hard to classify end up in the training set. In that case, the test set will only contain “easy” examples, and our test set accuracy will be unrealistically high. Conversely, if we are “unlucky,” we might have randomly put all the hard-to-classify examples in the test set and consequently obtain an unrealistically low score. However, when using cross-validation, each example will be in the training set exactly once
Step19: As we can see, a default 3-fold cross-validation performed ok for the first two folds, but horribly bad for the third one.
The fundamental problem here is that if that data isn't organized in a random way, then just taking folds in order doesn't represent a random sampling for each fold. There are multiple possible ways to mitigate this issue.
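One simple mitigation is to shuffle the data when building the folds by passing an explicit splitter to cross_val_score; a sketch, reusing the assumed Ridge/Boston combination:

from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import Ridge

kfold = KFold(n_splits=3, shuffle=True, random_state=0)   # shuffle before splitting into folds
scores = cross_val_score(Ridge(), X_train_scaled, y_train, cv=kfold)
print(scores)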
Stratified k-Fold Cross-Validation
As the simple k-fold strategy would obviously fail for classification problems if the data is organized by target category, scikit-learn does not use it for classification, but rather uses stratified k-fold cross-validation. In stratified cross-validation, we split the data such that the proportions between classes are the same in each fold as they are in the whole dataset.
scikit-learn supports stratified k-fold cross-validation via the StratifiedKFold class in the model_selection module.
For example, if 90% of your samples belong to class A and 10% of your samples belong to class B, then stratified cross-validation ensures that in each fold, 90% of samples belong to class A and 10% of samples belong to class B.
For regression, scikit-learn uses the standard k-fold cross-validation by default.
Shuffle-split cross-validation
Another, very flexible strategy for cross-validation is shuffle-split cross-validation. In shuffle-split cross-validation, each split samples train_size many points for the training set and test_size many (disjoint) points for the test set. This splitting is repeated n_iter times. You can use integers for train_size and test_size to use absolute sizes for these sets, or floating-point numbers to use fractions of the whole dataset.
Since the sampling in shuffle-split cross-validation is done in a random fashion, this is a safer alternative to default k-Fold Cross-Validation when the data isn't truly randomized.
scikit-learn supports shuffle-split cross-validation via the ShuffleSplit class in the model_selection module.
There is also a stratified variant of ShuffleSplit, aptly named StratifiedShuffleSplit, which can provide more reliable results for classification tasks.
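A sketch of shuffle-split in use (same assumed model and data as above; note that recent scikit-learn versions call the repetition count n_splits rather than n_iter):

from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.linear_model import Ridge

shuffle_split = ShuffleSplit(n_splits=10, train_size=0.5, test_size=0.5, random_state=0)
scores = cross_val_score(Ridge(), X_train_scaled, y_train, cv=shuffle_split)
print(scores)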
Step20: Grid Search
Now that we know how to evaluate how well a model generalizes, we can take the next step and improve the model’s generalization performance by tuning its parameters. We discussed the parameter settings of the Ridge model for ridge regression earlier. Finding the values of the important parameters of a model (the ones that provide the best generalization performance) is a tricky task, but necessary for almost all models and datasets. Because it is such a common task, there are standard methods in scikit-learn to help you with it. The most commonly used method is grid search, which basically means trying all possible combinations of the parameters of interest.
Consider the case of ridge regression, as implemented in the Ridge class. As we discussed earlier, there is one important parameter: the regularization strength, alpha.
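A naive grid-search sketch over alpha, of the kind the next section improves on (variable names are assumptions):

from sklearn.linear_model import Ridge

best_score = 0
best_alpha = None
for alpha in [0.001, 0.01, 0.1, 1, 10, 100]:
    ridge = Ridge(alpha=alpha).fit(X_train_poly, y_train)
    score = ridge.score(X_test_poly, y_test)   # careful: this "uses up" the test set, as discussed next
    if score > best_score:
        best_score = score
        best_alpha = alpha
print("Best score: {:.2f} with alpha: {}".format(best_score, best_alpha))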
Step21: The Danger of Overfitting the Parameters and the Validation Set
Given this result, we might be tempted to report that we found a model that performs with 78% accuracy on our dataset. However, this claim could be overly optimistic (or just wrong), for the following reason
Step22: The best score on the validation set is 92%. However, the score on the test set—the score that actually tells us how well we generalize—is lower, at 78%. So we can claim to classify new data 78% correctly. This happens to be the same as before, now we can make a stronger claim since the final test set wasn't used in any way shape or form during hyper-parameter tuning.
The distinction between the training set, validation set, and test set is fundamentally important to applying machine learning methods in practice. Any choices made based on the test set accuracy “leak” information from the test set into the model. Therefore, it is important to keep a separate test set, which is only used for the final evaluation. It is good practice to do all exploratory analysis and model selection using the combination of a training and a validation set, and reserve the test set for a final evaluation—this is even true for exploratory visualization. Strictly speaking, evaluating more than one model on the test set and choosing the better of the two will result in an overly optimistic estimate of how accurate the model is.
Grid Search with Cross-Validation
While the method of splitting the data into a training, a validation, and a test set that we just saw is workable, and relatively commonly used, it is quite sensitive to how exactly the data is split. From the output of the previous code snippet we can see that GridSearchCV selects 'alpha'
Step23: To evaluate the accuracy of the Ridge Regression model using a particular setting of alpha using five-fold cross-validation, we need to train 11 * 5 = 55 models. As you can imagine, the main downside of the use of cross-validation is the time it takes to train all these models. However, as you can see here, it is a more reliable method which is less sensitive to how precisely the validation set is sampled from the overall trainin set, and thus more likely to generalize well.
GridSearchCV
Because grid search with cross-validation is such a commonly used method to adjust parameters, scikit-learn provides the GridSearchCV class, which implements it in the form of an estimator. To use the GridSearchCV class, you first need to specify the parameters you want to search over using a dictionary. GridSearchCV will then perform all the necessary model fits. The keys of the dictionary are the names of parameters we want to adjust (as given when constructing the model—in this case, alpha), and the values are the parameter settings we want to try out. Trying the values 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, and 100 for alpha translates to the following dictionary
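That is, a sketch of the dictionary:

param_grid = {'alpha': [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]}
print("Parameter grid:\n{}".format(param_grid))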
Step24: We can now instantiate the GridSearchCV class with the model (Ridge), the parameter grid to search (param_grid), and the cross-validation strategy we want to use (say, five-fold stratified cross-validation)
Step25: GridSearchCV will use cross-validation in place of the split into a training and validation set that we used before. However, we still need to split the data into a training and a test set, to avoid overfitting the parameters
Step26: The grid_search object that we created behaves just like a classifier; we can call the standard methods fit, predict, and score on it. However, when we call fit, it will run cross-validation for each combination of parameters we specified in param_grid
Step27: Fitting the GridSearchCV object not only searches for the best parameters, but also automatically fits a new model on the whole training dataset with the parameters that yielded the best cross-validation performance. What happens in fit is therefore equivalent to the result of the code we saw at the beginning of this section. The GridSearchCV class provides a very convenient interface to access the retrained model using the predict and score methods. To evaluate how well the best found parameters generalize, we can call score on the test set
Step28: Choosing the parameters using cross-validation, we actually found a model that achieves 77% accuracy on the test set. The important thing here is that we did not use the test set to choose the parameters. The parameters that were found are stored in the best_params_ attribute, and the best cross-validation accuracy (the mean accuracy over the different splits for this parameter setting) is stored in best_score_
Step29: Sometimes it is helpful to have access to the actual model that was found—for example, to look at coefficients or feature importances. You can access the model with the best parameters trained on the whole training set using the best_estimator_ attribute
Step30: Because grid_search itself has predict and score methods, using best_estimator_ is not needed to make predictions or evaluate the model.
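A condensed sketch of the whole GridSearchCV workflow described above (the feature arrays are the assumed polynomial expansion of the Boston data):

from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Ridge

param_grid = {'alpha': [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]}
grid_search = GridSearchCV(Ridge(), param_grid, cv=5)
grid_search.fit(X_train_poly, y_train)          # runs cross-validation for every alpha, then refits on all training data
print("Test set score: {:.2f}".format(grid_search.score(X_test_poly, y_test)))
print("Best parameters: {}".format(grid_search.best_params_))
print("Best cross-validation score: {:.2f}".format(grid_search.best_score_))
print(grid_search.best_estimator_)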
Putting it all together
The one thing we didn't do was experiment with different train/test splits. Let's run it with randomness a bunch of times and see how consistent it is | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Classification and Regression
There are two major types of supervised machine learning problems, called classification and regression.
In classification, the goal is to predict a class label, which is a choice from a predefined list of possibilities. In Intro_to_Decision_Trees.ipynb we used the example of classifying irises into one of three possible species. Classification is sometimes separated into binary classification, which is the special case of distinguishing between exactly two classes, and multiclass classification, which is classification between more than two classes. You can think of binary classification as trying to answer a yes/no question. Classifying emails as either spam or not spam is an example of a binary classification problem. In this binary classification task, the yes/no question being asked would be “Is this email spam?”
For regression tasks, the goal is to predict a continuous number, or a floating-point number in programming terms (or real number in mathematical terms). Predicting a person’s annual income from their education, their age, and where they live is an example of a regression task. When predicting income, the predicted value is an amount, and can be any number in a given range. Another example of a regression task is predicting the yield of a corn farm given attributes such as previous yields, weather, and number of employees working on the farm. The yield again can be an arbitrary number.
An easy way to distinguish between classification and regression tasks is to ask whether there is some kind of continuity in the output. If there is continuity between possible outcomes, then the problem is a regression problem. Think about predicting annual income. There is a clear continuity in the output. Whether a person makes $40,000 or $40,001 a year does not make a tangible difference, even though these are different amounts of money; if our algorithm predicts $39,999 or $40,001 when it should have predicted $40,000, we don’t mind that much.
By contrast, for the task of recognizing the language of a website (which is a classification problem), there is no matter of degree. A website is in one language, or it is in another. There is no continuity between languages, and there is no language that is between English and French.
Disclaimer: Much of the code in this notebook was lifted from the excellent book Introduction to Machine Learning with Python by Andreas Muller and Sarah Guido.
Generalization, Overfitting, and Underfitting
In supervised learning, we want to build a model on the training data and then be able to make accurate predictions on new, unseen data that has the same characteristics as the training set that we used. If a model is able to make accurate predictions on unseen data, we say it is able to generalize from the training set to the test set. We want to build a model that is able to generalize as accurately as possible.
Usually we build a model in such a way that it can make accurate predictions on the training set. If the training and test sets have enough in common, we expect the model to also be accurate on the test set. However, there are some cases where this can go wrong. For example, if we allow ourselves to build very complex models, we can always be as accurate as we like on the training set.
The only measure of whether an algorithm will perform well on new data is the evaluation on the test set. However, intuitively we expect simple models to generalize better to new data. Therefore, we always want to find the simplest model. Building a model that is too complex for the amount of information we have, as our novice data scientist did, is called overfitting. Overfitting occurs when you fit a model too closely to the particularities of the training set and obtain a model that works well on the training set but is not able to generalize to new data. On the other hand, if your model is too simple, then you might not be able to capture all the aspects of and variability in the data, and your model will do badly even on the training set. Choosing too simple a model is called underfitting.
The more complex we allow our model to be, the better we will be able to predict on the training data. However, if our model becomes too complex, we start focusing too much on each individual data point in our training set, and the model will not generalize well to new data.
There is a sweet spot in between that will yield the best generalization performance. This is the model we want to find.
Relation of Model Complexity to Dataset Size
It’s important to note that model complexity is intimately tied to the variation of inputs contained in your training dataset: the larger variety of data points your dataset contains, the more complex a model you can use without overfitting. Usually, collecting more data points will yield more variety, so larger datasets allow building more complex models. However, simply duplicating the same data points or collecting very similar data will not help.
Having more data and building appropriately more complex models can often work wonders for supervised learning tasks. In the real world, you often have the ability to decide how much data to collect, which might be more beneficial than tweaking and tuning your model. Never underestimate the power of more data.
Linear Models
Linear models are a class of models that are widely used in practice and have been studied extensively in the last few decades, with roots going back over a hundred years. Linear models make a prediction using a linear function of the input features.
Linear Models for Regression
For regression, the general prediction formula for a linear model looks as follows:
ŷ = w[0] * x[0] + w[1] * x[1] + ... + w[p] * x[p] + b
Here, x[0] to x[p] denotes the features (in this example, the number of features is p) of a single data point, w and b are parameters of the model that are learned, and ŷ is the prediction the model makes. For a dataset with a single feature, this is:
ŷ = w[0] * x[0] + b
which you might remember from high school mathematics as the equation for a line. Here, w[0] is the slope and b is the y-axis offset. For more features, w contains the slopes along each feature axis. Alternatively, you can think of the predicted response as being a weighted sum of the input features, with weights (which can be negative) given by the entries of w.
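A tiny numerical sketch of that weighted sum, using made-up numbers:

import numpy as np

w = np.array([0.5, -1.2, 3.0])   # one learned weight per feature (illustrative values)
b = 4.2                          # learned intercept (illustrative)
x = np.array([1.0, 2.0, 0.5])    # a single data point with three features
y_hat = np.dot(w, x) + b         # prediction = weighted sum of the features plus the offset
print(y_hat)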
Linear models for regression can be characterized as regression models for which the prediction is a line for a single feature, a plane when using two features, or a hyperplane in higher dimensions (that is, when using more features).
For datasets with many features, linear models can be very powerful. In particular, if you have more features than training data points, any target y can be perfectly modeled (on the training set) as a linear function.
There are many different linear models for regression. The difference between these models lies in how the model parameters w and b are learned from the training data, and how model complexity can be controlled.
Linear Regression (aka Ordinary Least Squares)
Linear regression, or ordinary least squares (OLS), is the simplest and most classic linear method for regression. Linear regression finds the parameters w and b that minimize the mean squared error between predictions and the true regression targets, y, on the training set. The mean squared error is the sum of the squared differences between the predictions and the true values. Linear regression has no parameters, which is a benefit, but it also has no way to control model complexity.
The scikit-learn documentation on [Linear Regression](http://scikit-learn.org/stable/modules/linear_model.html#ordinary-least-squares) has a decent basic example of its use.
Advantages of Linear Regression (general, not specific to OLS)
Simple to understand and to interpret, at least for a small number of features/dimensions
Easy to visualize for 2 or 3 features
Very fast to train and also fast to predict
Doesn't suffer from the curse of dimensionality that methods such as k-NearestNeighbors do
Actually linear methods tend to work better with lots of features than with a small number of features
Big Disadvantage specific to OLS, but not applicable to linear regression in general
OLS has no way to control model complexity and can suffer from overfitting, particularly if there are a large number of features
Modified versions of Linear Regression such as Ridge Regression and Lasso can mitigate or fix this issue
Disadvantages of Linear Regression in general, not specific to OLS
In lower-dimensional spaces, other models might yield better generalization performance
Requires more data preparation than some other techniques
Feature normalization is required for best results (for any algorithm which includes regularization)
Non-ordinal categorical features need to be one-hot encoded
Ordinal features need to be numerically encoded
End of explanation
from sklearn.datasets import load_boston
boston = load_boston()
print("Keys of boston: {}".format(boston.keys()))
# The value of the key DESCR is a short description of the dataset. Here we show the beginning of the description.
print(boston['DESCR'][:193] + "\n...")
# The value of feature_names is a list of strings, giving the abbreviated name of each feature
print("Feature names: {}".format(boston['feature_names']))
# The data itself is contained in the target and data fields.
# data contains the numeric measurements of features in a NumPy array
print("Type of data: {}".format(type(boston['data'])))
# The rows in the data array correspond to neighborhoods, while the columns represent the features
print("Shape of data: {}".format(boston['data'].shape))
# We see that the array contains measurements for 506 different neighborhoods. Here are values for the first 5.
print("First five columns of data:\n{}".format(boston['data'][:5]))
# The target array contains the Median value of owner-occupied homes in $1000's, also as a NumPy array
print("Type of target: {}".format(type(boston['target'])))
# target is a one-dimensional array, with one entry per sample
print("Shape of target: {}".format(boston['target'].shape))
# The target values are positive floating point numbers which represent a median house value in thousands of dollars.
print("Target:\n{}".format(boston['target']))
Explanation: A First Application: Predicting Boston Housing Prices
One of the most famous datasets for regression in a supervised learning setting is the Boston Housing data set. It is a multivariate dataset introduced in a 1978 paper which records 13 attributes concerning housing values in the suburbs of Boston. NOTE: The data is very, very old and the house prices are ridiculously low by today's standards.
scikit-learn has a number of small toy datasets included with it which makes it quick and easy to experiment with different machine learning algorithms on these datasets.
The sklearn.datasets.load_boston() method can be used to load this dataset.
Meet the data
The boston object that is returned by load_boston is a Bunch object, which is very similar to a dictionary. It contains keys and values.
Feature Information:
1. CRIM: per capita crime rate by town
2. ZN: proportion of residential land zoned for lots over 25,000 sq.ft.
3. INDUS: proportion of non-retail business acres per town
4. CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
5. NOX: nitric oxides concentration (parts per 10 million)
6. RM: average number of rooms per dwelling
7. AGE: proportion of owner-occupied units built prior to 1940
8. DIS: weighted distances to five Boston employment centres
9. RAD: index of accessibility to radial highways
10. TAX: full-value property-tax rate per $10,000
11. PTRATIO: pupil-teacher ratio by town
12. B: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
13. LSTAT: % lower status of the population
Target Information
14. MEDV: Median value of owner-occupied homes in $1000's
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(boston['data'], boston['target'], random_state=0)
print("X_train shape: {}".format(X_train.shape))
print("y_train shape: {}".format(y_train.shape))
print("X_test shape: {}".format(X_test.shape))
print("y_test shape: {}".format(y_test.shape))
Explanation: Measuring Success: Training and testing data
We want to build a machine learning model from this data that can predict the species of iris for a new set of measurements. But before we can apply our model to new measurements, we need to know whether it actually works -- that is, whether we should trust its predictions.
Unfortunately, we cannot use the data we used to build the model to evaluate it. This is because our model can always simply remember the whole training set, and will therefore always predict the correct label for any point in the training set. This "remembering" does not indicate to us whether the model will generalize well (in other words, whether it will also perform well on new data).
To assess the model's performance, we show it new data (data that it hasn't seen before) for which we have labels. This is usually done by splitting the labeled data we have collected (here, our 150 flower measurements) into two parts. One part of the data is used to build our machine learning model, and is called the training data or training set. The rest of the data will be used to assess how well the model works; this is called the test data, test set, or hold-out set.
scikit-learn contains a function that shuffles the dataset and splits it for you: the train_test_split function. This function extracts 75% of the rows in the data as the training set, together with the corresponding labels for this data. The remaining 25% of the data, together with the remaining labels, is declared as the test set. Deciding how much data you want to put into the training and the test set respectively is somewhat arbitrary, but scikit-learn's default 75/25 split is a reasonable starting point.
In scikit-learn, data is usually denoted with a capital X, while labels are denoted by a lowercase y. This is inspired by the standard formulation f(x)=y in mathematics, where x is the input to a function and y is the output. Following more conventions from mathematics, we use a capital X because the data is a two-dimensional array (a matrix) and a lowercase y because the target is a one-dimensional array (a vector).
Before making the split, the train_test_split function shuffles the dataset using a pseudorandom number generator. If we just took the last 25% of the data as a test set, all the data points would have the label 2, as the data points are sorted by the label.
To make sure this example code will always get the same output if run multiple times, we provide the pseudorandom number generator with a fixed seed using the random_state parameter.
The output of the train_test_split function is X_train, X_test, y_train, and y_test, which are all NumPy arrays. X_train contains 75% of the rows of the dataset, and X_test contains the remaining 25%.
End of explanation
# create dataframe from data in X_train
boston_df = pd.DataFrame(X_train, columns=boston.feature_names)
# Add in the target data
boston_df['MEDV'] = y_train
# Look at the first few rows
boston_df.head()
# create a scatter matrix from the dataframe
tmp = pd.plotting.scatter_matrix(boston_df, figsize=(15, 15))  # pd.scatter_matrix was removed in newer pandas versions
Explanation: First things first: Look at your data
Before building a machine learning model, it is often a good idea to inspect the data, to see if the task is easily solvable without machine learning, or if the desired information might not be contained in the data.
Additionally, inspecting the data is a good way to find abnormalities and peculiarities. Maybe some of your irises were measured using inches and not centimeters, for example. In the real world, inconsistencies in the data and unexpected measurements are very common, as are missing data and not-a-number (NaN) or infinite values.
One of the best ways to inspect data is to visualize it. One way to do this is by using a scatter plot. A scatter plot of the data puts one feature along the x-axis and another along the y-axis, and draws a dot for each data point. Unfortunately, computer screens have only two dimensions, which allows us to plot only two (or maybe three) features at a time. It is difficult to plot datasets with more than three features this way. One way around this problem is to do a pair plot, which looks at all possible pairs of features. If you have a small number of features, such as the four we have here, this is quite reasonable. You should keep in mind, however, that a pair plot does not show the interaction of all of the features at once, so some interesting aspects of the data may not be revealed when visualizing it this way.
In Python, the pandas library has a convenient function called scatter_matrix (available as pandas.plotting.scatter_matrix in current pandas versions) for creating pair plots for a DataFrame.
End of explanation
# Get a high-level overview of the data
boston_df.describe()
# Find which features are most highly correlated with the housing prices
df = boston_df
df['MEDV'] = y_train
df.corr()['MEDV']
Explanation: From the plots, we can see RM has a strong positive linear relationship with MEDV and LSTAT has a strong negative one. This makes sense - the housing price should go up as the number of rooms increases and the housing prices should go down as the percentage of lower class/income families in the neighborhood increases.
End of explanation
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
Explanation: Building your model: Linear Regression
Now we can start building the actual machine learning model. There are many regression algorithms in scikit-learn that we could use. Here we will use Ordinary Least Squares (OLS) Linear Regression because it is easy to understand and interpret.
All machine learning models in scikit-learn are implemented in their own classes, which are called Estimator classes. The Linear Regression algorithm is implemented in the LinearRegression class in the linear_model module. Before we can use the model, we need to instantiate the class into an object. This is when we will set any parameters of the model. The LinearRegression model doesn't have any particular parameters of importance.
End of explanation
lr.fit(X_train, y_train)
Explanation: The lr object encapsulates the algorithm that will be used to build the model from the training data, as well the algorithm to make predictions on new data points. It will also hold the information that the algorithm has extracted from the training data.
To build the model on the training set, we call the fit method of the lr object, which takes as arguments the NumPy array X_train containing the training data and the NumPy array y_train of the corresponding training labels.
End of explanation
print("lr.coef_: {}".format(lr.coef_))
print("lr.intercept_: {}".format(lr.intercept_))
Explanation: The “slope” parameters (w), also called weights or coefficients, are stored in the coef_ attribute, while the offset or intercept (b) is stored in the intercept_ attribute:
End of explanation
print("Training set score: {:.2f}".format(lr.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lr.score(X_test, y_test)))
Explanation: The intercept_ attribute is always a single float number, while the coef_ attribute is a NumPy array with one entry per input feature. As we only have 13 input features in this dataset, lr.coef_ has 13 entries.
Let’s look at the training set and test set performance:
End of explanation
# Scale the boston dataset
from sklearn.preprocessing import MinMaxScaler
X = MinMaxScaler().fit_transform(boston.data)
X_train, X_test, y_train, y_test = train_test_split(X, boston['target'], random_state=0)
lr = LinearRegression().fit(X_train, y_train)
print("Training set score: {:.2f}".format(lr.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lr.score(X_test, y_test)))
Explanation: An R^2 of around 0.64 on the test set is not very good, but we can see that the scores on the training and test sets are a decent distance apart. This means we are likely overfitting. With higher-dimensional datasets (meaning datasets with a large number of features), linear models become more powerful, and there is a higher chance of overfitting. More complicated linear models such as Ridge Regression and Lasso have been designed to help control this overfitting problem.
An R^2 of around 0.77 on the training set is OK, but not great. For a really good fit, we would want an R^2 of around 0.95 or so. This tells us we are missing something. One possibility is we could do some feature engineering and either include polynomial powers of some of the features and/or include products of some of the features.
Also, since linear models tend to work better when all of the features exist on roughly the same scale, we could attempt to scale our data as well.
Preprocessing and Scaling
Some algorithms, like neural networks, SVMs, and k-NearestNeighbors are very sensitive to the scaling of the data; while many others such as linear models with regularization (Ridge, Lasso, etc.) are moderately sensitive to the scaling of the data. Therefore, a common practice is to adjust the features so that the data representation is more suitable for these algorithms. Often, this is a simple per-feature rescaling and shift of the data.
Different Kinds of Preprocessing
Different algorithms benefit from different kinds of scaling, and thus scikit-learn supports a variety of scaling methods, though they all have a similar API.
StandardScaler
Neural networks expect all input features to vary in a similar way, and ideally to have a mean of 0, and a variance of 1. When using ANN, we must rescale our data so that it fulfills these requirements. For doing this automatically, scikit-learn has the StandardScaler. The StandardScaler in scikit-learn ensures that for each feature the mean is 0 and the variance is 1, bringing all features to the same magnitude. However, this scaling does not ensure any particular minimum and maximum values for the features.
MinMaxScaler
A common rescaling method for kernel SVMs is to scale the data such that all features are between 0 and 1. We can do this in scikit-learn by using the MinMaxScaler preprocessing method. The MinMaxScaler shifts the data such that all features are exactly between 0 and 1. For a two-dimensional dataset this means all of the data is contained within the rectangle created by the x-axis between 0 and 1 and the y-axis between 0 and 1.
RobustScaler
Standard scaling does not ensure any particular minimum and maximum values for the features. The RobustScaler works similarly to the StandardScaler in that it ensures statistical properties for each feature that guarantee that they are on the same scale. However, the RobustScaler uses the median and quartiles, instead of mean and variance. This makes the RobustScaler ignore data points that are very different from the rest (like measurement errors). These odd data points are also called outliers, and can lead to trouble for other scaling techniques.
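All three scalers share the same fit/transform API. As a brief sketch (reusing the X_train and X_test arrays defined above, and fitting on the training data only so that the test data is transformed with the same statistics):
# the scalers share the same API; fit on the training set, then transform both sets
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler
scaler = StandardScaler()                      # or MinMaxScaler() / RobustScaler()
scaler.fit(X_train)                            # learn per-feature statistics from the training data only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)       # reuse the same statistics on the test data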
End of explanation
from sklearn.datasets import load_boston
from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures, StandardScaler, RobustScaler
def load_extended_boston(scaler='minmax'):
boston = load_boston()
X = boston.data
if 'standard' == scaler:
X = StandardScaler().fit_transform(boston.data)
elif 'robust' == scaler:
X = RobustScaler().fit_transform(boston.data)
else:
X = MinMaxScaler().fit_transform(boston.data)
X = PolynomialFeatures(degree=2).fit_transform(X)
return X, boston.target
X, y = load_extended_boston()
X.shape
# What if we fit this new dataset with a vastly expanded set of features using OLS?
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
lr = LinearRegression().fit(X_train, y_train)
print("Training set score: {:.2f}".format(lr.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lr.score(X_test, y_test)))
Explanation: Ordinary Least Squares (OLS) regression is not sensitive to feature scaling, but all of the regularized linear methods which help reduce the overfitting present in OLS are sensitive to feature scaling.
Feature Engineering
Feature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work. Feature engineering is fundamental to the application of machine learning, and is both difficult and expensive. The need for manual feature engineering can be obviated by automated feature learning.
In particular, linear models might benefit greatly from generating new features via techniques such as binning, and adding polynomials and interactions. However, more complex models like random forests and SVMs might be able to learn more complex tasks without explicitly expanding the feature space.
In practice, the features that are used (and the match between features and method) is often the most important piece in making a machine learning approach work well.
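Binning itself is not used elsewhere in this notebook, but as an illustrative sketch only (it assumes KBinsDiscretizer is available in your scikit-learn version, and that column index 5 of the boston data is the RM feature), a single continuous feature can be turned into one-hot bin indicators like this:
# illustrative sketch of binning one feature; not used in the rest of this notebook
from sklearn.preprocessing import KBinsDiscretizer
binner = KBinsDiscretizer(n_bins=10, encode='onehot-dense', strategy='uniform')
rm_binned = binner.fit_transform(boston.data[:, [5]])   # bin the RM (rooms) feature into 10 equal-width bins
print(rm_binned.shape)                                   # one column per bin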
Interactions and Polynomials
One way to enrich a feature representation, particularly for linear models, is adding interaction features - products of individual original features. Another way to enrich a feature representation is to use polynomials of the original features - for a given feature x, we might want to consider x^2, x^3, x^4, and so on. This kind of feature engineering is often used in statistical modeling, but it’s also common in many practical machine learning applications.
Within scikit-learn, the addition of both interaction features and polynomial features is implemented in PolynomialFeatures in the preprocessing module.
In the code below, we modify the boston housing dataset by adding all polynomial features and interactions up to a degree of 2. The data originally had 13 features, which were expanded to 105 features: a constant term, the 13 original features, their squares, and all pairwise products of two different original features. degree=2 here means that we look at all features that are the product of up to two original features. The exact correspondence between input and output features can be found using the get_feature_names method (get_feature_names_out in newer scikit-learn versions).
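A small sketch of inspecting those generated feature names (the exact method name depends on your scikit-learn version, so this is hedged accordingly):
# sketch: inspect how PolynomialFeatures names the generated features
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=2).fit(boston.data)
# older scikit-learn versions expose get_feature_names(); newer ones expose get_feature_names_out()
names = (poly.get_feature_names_out(boston.feature_names)
         if hasattr(poly, 'get_feature_names_out')
         else poly.get_feature_names(boston.feature_names))
print(names[:10])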
End of explanation
from sklearn.linear_model import Ridge
ridge = Ridge().fit(X_train, y_train)
print("Training set score: {:.2f}".format(ridge.score(X_train, y_train)))
print("Test set score: {:.2f}".format(ridge.score(X_test, y_test)))
Explanation: Now the basic OLS model is doing a dramatically better job fitting the training set (R^2 of 0.95 vs 0.77).
This discrepancy between performance on the training set and the test set is a clear sign of overfitting, and therefore we should try to find a model that allows us to control complexity. One of the most commonly used alternatives to standard linear regression is ridge regression, which we will look into next.
Ridge Regression
Ridge regression is also a linear model for regression, so the formula it uses to make predictions is the same one used for ordinary least squares. In ridge regression, though, the coefficients (w) are chosen not only so that they predict well on the training data, but also to fit an additional constraint. We also want the magnitude of coefficients to be as small as possible; in other words, all entries of w should be close to zero. Intuitively, this means each feature should have as little effect on the outcome as possible (which translates to having a small slope), while still predicting well. This constraint is an example of what is called regularization. Regularization means explicitly restricting a model to avoid overfitting. The particular kind used by ridge regression is known as L2 regularization.
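One standard way to write the ridge objective (with the regularization strength denoted alpha, matching the parameter name used in the code below, and the intercept b left unpenalized) is:
$$ \min_{w,b} \; \sum_{i=1}^{n} \left( w^\top x_i + b - y_i \right)^2 + \alpha \sum_{j=1}^{p} w_j^2 $$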
Ridge regression is implemented in linear_model.Ridge. Let’s see how well it does on the extended Boston Housing dataset:
End of explanation
ridge10 = Ridge(alpha=10).fit(X_train, y_train)
print("Training set score: {:.2f}".format(ridge10.score(X_train, y_train)))
print("Test set score: {:.2f}".format(ridge10.score(X_test, y_test)))
Explanation: As you can see, the training set score of Ridge is lower than for LinearRegression, while the test set score is higher. This is consistent with our expectation. With linear regression, we were overfitting our data. Ridge is a more restricted model, so we are less likely to overfit. A less complex model means worse performance on the training set, but better generalization. As we are only interested in generalization performance, we should choose the Ridge model over the LinearRegression model.
The Ridge model makes a trade-off between the simplicity of the model (near-zero coefficients) and its performance on the training set. How much importance the model places on simplicity versus training set performance can be specified by the user, using the alpha parameter. In the previous example, we used the default parameter alpha=1.0. There is no reason why this will give us the best trade-off, though. The optimum setting of alpha depends on the particular dataset we are using. Increasing alpha forces coefficients to move more toward zero, which decreases training set performance but might help generalization. For example:
End of explanation
ridge01 = Ridge(alpha=0.1).fit(X_train, y_train)
print("Training set score: {:.2f}".format(ridge01.score(X_train, y_train)))
print("Test set score: {:.2f}".format(ridge01.score(X_test, y_test)))
Explanation: Decreasing alpha allows the coefficients to be less restricted. For very small values of alpha, coefficients are barely restricted at all, and we end up with a model that resembles LinearRegression:
End of explanation
plt.figure(figsize=(15, 10))
plt.plot(ridge.coef_, 's', label="Ridge alpha=1")
plt.plot(ridge10.coef_, '^', label="Ridge alpha=10")
plt.plot(ridge01.coef_, 'v', label="Ridge alpha=0.1")
plt.plot(lr.coef_, 'o', label="LinearRegression")
plt.xlabel("Coefficient index")
plt.ylabel("Coefficient magnitude")
plt.hlines(0, 0, len(lr.coef_))
plt.ylim(-25, 25)
plt.legend()
plt.show()
Explanation: Here, alpha=0.1 seems to be working well. We could try decreasing alpha even more to improve generalization. For now, notice how the parameter alpha corresponds to the model complexity.
Very shortly we need to think about systematic methods for properly select optimal values for parameters such as alpha.
We can also get a more qualitative insight into how the alpha parameter changes the model by inspecting the coef_ attribute of models with different values of alpha. A higher alpha means a more restricted model, so we expect the entries of coef_ to have smaller magnitude for a high value of alpha than for a low value of alpha. This is confirmed in the plot below:
End of explanation
# Let's evaluate cross-validation on the iris dataset using logistic regression (which is actually classification)
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
iris = load_iris()
logreg = LogisticRegression()
scores = cross_val_score(logreg, iris.data, iris.target)
print("Cross-validation scores: {}".format(scores))
Explanation: Clearly, the interactions and polynomial features gave us a good boost in performance when using Ridge. When using a more complex model like a random forest, the story can be a bit different, though. Adding features will benefit linear models the most. For very complex models, adding features may actually slightly decrease the performance.
Machine learning is complex. Often you have to try several experiments and just see what works best.
Model Evaluation and Improvement
To evaluate our supervised models, so far we have split our dataset into a training set and a test set using the train_test_split function, built a model on the training set by calling the fit method, and evaluated it on the test set using the score method, which for classification computes the fraction of correctly classified samples and for regression computes the R^2.
Remember, the reason we split our data into training and test sets is that we are interested in measuring how well our model generalizes to new, previously unseen data. We are not interested in how well our model fit the training set, but rather in how well it can make predictions for data that was not observed during training.
As we saw when exploring Ridge regression, we need a more robust way to assess generalization performance which is capable of automatically choosing optimal values for hyper-parameters such as alpha.
Cross-Validation
Cross-validation is a statistical method of evaluating generalization performance that is more stable and thorough than using a split into a training and a test set. In cross-validation, the data is instead split repeatedly and multiple models are trained. The most commonly used version of cross-validation is k-fold cross-validation, where k is a user-specified number, usually 5 or 10. When performing five-fold cross-validation, the data is first partitioned into five parts of (approximately) equal size, called folds. Next, a sequence of models is trained. The first model is trained using the first fold as the test set, and the remaining folds (2–5) are used as the training set. The model is built using the data in folds 2–5, and then the accuracy is evaluated on fold 1. Then another model is built, this time using fold 2 as the test set and the data in folds 1, 3, 4, and 5 as the training set. This process is repeated using folds 3, 4, and 5 as test sets. For each of these five splits of the data into training and test sets, we compute the accuracy. In the end, we have collected five accuracy values.
Usually, the first fifth of the data is the first fold, the second fifth of the data is the second fold, and so on.
The whole point of cross-validation is to be more robust than a simple train/test split so that the results are not likely to be influenced by a particularly good or bad split of the data. The main disadvantage is that it requires more computation.
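To make the fold bookkeeping concrete, here is a small hedged sketch that only prints the index partitions KFold would produce for ten samples (no model is trained here):
# sketch: show how KFold partitions 10 samples into 5 folds (indices only, no model)
import numpy as np
from sklearn.model_selection import KFold
for fold, (train_idx, test_idx) in enumerate(KFold(n_splits=5).split(np.arange(10).reshape(-1, 1))):
    print("fold", fold, "train:", train_idx, "test:", test_idx)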
Cross-Validation in scikit-learn
Cross-validation is implemented in scikit-learn using the cross_val_score function from the model_selection module. The parameters of the cross_val_score function are the model we want to evaluate, the training data, and the ground-truth labels.
End of explanation
scores = cross_val_score(logreg, iris.data, iris.target, cv=5)
print("Cross-validation scores: {}".format(scores))
Explanation: By default, cross_val_score performs k-fold cross-validation (three folds in older versions of scikit-learn, five folds in newer versions), returning one accuracy value per fold. We can change the number of folds used by changing the cv parameter:
End of explanation
print("Average cross-validation score: {:.2f}".format(scores.mean()))
Explanation: A common way to summarize the cross-validation accuracy is to compute the mean:
End of explanation
lr = LinearRegression()
scores = cross_val_score(lr, boston.data, boston.target)
print("Cross-validation scores: {}".format(scores))
Explanation: Using the mean cross-validation we can conclude that we expect the model to be around 96% accurate on average. Looking at all five scores produced by the five-fold cross-validation, we can also conclude that there is a relatively high variance in the accuracy between folds, ranging from 100% accuracy to 90% accuracy. This could imply that the model is very dependent on the particular folds used for training, but it could also just be a consequence of the small size of the dataset.
Benefits of Cross-Validation
There are several benefits to using cross-validation instead of a single split into a training and a test set. First, remember that train_test_split performs a random split of the data. Imagine that we are “lucky” when randomly splitting the data, and all examples that are hard to classify end up in the training set. In that case, the test set will only contain “easy” examples, and our test set accuracy will be unrealistically high. Conversely, if we are “unlucky,” we might have randomly put all the hard-to-classify examples in the test set and consequently obtain an unrealistically low score. However, when using cross-validation, each example will be in the training set exactly once: each example is in one of the folds, and each fold is the test set once. Therefore, the model needs to generalize well to all of the samples in the dataset for all of the cross-validation scores (and their mean) to be high.
Having multiple splits of the data also provides some information about how sensitive our model is to the selection of the training dataset. For the iris dataset, we saw accuracies between 90% and 100%. This is quite a range, and it provides us with an idea about how the model might perform in the worst case and best case scenarios when applied to new data.
Another benefit of cross-validation as compared to using a single split of the data is that we use our data more effectively. When using train_test_split, we usually use 75% of the data for training and 25% of the data for evaluation. When using five-fold cross-validation, in each iteration we can use four-fifths of the data (80%) to fit the model. When using 10-fold cross-validation, we can use nine-tenths of the data (90%) to fit the model. More data will usually result in more accurate models.
The main disadvantage of cross-validation is increased computational cost. As we are now training k models instead of a single model, cross-validation will be roughly k times slower than doing a single split of the data.
It is important to keep in mind that cross-validation is not a way to build a model that can be applied to new data. Cross-validation does not return a model. When calling cross_val_score, multiple models are built internally, but the purpose of cross-validation is only to evaluate how well a given algorithm will generalize when trained on a specific dataset.
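If you do want to inspect the models that were fitted during cross-validation (for inspection only, not as a substitute for refitting a final model), cross_validate can optionally hand them back. A hedged sketch, assuming a reasonably recent scikit-learn and reusing the logreg and iris objects from the example above:
# sketch: cross_validate can return the per-fold fitted models for inspection
from sklearn.model_selection import cross_validate
cv_results = cross_validate(logreg, iris.data, iris.target, cv=5, return_estimator=True)
print(cv_results['test_score'])
print(cv_results['estimator'])   # one fitted model per fold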
Stratified k-Fold Cross-Validation and Other Strategies
Splitting the dataset into k folds by starting with the first one-k-th part of the data, as described in the previous section, might not always be a good idea. For example, let’s have a look at the boston housing dataset:
End of explanation
# Let's look at the boston housing dataset again using shuffle-split cross-validation to ensure random sampling
# The following code splits the dataset into 80% training set and 20% test set for 3 iterations:
from sklearn.model_selection import ShuffleSplit
shuffle_split = ShuffleSplit(test_size=.2, train_size=.8, n_splits=3)
scores = cross_val_score(lr, boston.data, boston.target, cv=shuffle_split)
print("Cross-validation scores:\n{}".format(scores))
Explanation: As we can see, a default 3-fold cross-validation performed OK for the first two folds, but horribly badly for the third one.
The fundamental problem here is that if the data isn't organized in a random way, then just taking folds in order doesn't represent a random sampling for each fold. There are multiple possible ways to mitigate this issue.
Stratified k-Fold Cross-Validation
As the simple k-fold strategy would obviously fail for classification problems if the data is organized by target category, scikit-learn does not use it for classification, but rather uses stratified k-fold cross-validation. In stratified cross-validation, we split the data such that the proportions between classes are the same in each fold as they are in the whole dataset.
scikit-learn supports startified k-fold cross-validation via the StratifiedKFold class in the model_selection module.
For example, if 90% of your samples belong to class A and 10% of your samples belong to class B, then stratified cross-validation ensures that in each fold, 90% of samples belong to class A and 10% of samples belong to class B.
For regression, scikit-learn uses the standard k-fold cross-validation by default.
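To use stratified folds explicitly rather than relying on the default behaviour, you can pass a splitter object through the cv argument; a brief sketch reusing the iris example above:
# sketch: pass an explicit StratifiedKFold splitter through the cv argument
from sklearn.model_selection import StratifiedKFold
skf = StratifiedKFold(n_splits=5)
scores = cross_val_score(logreg, iris.data, iris.target, cv=skf)
print("Stratified 5-fold scores: {}".format(scores))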
Shuffle-split cross-validation
Another, very flexible strategy for cross-validation is shuffle-split cross-validation. In shuffle-split cross-validation, each split samples train_size many points for the training set and test_size many (disjoint) point for the test set. This splitting is repeated n_iter times. You can use integers for train_size and test_size to use absolute sizes for these sets, or floating-point numbers to use fractions of the whole dataset.
Since the sampling in shuffle-split cross-validation is done in a random fashion, this is a safer alternative to default k-Fold Cross-Validation when the data isn't truly randomized.
scikit-learn supports shuffle-split cross-validation via the ShuffleSplit class in the model_selection module.
There is also a stratified variant of ShuffleSplit, aptly named StratifiedShuffleSplit, which can provide more reliable results for classification tasks.
End of explanation
X, y = load_extended_boston(scaler='standard')
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
print("Size of training set: {} size of test set: {}".format(X_train.shape[0], X_test.shape[0]))
best_score = 0
for alpha in [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]:
# for each candidate value of alpha, train a Ridge model
ridge = Ridge(alpha=alpha)
ridge.fit(X_train, y_train)
# evaluate the Ridge model on the test set
score = ridge.score(X_test, y_test)
# if we got a better score, store the score and parameters
if score > best_score:
best_score = score
best_parameters = {'alpha': alpha}
print("Best score: {:.2f}".format(best_score))
print("Best parameters: {}".format(best_parameters))
Explanation: Grid Search
Now that we know how to evaluate how well a model generalizes, we can take the next step and improve the model’s generalization performance by tuning its parameters. We discussed the parameter settings of the Ridge model for ridge regression earlier. Finding the values of the important parameters of a model (the ones that provide the best generalization performance) is a tricky task, but necessary for almost all models and datasets. Because it is such a common task, there are standard methods in scikit-learn to help you with it. The most commonly used method is grid search, which basically means trying all possible combinations of the parameters of interest.
Consider the case of ridge regression, as implemented in the Ridge class. As we discussed earlier, there is one important parameter: the regularization parameter, alpha. Say we want to try the values 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, and 100 for alpha. Because we have eleven different settings for alpha and alpha is the only parameter, we have 11 combinations of parameters in total. Looking at all possible combinations creates a table (or grid) of parameter settings for the Ridge regression model.
Simple Grid Search
We can implement a simple grid search just as a for loop over the parameter, training and evaluating a classifier for each value:
End of explanation
X, y = load_extended_boston(scaler='standard')
# split data into train+validation set and test set
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, random_state=0)
# split train+validation set into training and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(X_trainval, y_trainval, random_state=1)
print("Size of training set: {} size of validation set: {} size of test set:"
" {}\n".format(X_train.shape[0], X_valid.shape[0], X_test.shape[0]))
best_score = 0
for alpha in [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]:
# for each candidate value of alpha, train a Ridge model
ridge = Ridge(alpha=alpha)
ridge.fit(X_train, y_train)
# evaluate the Ridge model on the validation set
score = ridge.score(X_valid, y_valid)
# if we got a better score, store the score and parameters
if score > best_score:
best_score = score
best_parameters = {'alpha': alpha}
# rebuild a model on the combined training and validation set,
# and evaluate it on the test set
ridge = Ridge(**best_parameters)
ridge.fit(X_trainval, y_trainval)
test_score = ridge.score(X_test, y_test)
print("Best score on validation set: {:.2f}".format(best_score))
print("Best parameters: ", best_parameters)
print("Test set score with best parameters: {:.2f}".format(test_score))
Explanation: The Danger of Overfitting the Parameters and the Validation Set
Given this result, we might be tempted to report that we found a model that achieves an R^2 of 0.78 on our dataset. However, this claim could be overly optimistic (or just wrong), for the following reason: we tried many different parameters and selected the one with the best score on the test set, but this score won't necessarily carry over to new data. Because we used the test data to adjust the parameters, we can no longer use it to assess how good the model is. This is the same reason we needed to split the data into training and test sets in the first place; we need an independent dataset to evaluate, one that was not used to create the model.
One way to resolve this problem is to split the data again, so we have three sets: the training set to build the model, the validation (or development) set to select the parameters of the model, and the test set to evaluate the performance of the selected parameters.
After selecting the best parameters using the validation set, we can rebuild a model using the parameter settings we found, but now training on both the training data and the validation data. This way, we can use as much data as possible to build our model. This leads to the following implementation:
End of explanation
best_score = 0
for alpha in [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]:
# for each candidate value of alpha, train a Ridge model
ridge = Ridge(alpha=alpha)
# perform cross-validation
scores = cross_val_score(ridge, X_trainval, y_trainval, cv=5)
# compute mean cross-validation accuracy
score = np.mean(scores)
# if we got a better score, store the score and parameters
if score > best_score:
best_score = score
best_parameters = {'alpha': alpha}
# rebuild a model on the combined training and validation set,
# and evaluate it on the test set
ridge = Ridge(**best_parameters)
ridge.fit(X_trainval, y_trainval)
test_score = ridge.score(X_test, y_test)
print("Best score on validation set: {:.2f}".format(best_score))
print("Best parameters: ", best_parameters)
print("Test set score with best parameters: {:.2f}".format(test_score))
Explanation: The best score on the validation set is 0.92 (R^2). However, the score on the test set—the score that actually tells us how well we generalize—is lower, at 0.78. So we can claim that our model explains about 78% of the variance in new data. This happens to be the same as before, but now we can make a stronger claim, since the final test set wasn't used in any way, shape, or form during hyper-parameter tuning.
The distinction between the training set, validation set, and test set is fundamentally important to applying machine learning methods in practice. Any choices made based on the test set accuracy “leak” information from the test set into the model. Therefore, it is important to keep a separate test set, which is only used for the final evaluation. It is good practice to do all exploratory analysis and model selection using the combination of a training and a validation set, and reserve the test set for a final evaluation—this is even true for exploratory visualization. Strictly speaking, evaluating more than one model on the test set and choosing the better of the two will result in an overly optimistic estimate of how accurate the model is.
Grid Search with Cross-Validation
While the method of splitting the data into a training, a validation, and a test set that we just saw is workable, and relatively commonly used, it is quite sensitive to how exactly the data is split. From the output of the previous code snippet we can see that the search selects 'alpha': 50 as the best parameter. But if we were to take a different part of the training data as the validation set, it may optimize for a different value. For a better estimate of the generalization performance, instead of using a single split into a training and a validation set, we can use cross-validation to evaluate the performance of each parameter combination. This method can be coded up as follows:
End of explanation
param_grid = {'alpha': [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]}
print("Parameter grid:\n{}".format(param_grid))
Explanation: To evaluate the performance of the Ridge regression model for each setting of alpha using five-fold cross-validation, we need to train 11 * 5 = 55 models. As you can imagine, the main downside of the use of cross-validation is the time it takes to train all these models. However, as you can see here, it is a more reliable method which is less sensitive to how precisely the validation set is sampled from the overall training set, and thus more likely to generalize well.
GridSearchCV
Because grid search with cross-validation is such a commonly used method to adjust parameters, scikit-learn provides the GridSearchCV class, which implements it in the form of an estimator. To use the GridSearchCV class, you first need to specify the parameters you want to search over using a dictionary. GridSearchCV will then perform all the necessary model fits. The keys of the dictionary are the names of parameters we want to adjust (as given when constructing the model—in this case, alpha), and the values are the parameter settings we want to try out. Trying the values 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, and 100 for alpha translates to the following dictionary:
End of explanation
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Ridge
grid_search = GridSearchCV(Ridge(), param_grid, cv=5)
Explanation: We can now instantiate the GridSearchCV class with the model (Ridge), the parameter grid to search (param_grid), and the cross-validation strategy we want to use (here, five-fold cross-validation):
End of explanation
X, y = load_extended_boston(scaler='standard')
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
Explanation: GridSearchCV will use cross-validation in place of the split into a training and validation set that we used before. However, we still need to split the data into a training and a test set, to avoid overfitting the parameters:
End of explanation
grid_search.fit(X_train, y_train)
Explanation: The grid_search object that we created behaves just like a classifier; we can call the standard methods fit, predict, and score on it. However, when we call fit, it will run cross-validation for each combination of parameters we specified in param_grid:
End of explanation
print("Test set score: {:.2f}".format(grid_search.score(X_test, y_test)))
Explanation: Fitting the GridSearchCV object not only searches for the best parameters, but also automatically fits a new model on the whole training dataset with the parameters that yielded the best cross-validation performance. What happens in fit is therefore equivalent to the result of the code we saw at the beginning of this section. The GridSearchCV class provides a very convenient interface to access the retrained model using the predict and score methods. To evaluate how well the best found parameters generalize, we can call score on the test set:
End of explanation
print("Best parameters: {}".format(grid_search.best_params_))
print("Best cross-validation score: {:.2f}".format(grid_search.best_score_))
Explanation: Choosing the parameters using cross-validation, we actually found a model that achieves an R^2 of 0.77 on the test set. The important thing here is that we did not use the test set to choose the parameters. The parameters that were found are stored in the best_params_ attribute, and the best cross-validation score (the mean score over the different splits for this parameter setting) is stored in best_score_:
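GridSearchCV also records the full search results in its cv_results_ attribute, which can be inspected as a DataFrame; a short sketch (reusing the grid_search object fitted above):
# sketch: inspect the full grid-search results, one row per parameter setting
import pandas as pd
results = pd.DataFrame(grid_search.cv_results_)
print(results[['param_alpha', 'mean_test_score', 'std_test_score']])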
End of explanation
print("Best estimator:\n{}".format(grid_search.best_estimator_))
Explanation: Sometimes it is helpful to have access to the actual model that was found—for example, to look at coefficients or feature importances. You can access the model with the best parameters trained on the whole training set using the best_estimator_ attribute:
End of explanation
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Ridge
param_grid = {'alpha': [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]}
grid_search = GridSearchCV(Ridge(), param_grid, cv=5)
X, y = load_extended_boston(scaler='standard')
for i in range(10):
X_train, X_test, y_train, y_test = train_test_split(X, y)
grid_search.fit(X_train, y_train)
print("Run {} - Test set score: {:.2f} Best parameters: {}".format(i, grid_search.score(X_test, y_test),
grid_search.best_params_))
Explanation: Because grid_search itself has predict and score methods, using best_estimator_ is not needed to make predictions or evaluate the model.
Putting it all together
The one thing we didn't do was experiment with different train/test splits. Let's run it with randomness a bunch of times and see how consistent it is:
End of explanation |
15,542 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements; and to You under the Apache License, Version 2.0.
Train a linear regression model
In this notebook, we are going to use the tensor module from PySINGA to train a linear regression model. We use this example to illustrate the usage of tensor of PySINGA. Please refer the documentation page to for more tensor functions provided by PySINGA.
Step1: To import the tensor module of PySINGA, run
Step2: The ground-truth
Our problem is to find a line that fits a set of 2-d data points.
We first plot the ground truth line,
Step3: Generating the training data
Then we generate the training data points by adding random noise to points sampled from the ground truth line.
30 data points are generated.
Step4: Training via SGD
Assuming that we know the training data points are sampled from a line, we still don't know the line's slope and intercept. Training then means learning the slope (k) and intercept (b) by minimizing the error, i.e. ||kx+b-y||^2.
1. we set the initial values of k and b (could be any values).
2. we iteratively update k and b by moving them in the direction that reduces the prediction error, i.e. along the negative gradient direction. For every iteration, we plot the learned line.
Step5: SINGA tensor module supports basic linear algebra operations, like + - * /, and advanced functions including axpy, gemm, gemv, and random function (e.g., Gaussian and Uniform).
SINGA Tensor instances could be created via tensor.Tensor() by specifying the shape, and optionally the device and data type. Note that every Tensor instance should be initialized (e.g., via set_value() or random functions) before reading data from it. You can also create Tensor instances from numpy arrays,
numpy array could be converted into SINGA tensor via tensor.from_numpy(np_ary)
SINGA tensor could be converted into numpy array via tensor.to_numpy(); Note that the tensor should be on the host device. tensor instances could be transferred from other devices to host device via to_host()
Users cannot read a single cell of the Tensor instance. To read a single cell, users need to convert the Tensor into a numpy array.
Step6: We can see that the learned line is becoming closer to the ground truth line (in blue color).
Next | Python Code:
from __future__ import division
from __future__ import print_function
from builtins import range
from past.utils import old_div
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements; and to You under the Apache License, Version 2.0.
Train a linear regression model
In this notebook, we are going to use the tensor module from PySINGA to train a linear regression model. We use this example to illustrate the usage of the tensor module of PySINGA. Please refer to the documentation page for more tensor functions provided by PySINGA.
End of explanation
from singa import tensor
Explanation: To import the tensor module of PySINGA, run
End of explanation
a, b = 3, 2
f = lambda x: a * x + b
gx = np.linspace(0.,1,100)
gy = [f(x) for x in gx]
plt.plot(gx, gy, label='y=f(x)')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc='best')
Explanation: The ground-truth
Our problem is to find a line that fits a set of 2-d data points.
We first plot the ground truth line,
End of explanation
nb_points = 30
# generate training data
train_x = np.asarray(np.random.uniform(0., 1., nb_points), np.float32)
train_y = np.asarray(f(train_x) + np.random.rand(nb_points), np.float32)
plt.plot(train_x, train_y, 'bo', ms=7)
Explanation: Generating the training data
Then we generate the training data points by adding random noise to points sampled from the ground truth line.
30 data points are generated.
End of explanation
def plot(idx, x, y):
global gx, gy, axes
# print the ground truth line
axes[idx//5, idx%5].plot(gx, gy, label='y=f(x)')
# print the learned line
axes[idx//5, idx%5].plot(x, y, label='y=kx+b')
axes[idx//5, idx%5].legend(loc='best')
# set hyper-parameters
max_iter = 15
alpha = 0.05
# init parameters
k, b = 2.,0.
Explanation: Training via SGD
Assuming that we know the training data points are sampled from a line, we still don't know the line's slope and intercept. Training then means learning the slope (k) and intercept (b) by minimizing the error, i.e. ||kx+b-y||^2.
1. we set the initial values of k and b (could be any values).
2. we iteratively update k and b by moving them in the direction that reduces the prediction error, i.e. along the negative gradient direction. For every iteration, we plot the learned line.
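For reference, the gradients of the mean squared error L(k, b) used in the update below are given by the following expressions (in the code the constant factor 2 is dropped, which is equivalent to absorbing it into the learning rate):
$$ \frac{\partial L}{\partial k} = \frac{2}{n}\sum_{i=1}^{n}\left(k x_i + b - y_i\right) x_i, \qquad \frac{\partial L}{\partial b} = \frac{2}{n}\sum_{i=1}^{n}\left(k x_i + b - y_i\right) $$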
End of explanation
# to plot the intermediate results
fig, axes = plt.subplots(3, 5, figsize=(12, 8))
x = tensor.from_numpy(train_x)
y = tensor.from_numpy(train_y)
# sgd
for idx in range(max_iter):
y_ = x * k + b
err = y_ - y
loss = old_div(tensor.sum(err * err), nb_points)
print('loss at iter %d = %f' % (idx, loss))
da1 = old_div(tensor.sum(err * x), nb_points)
db1 = old_div(tensor.sum(err), nb_points)
# update the parameters
k -= da1 * alpha
b -= db1 * alpha
plot(idx, tensor.to_numpy(x), tensor.to_numpy(y_))
Explanation: SINGA tensor module supports basic linear algebra operations, like + - * /, and advanced functions including axpy, gemm, gemv, and random function (e.g., Gaussian and Uniform).
SINGA Tensor instances could be created via tensor.Tensor() by specifying the shape, and optionally the device and data type. Note that every Tensor instance should be initialized (e.g., via set_value() or random functions) before reading data from it. You can also create Tensor instances from numpy arrays,
numpy array could be converted into SINGA tensor via tensor.from_numpy(np_ary)
SINGA tensor could be converted into numpy array via tensor.to_numpy(); Note that the tensor should be on the host device. tensor instances could be transferred from other devices to host device via to_host()
Users cannot read a single cell of the Tensor instance. To read a single cell, users need to convert the Tensor into a numpy array.
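A minimal round-trip sketch of these conversions (assuming PySINGA is installed and the imports above have been run; the scalar arithmetic mirrors the x * k + b pattern used in the SGD loop):
# sketch: numpy <-> SINGA tensor round trip; reading values requires converting back to numpy
a = np.array([1.0, 2.0, 3.0], dtype=np.float32)
t = tensor.from_numpy(a)        # numpy array -> SINGA tensor
u = t * 2.0 + 1.0               # scalar arithmetic on the tensor, as used in the SGD loop above
print(tensor.to_numpy(u))       # SINGA tensor -> numpy array (the tensor must live on the host device)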
End of explanation
# to plot the intermediate results
fig, axes = plt.subplots(3, 5, figsize=(12, 8))
x = tensor.from_numpy(train_x)
y = tensor.from_numpy(train_y)
# sgd
for idx in range(max_iter):
y_ = x * k + b
err = y_ - y
loss = old_div(tensor.sum(err * err), nb_points)
print('loss at iter %d = %f' % (idx, loss))
da1 = old_div(tensor.sum(err * x), nb_points)
db1 = old_div(tensor.sum(err), nb_points)
# update the parameters
k -= da1 * alpha
b -= db1 * alpha
plot(idx, tensor.to_numpy(x), tensor.to_numpy(y_))
Explanation: We can see that the learned line is becoming closer to the ground truth line (in blue color).
Next: MLP example
End of explanation |
15,543 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with Scikit-learn pipelines
Nearest neighbor search is a fundamental building block of many machine learning algorithms, including in supervised learning with kNN-classifiers and kNN-regressors, and unsupervised learning with manifold learning, and clustering. It would be useful to be able to bring the speed of PyNNDescent's approximate nearest neighbor search to bear on these problems without having to re-implement everything from scratch. Fortunately Scikit-learn has done most of the work for us with their KNeighborsTransformer, which provides a means to insert nearest neighbor computations into sklearn pipelines, and feed the results to many of their models that make use of nearest neighbor computations. It is worth reading through the documentation they have, because we are going to use PyNNDescent as a drop in replacement.
To make this as simple as possible PyNNDescent implements a class PyNNDescentTransformer that acts as a KNeighborsTransformer and can be dropped into all the same pipelines. Let's see an example of this working ...
Step2: As usual we will need some data to play with. In this case let's use a random subsample of MNIST digits.
Step3: Now we need to make a pipeline that feeds the nearest neighbor results into a downstream task. To demonstrate how this can work we'll try manifold learning. First we will try out Isomap and then t-SNE. In both cases we can provide a "precomputed" distance matrix, and if it is a sparse matrix (as output by KNeighborsTransformer) then any entry not explicitly provided as a non-zero element of the matrix will be ignored (or treated as an effectively infinite distance). To make the whole thing work we simple make an sklearn pipeline (and could easily include pre-processing steps such as categorical encoding, or data scaling and standardisation as earlier steps if we wished) that first uses the KNeighborsTransformer to process the raw data into a nearest neighbor graph, and then passes that on to either Isomap or TSNE. For comparison we'll drop in a PyNNDescentTransformer instead and see how that effects the results.
Step4: First let's try Isomap. The algorithm first constructs a k-nearest-neighbor graph (which our transformers will handle in the pipeline), then measures distances between points as path lengths in that graph. Finally it performs an eigendecomposition of the resulting distance matrix. We can't do much to speed up the latter two steps, which are still non-trivial, but hopefully we can get some speedup by substituting in the approximate nearest neighbor computation.
Step5: A two-times speedup is not bad, especially since we only accelerated one component of the full algorithm. It is quite good considering it was simply a matter of dropping a different class into a pipeline. More importantly, as we scale to larger amounts of data the nearest neighbor search comes to dominate the overall algorithm run-time, so we can expect to only get better speedups for more data. We can plot the results to ensure we are getting qualitatively the same thing.
Step6: Now let's try t-SNE. This algorithm requires nearest neighbors as a first step, and then the second major part, in terms of computation time, is the optimization of a layout of a modified k-neighbor graph. We can hope for some improvement in the first part, which usually accounts for around half the overall run-time for small data (and comes to consume a majority of the run-time for large datasets).
Step7: Again we have an approximate two-times speedup. Again this was achieved by simply substituting a different class into the pipeline (although in the case we tweaked the early_termination_value so it would stop sooner). Again we can look at the qualitative results and see that we are getting something very similar.
Step8: So the results, in both cases, look pretty good, and we did get a good speed-up. A question remains -- how fast was the nearest neighbor component, and how accurate was it? We can write a simple function to measure the neighbor accuracy
Step9: So for the Isomap case we went from taking over one and a half minutes down to less than a second. While doing so we still achieved over 99% accuracy in the nearest neighbors. This seems like a good tradeoff.
By constrast t-SNE requires a much larger number of neighbors (approximately three times the desired perplexity value, which defaults to 30 in sklearn's implementation). This is a little more of a challenge so we might expect it to take longer. | Python Code:
from sklearn.manifold import Isomap, TSNE
from sklearn.neighbors import KNeighborsTransformer
from pynndescent import PyNNDescentTransformer
from sklearn.pipeline import make_pipeline
from sklearn.datasets import fetch_openml
from sklearn.utils import shuffle
import seaborn as sns
Explanation: Working with Scikit-learn pipelines
Nearest neighbor search is a fundamental building block of many machine learning algorithms, including in supervised learning with kNN-classifiers and kNN-regressors, and unsupervised learning with manifold learning, and clustering. It would be useful to be able to bring the speed of PyNNDescent's approximate nearest neighbor search to bear on these problems without having to re-implement everything from scratch. Fortunately Scikit-learn has done most of the work for us with their KNeighborsTransformer, which provides a means to insert nearest neighbor computations into sklearn pipelines, and feed the results to many of their models that make use of nearest neighbor computations. It is worth reading through the documentation they have, because we are going to use PyNNDescent as a drop in replacement.
To make this as simple as possible PyNNDescent implements a class PyNNDescentTransformer that acts as a KNeighborsTransformer and can be dropped into all the same pipelines. Let's see an example of this working ...
End of explanation
def load_mnist(n_samples):
    """Load MNIST, shuffle the data, and return only n_samples."""
mnist = fetch_openml("mnist_784")
X, y = shuffle(mnist.data, mnist.target, random_state=2)
return X[:n_samples] / 255, y[:n_samples]
data, target = load_mnist(10000)
Explanation: As usual we will need some data to play with. In this case let's use a random subsample of MNIST digits.
End of explanation
sklearn_isomap = make_pipeline(
KNeighborsTransformer(n_neighbors=15),
Isomap(metric='precomputed')
)
pynnd_isomap = make_pipeline(
PyNNDescentTransformer(n_neighbors=15),
Isomap(metric='precomputed')
)
sklearn_tsne = make_pipeline(
KNeighborsTransformer(n_neighbors=92),
TSNE(metric='precomputed', random_state=42)
)
pynnd_tsne = make_pipeline(
PyNNDescentTransformer(n_neighbors=92, early_termination_value=0.05),
TSNE(metric='precomputed', random_state=42)
)
Explanation: Now we need to make a pipeline that feeds the nearest neighbor results into a downstream task. To demonstrate how this can work we'll try manifold learning. First we will try out Isomap and then t-SNE. In both cases we can provide a "precomputed" distance matrix, and if it is a sparse matrix (as output by KNeighborsTransformer) then any entry not explicitly provided as a non-zero element of the matrix will be ignored (or treated as an effectively infinite distance). To make the whole thing work we simple make an sklearn pipeline (and could easily include pre-processing steps such as categorical encoding, or data scaling and standardisation as earlier steps if we wished) that first uses the KNeighborsTransformer to process the raw data into a nearest neighbor graph, and then passes that on to either Isomap or TSNE. For comparison we'll drop in a PyNNDescentTransformer instead and see how that effects the results.
End of explanation
%%time
sklearn_iso_map = sklearn_isomap.fit_transform(data)
%%time
pynnd_iso_map = pynnd_isomap.fit_transform(data)
Explanation: First let's try Isomap. The algorithm first constructs a k-nearest-neighbor graph (which our transformers will handle in the pipeline), then measures distances between points as path lengths in that graph. Finally it performs an eigendecomposition of the resulting distance matrix. We can't do much to speed up the latter two steps, which are still non-trivial, but hopefully we can get some speedup by substituting in the approximate nearest neighbor computation.
End of explanation
sns.scatterplot(x=sklearn_iso_map.T[0], y=sklearn_iso_map.T[1], hue=target, palette="Spectral", size=1);
sns.scatterplot(x=pynnd_iso_map.T[0], y=pynnd_iso_map.T[1], hue=target, palette="Spectral", size=1);
Explanation: A two-times speedup is not bad, especially since we only accelerated one component of the full algorithm. It is quite good considering it was simply a matter of dropping a different class into a pipeline. More importantly, as we scale to larger amounts of data the nearest neighbor search comes to dominate the overall algorithm run-time, so we can expect to only get better speedups for more data. We can plot the results to ensure we are getting qualitatively the same thing.
End of explanation
%%time
sklearn_tsne_map = sklearn_tsne.fit_transform(data)
%%time
pynnd_tsne_map = pynnd_tsne.fit_transform(data)
Explanation: Now let's try t-SNE. This algorithm requires nearest neighbors as a first step, and then the second major part, in terms of computation time, is the optimization of a layout of a modified k-neighbor graph. We can hope for some improvement in the first part, which usually accounts for around half the overall run-time for small data (and comes to consume a majority of the run-time for large datasets).
End of explanation
sns.scatterplot(x=sklearn_tsne_map.T[0], y=sklearn_tsne_map.T[1], hue=target, palette="Spectral", size=1);
sns.scatterplot(x=pynnd_tsne_map.T[0], y=pynnd_tsne_map.T[1], hue=target, palette="Spectral", size=1);
Explanation: Again we have an approximate two-times speedup. Again this was achieved by simply substituting a different class into the pipeline (although in the case we tweaked the early_termination_value so it would stop sooner). Again we can look at the qualitative results and see that we are getting something very similar.
End of explanation
import numba
import numpy as np
@numba.njit()
def arr_intersect(ar1, ar2):
aux = np.sort(np.concatenate((ar1, ar2)))
return aux[:-1][aux[:-1] == aux[1:]]
@numba.njit()
def neighbor_accuracy_numba(n1_indptr, n1_indices, n2_indptr, n2_indices):
result = 0.0
for i in range(n1_indptr.shape[0] - 1):
indices1 = n1_indices[n1_indptr[i]:n1_indptr[i+1]]
indices2 = n2_indices[n2_indptr[i]:n2_indptr[i+1]]
n_correct = np.float64(arr_intersect(indices1, indices2).shape[0])
result += n_correct / indices1.shape[0]
return result / (n1_indptr.shape[0] - 1)
def neighbor_accuracy(neighbors1, neighbors2):
return neighbor_accuracy_numba(
neighbors1.indptr, neighbors1.indices, neighbors2.indptr, neighbors2.indices
)
%time true_neighbors = KNeighborsTransformer(n_neighbors=15).fit_transform(data)
%time pynnd_neighbors = PyNNDescentTransformer(n_neighbors=15).fit_transform(data)
print(f"Neighbor accuracy is {neighbor_accuracy(true_neighbors, pynnd_neighbors) * 100.0}%")
Explanation: So the results, in both cases, look pretty good, and we did get a good speed-up. A question remains -- how fast was the nearest neighbor component, and how accurate was it? We can write a simple function to measure the neighbor accuracy: compute the average percentage intersection in the neighbor sets of each sample point. Then let's just run the transformers and compare the times as well as computing the actual percentage accuracy.
End of explanation
%time true_neighbors = KNeighborsTransformer(n_neighbors=92).fit_transform(data)
%time pynnd_neighbors = PyNNDescentTransformer(n_neighbors=92, early_termination_value=0.05).fit_transform(data)
print(f"Neighbor accuracy is {neighbor_accuracy(true_neighbors, pynnd_neighbors) * 100.0}%")
Explanation: So for the Isomap case we went from taking over one and a half minutes down to less than a second. While doing so we still achieved over 99% accuracy in the nearest neighbors. This seems like a good tradeoff.
By contrast t-SNE requires a much larger number of neighbors (approximately three times the desired perplexity value, which defaults to 30 in sklearn's implementation). This is a little more of a challenge so we might expect it to take longer.
End of explanation |
15,544 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Statistical Distribution
Discrete distribution
Continuous distribution
Sample(small) distribution
Discrete Distribution
Binomial distribution $B(n,p)$
Hypergeometric distribution
Geometric distribution
Poisson distribution $P(\lambda)$
1.1 Binomial distribution $B(n,p)$
$$ P(X=k) = C_n^k p^k (1-p)^{n-k}, k=0,1,...,n$$
then $ X \sim B(n,p) $.
Step1: 1.2 Hypergeometric distribution
$$ P(X=k) = \frac {C_M^k C_{N-M}^{n-k}} {C_N^n} $$
then $ X \sim$ Hypergeometric distribution with parameters ${N,M,n}$
Step2: 1.3 Geometric distribution
$$ P(X=k) = p(1-p)^{k-1}, \quad k=1,2,\dots$$
Step3: 1.4 Poisson distribution $P(\lambda)$
$$ P(X=k) = e^{-\lambda} \frac{\lambda^k}{k!}, k = 0,1,2,... $$
then $ X \sim P(\lambda)$, $\lambda>0$
Step4: Continuous distribution
Uniform distribution $R(a,b)$
Exponential distribution $E(\lambda)$
Normal distribution $N(\mu,\sigma^2)$
2.1 Uniform distribution $R(a,b)$
$$ f(x)=\left\{\begin{array}{ll}
\frac{1}{b-a}, & a<x<b \\
0, & \text{otherwise}
\end{array}\right.$$
then $X \sim R(a,b)$
Step5: 2.2 Exponential distribution $E(\lambda)$
$$ f(x)=\left\{\begin{array}{ll}
\lambda e^{-\lambda x}, & x>0 \\
0, & \text{otherwise}
\end{array}\right.$$
then $X \sim E(\lambda)$
Step6: 2.3 Normal distribution $N(\mu,\sigma^2)$
$$ f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{(x-\mu)^2}{2\sigma^2}} $$
then $X \sim N(\mu,\sigma^2)$.
Step7: Sample(small) distribution
$\chi^2$ distribution $\chi^2(n)$
t distribution $t(n)$
F distribution $F(n)$
3.1 $\chi^2$ distribution $\chi^2(n)$
Given $X_i \sim N(0,1)$,
$$ Y = \sum_{i=1}^{n} X_i^2 \sim \chi^2(n) $$
Step8: 3.2 t distribution $t(n)$
Given $X \sim N(0,1)$, $Y \sim \chi^2(n)$,
$$ T \hat{=} \frac{X}{\sqrt{\frac{Y}{n}}} \sim t(n)$$
Step9: 3.3 F distribution $F(n)$
Given $X \sim \chi^2(m)$, $Y \sim \chi^2(n)$,
$$ F \hat{=} \frac{\frac{X}{m}}{\frac{Y}{n}} \sim F(m,n)$$ | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
n,p=50,0.1
plt.hist(np.random.binomial(n,p,size=5000))
plt.show()
Explanation: Statistical Distribution
Discrete distribution
Continuous distribution
Sample(small) distribution
Discrete Distribution
Binomial distribution $B(n,p)$
Hypergeometric distribution
Geometric distribution
Poisson distribution $P(\lambda)$
1.1 Binomial distribution $B(n,p)$
$$ P(X=k) = C_n^k p^k (1-p)^{n-k}, k=0,1,...,n$$
then $ X \sim B(n,p) $.
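A quick sanity check (an illustrative aside, not part of the original notebook): the mean of $B(n,p)$ is $np$, so the sample mean of draws with the n, p defined in the code cell above should be close to $50 \times 0.1 = 5$.
samples = np.random.binomial(n, p, size=5000)
print(samples.mean())  # should be close to n*p = 5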
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
ngood, nbad, nsamp = 90, 10, 50
plt.hist(np.random.hypergeometric(ngood, nbad, nsamp, 5000))
plt.show()
Explanation: 1.2 Hypergeometric distribution
$$ P(X=k) = \frac {C_M^k C_{N-M}^{n-k}} {C_N^n} $$
then $ X \sim$ Hypergeometric distribution with parameters $\{N,M,n\}$
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(np.random.geometric(p=0.35, size=10000))
plt.show()
Explanation: 1.3 Geometric distribution
$$ P(X=k) = p(1-p)^{k-1}$$
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(np.random.poisson(5, 10000))
plt.show()
Explanation: 1.4 Poisson distribution $P(\lambda)$
$$ P(X=k) = e^{-\lambda} \frac{\lambda^k}{k!}, k = 0,1,2,... $$
then $ X \sim P(\lambda)$, $\lambda>0$
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(np.random.random_sample(1000))
plt.show()
Explanation: Continuous distribution
Homogeneous distribution $R(a,b)$
Exponential distribution $E(\lambda)$
Normal distribution $N(\mu,\sigma^2)$
2.1 Homogeneous distribution $R(a,b)$
$$ f(x)=\left\{\begin{array}{ll}
c, & a<x<b \\
0, & \text{otherwise}
\end{array}\right.$$
then $X \sim R(a,b)$
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(np.random.exponential(scale=1.0, size=1000))
plt.show()
Explanation: 2.2 Exponential distribution $E(\lambda)$
$$ f(x)=\left\{\begin{array}{ll}
\lambda e^{-\lambda x}, & x>0 \\
0, & \text{otherwise}
\end{array}\right.$$
then $X \sim E(\lambda)$
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(np.random.normal(size=4000))
plt.show()
Explanation: 2.3 Normal distribution $N(\mu,\sigma^2)$
$$ f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{(x-\mu)^2}{2\sigma^2}} $$
then $X \sim N(\mu,\sigma^2)$.
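As an illustrative cross-check (not in the original notebook, and assuming scipy is available alongside the numpy/matplotlib imports above), the sampled histogram can be compared against the analytic density:
from scipy.stats import norm
x_grid = np.linspace(-4, 4, 200)
plt.hist(np.random.normal(size=4000), bins=40, density=True)  # empirical density
plt.plot(x_grid, norm.pdf(x_grid), "r")  # analytic N(0,1) density
plt.show()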
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(np.random.chisquare(3,1000))
plt.show()
Explanation: Sample(small) distribution
$\chi^2$ distribution $\chi^2(n)$
t distribution $t(n)$
F distribution $F(n)$
3.1 $\chi^2$ distribution $\chi^2(n)$
Given $X_i \sim N(0,1)$,
$$ Y = \sum_{i=1}^{n} X_i^2 \sim \chi^2(n) $$
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(np.random.standard_t(2,50))
plt.show()
Explanation: 3.2 t distribution $t(n)$
Given $X \sim N(0,1)$, $Y \sim \chi^2(n)$,
$$ T \hat{=} \frac{X}{\sqrt{\frac{Y}{n}}} \sim t(n)$$
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(np.random.f(4,10,5000))
plt.show()
Explanation: 3.3 F distribution $F(n)$
Given $X \sim \chi^2(m)$, $Y \sim \chi^2(n)$,
$$ F \hat{=} \frac{\frac{X}{m}}{\frac{Y}{n}} \sim F(m,n)$$
End of explanation |
15,545 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Introduction to Bayesian Optimization with Emukit
Overview
Step1: Navigation
What is Bayesian optimization?
The ingredients of Bayesian optimization
Emukit's Bayesian optimization interface
References
1. What is Bayesian optimization?
Given a function $f
Step2: The space object defines the input space $X = [0, 1]$, which in this case is purely continuous and only one dimensional. In a later section we will see how we can also apply Bayesian optimization in other domains that contain discrete or categorical parameters.
Of course in reality, evaluating $f$ on a grid wouldn't be possible, but since the forrester function is a synthetic function we can evaluate it here for visualization purposes.
Step3: <h4 id='bo_intro_init_design'> The Initial Design </h4>
Usually, before we start the actual BO loop we need to gather a few observations such that we can fit the model. This is called the initial design and common strategies are either a predefined grid or sampling points uniformly at random.
Step4: <h4 id='bo_intro_model'> The Model </h4>
Now we can start with the BO loop by first fitting a model on the collected data.
The arguably most popular model for BO is a Gaussian process (GP) which defines a probability distribution across classes of functions, typically smooth, such that each linear finite-dimensional restriction is multivariate Gaussian (Rasmussen and Williams, 2006). GPs are fully parametrized by a mean $\mu(x)$ and a covariance function $k(x,x')$. Without loss of generality $\mu(x)$ is assumed to be zero. The covariance function $k(x,x')$ characterizes the smoothness and other properties of $f$. It is known that the kernel of the process has to be continuous, symmetric and positive definite. A widely used kernel is the squared exponential or RBF kernel
Step5: <h4 id='bo_intro_acquisition'> The Acquisition Function </h4>
In the second step of our BO loop we use our model to compute the acquisition function. Various different acquisition functions exist such as
Step6: <h4 id='bo_intro_eval'> Evaluating the objective function </h4>
To find the next point to evaluate we optimize the acquisition function using a standard gradient descent optimizer.
Step7: Afterwards we evaluate the true objective function and append it to our initial observations.
Step8: After updating the model, you can see that the uncertainty about the true objective function in this region decreases and our model becomes more certain.
Step9: 3. Emukit's Bayesian optimization interface
Of course in practice we don't want to implement all of these steps ourselves. Emukit provides a convenient and flexible interface to apply Bayesian optimization. Below we can see how to run Bayesian optimization on the exact same function for 10 iterations. | Python Code:
### General imports
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors as mcolors
### --- Figure config
LEGEND_SIZE = 15
Explanation: An Introduction to Bayesian Optimization with Emukit
Overview
End of explanation
from emukit.test_functions import forrester_function
from emukit.core.loop.user_function import UserFunctionWrapper
from emukit.core import ContinuousParameter, ParameterSpace
target_function, space = forrester_function()
Explanation: Navigation
What is Bayesian optimization?
The ingredients of Bayesian optimization
Emukit's Bayesian optimization interface
References
1. What is Bayesian optimization?
Given a function $f: \mathbb{X} \rightarrow \mathbb{R}$ which is defined in some constrained input space $\mathbb{X}$, Bayesian optimization (BO) [Shahriari et al, 2016] tries to find the global minimum $x_{\star} \in \mathbb{X}$ of the function $f$ by solving the global optimization problem :
$$ x_{\star} = \operatorname*{arg\:min}_{x \in \mathbb{X}} f(x). $$
Typically these objective functions $f$ are noisy, i.e. $y(x) = f(x) + \epsilon$ with $\epsilon \sim N(0, \sigma_{noise})$, and expensive to evaluate. Additionally we assume that no gradient information is available and hence we treat $f$ as a black-box.
Popular examples for such black-box optimization problems are:
optimizing the hyperparameters of a machine learning algorithm such as for instance a neural network, where each function evaluation requires training and validating the neural network
optimizing the parameters of a controller for a robot
etc.
There are two crucial bits in Bayesian optimization:
A prior probability measure $p(f)$ which captures our prior beliefs on $f$, called the model. Every time we observe new data $D$ the prior will be updated to a 'posterior' $p(f|D)$ using the available data.
An acquisition function $a: \mathbb{X} \rightarrow \mathbb{R}$ which for each point in the input space quantifies the utility of evaluating this point. The central idea of the acquisition function is to trade off the exploration in regions of the input space where the model is still uncertain and the exploitation of the model's confidence about the good regions of the input space.
Given these ingredients, BO essentially iterates the following three steps until it meets a predefined stopping criterion:
1. fit the model $p(f|D_{n})$ on the currently available data $D_{n}$.
2. find the most interesting point to evaluate by $x_{n+1} \in \operatorname*{arg\:max}_{x \in \mathbb{X}} a(x)$
3. evaluate the objective function at $x_{n+1}$, obtain $y_{n+1}$ and add the new observation to the data $D_{n+1} \leftarrow D_{n} \cup \{x_{n+1}, y_{n+1}\}$
2. The ingredients of Bayesian optimization
<h4 id='bo_intro_objective'>The Objective Function and the Input Space</h4>
As an example let's assume we want to optimize the one-dimensional forrester function:
$$
(6x - 2)^2\sin(12x - 4)
$$
which is defined over the interval $x \in [0, 1]$.
Conveniently, this function is already implemented in Emukit. Note that in order to pass it to other Emukit modules we wrap the function by the UserFunctionWrapper interface.
End of explanation
x_plot = np.linspace(space.parameters[0].min, space.parameters[0].max, 200)[:, None]
y_plot = target_function(x_plot)
plt.figure(figsize=(12, 8))
plt.plot(x_plot, y_plot, "k", label="Objective Function")
plt.legend(loc=2, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()
Explanation: The space object defines the input space $X = [0, 1]$, which in this case is purely continuous and only one dimensional. In a later section we will see how we can also apply Bayesian optimization in other domains that contain discrete or categorical parameters.
Of course in reality, evaluating $f$ on a grid wouldn't be possible, but since the forrester function is a synthetic function we can evaluate it here for visualization purposes.
End of explanation
X_init = np.array([[0.2],[0.6], [0.9]])
Y_init = target_function(X_init)
plt.figure(figsize=(12, 8))
plt.plot(X_init, Y_init, "ro", markersize=10, label="Observations")
plt.plot(x_plot, y_plot, "k", label="Objective Function")
plt.legend(loc=2, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()
Explanation: <h4 id='bo_intro_init_design'> The Initial Design </h4>
Usually, before we start the actual BO loop we need to gather a few observations such that we can fit the model. This is called the initial design and common strategies are either a predefined grid or sampling points uniformly at random.
End of explanation
import GPy
from emukit.model_wrappers.gpy_model_wrappers import GPyModelWrapper
gpy_model = GPy.models.GPRegression(X_init, Y_init, GPy.kern.RBF(1, lengthscale=0.08, variance=20), noise_var=1e-10)
emukit_model = GPyModelWrapper(gpy_model)
mu_plot, var_plot = emukit_model.predict(x_plot)
plt.figure(figsize=(12, 8))
plt.plot(X_init, Y_init, "ro", markersize=10, label="Observations")
plt.plot(x_plot, y_plot, "k", label="Objective Function")
plt.plot(x_plot, mu_plot, "C0", label="Model")
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - np.sqrt(var_plot)[:, 0], color="C0", alpha=0.6)
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 2 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 2 * np.sqrt(var_plot)[:, 0], color="C0", alpha=0.4)
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 3 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 3 * np.sqrt(var_plot)[:, 0], color="C0", alpha=0.2)
plt.legend(loc=2, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()
Explanation: <h4 id='bo_intro_model'> The Model </h4>
Now we can start with the BO loop by first fitting a model on the collected data.
The arguably most popular model for BO is a Gaussian process (GP) which defines a probability distribution across classes of functions, typically smooth, such that each linear finite-dimensional restriction is multivariate Gaussian (Rasmussen and Williams, 2006). GPs are fully parametrized by a mean $\mu(x)$ and a covariance function $k(x,x')$. Without loss of generality $\mu(x)$ is assumed to be zero. The covariance function $k(x,x')$ characterizes the smoothness and other properties of $f$. It is known that the kernel of the process has to be continuous, symmetric and positive definite. A widely used kernel is the squared exponential or RBF kernel: $$ k(x,x') = \theta_0 \cdot \exp{ \left(-\frac{\|x-x'\|^2}{\theta_1}\right)} $$ where $\theta_0$ and $\theta_1$ are hyperparameters.
To denote that $f$ is a sample from a GP with mean $\mu$ and covariance $k$ we write
$$f(x) \sim \mathcal{GP}(\mu(x),k(x,x')).$$
For regression tasks, the most important feature of GPs is that process priors are conjugate to the likelihood from finitely many observations $y = (y_1,\dots,y_n)^T$ and $X =\{x_1,...,x_n\}$, $x_i\in \mathcal{X}$ of the form $y_i = f(x_i) + \epsilon_i$ where $\epsilon_i \sim \mathcal{N} (0,\sigma_{noise})$ and we estimate $\sigma_{noise}$ by an additional hyperparameter $\theta_2$.
We obtain the Gaussian posterior $f(x^*)|X, y, \theta \sim \mathcal{N}(\mu(x^*),\sigma^2(x^*))$, where $\mu(x^*)$ and $\sigma^2(x^*)$ have a closed form. See (Rasmussen and Williams, 2006) for more details.
Note that Gaussian processes are also characterized by hyperparameters $\theta = \{\theta_0, \ldots, \theta_k\}$ such as for instance the kernel lengthscales. For simplicity we keep these hyperparameters fixed here. However, we usually either optimize or sample these hyperparameters using the marginal log-likelihood of the GP. Of course we could also use any other model that returns a mean $\mu(x)$ and variance $\sigma^2(x)$ at arbitrary input points $x$, such as Bayesian neural networks or random forests.
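To make the RBF formula above concrete, here is a small self-contained sketch (not part of the original notebook, and independent of GPy/Emukit) that evaluates the kernel matrix for a few 1-D inputs; theta_0 and theta_1 are generic placeholder values standing in for the hyperparameters in the formula:
import numpy as np
def rbf_kernel(x, x_prime, theta_0=1.0, theta_1=0.1):
    # k(x, x') = theta_0 * exp(-||x - x'||^2 / theta_1)
    sq_dists = (x[:, None] - x_prime[None, :]) ** 2
    return theta_0 * np.exp(-sq_dists / theta_1)
X_demo = np.array([0.2, 0.6, 0.9])
print(rbf_kernel(X_demo, X_demo))  # 3x3 covariance matrix, equal to theta_0 on the diagonal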
End of explanation
from emukit.bayesian_optimization.acquisitions import ExpectedImprovement, NegativeLowerConfidenceBound, ProbabilityOfImprovement
ei_acquisition = ExpectedImprovement(emukit_model)
nlcb_acquisition = NegativeLowerConfidenceBound(emukit_model)
pi_acquisition = ProbabilityOfImprovement(emukit_model)
ei_plot = ei_acquisition.evaluate(x_plot)
nlcb_plot = nlcb_acquisition.evaluate(x_plot)
pi_plot = pi_acquisition.evaluate(x_plot)
plt.figure(figsize=(12, 8))
plt.plot(x_plot, (ei_plot - np.min(ei_plot)) / (np.max(ei_plot) - np.min(ei_plot)), "green", label="EI")
plt.plot(x_plot, (nlcb_plot - np.min(nlcb_plot)) / (np.max(nlcb_plot) - np.min(nlcb_plot)), "purple", label="NLCB")
plt.plot(x_plot, (pi_plot - np.min(pi_plot)) / (np.max(pi_plot) - np.min(pi_plot)), "darkorange", label="PI")
plt.legend(loc=1, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()
Explanation: <h4 id='bo_intro_acquisition'> The Acquisition Function </h4>
In the second step of our BO loop we use our model to compute the acquisition function. Various different acquisition functions exist such as :
Probability of Improvement (PI): Given the currently best observed value $y_{\star} \in \operatorname*{arg\:min} \{y_0, \ldots, y_n\}$, PI simply maximizes
$$
a_{PI}(x) = \Phi(\gamma(x))
$$
where $\gamma(x) = \frac{y_{\star} - \mu(x)}{\sigma(x)}$ and $\Phi$ is the CDF of a standard normal distribution [Jones et al., 1998].
Negative Lower Confidence Bound (NLCB): This acquisition function is based on the famous upper confidence bound bandit strategy [Srinivas et al., 2009]. It maximizes the function:
$$
a_{LCB} = - (\mu(x) - \beta \sigma(x))
$$
where $\beta$ is a user-defined hyperparameter that controls exploitation / exploration.
Expected Improvement (EI): Probably the most often used acquisition function is expected improvement [Jones et al., 1998], which computes:
$$
E_{p(f|D)}[\max(y_{\star} - f(x), 0)].
$$
where $y_{\star} \in \operatorname*{arg\:min} \{y_0, \ldots, y_n\}$. Assuming $p(f|D)$ to be a Gaussian, we can compute EI in closed form by:
$$
\sigma(x)\left(\gamma(x)\Phi(\gamma(x)) + \phi(\gamma(x))\right)
$$
here $\gamma(x) = \frac{y_{\star} - \mu(x)}{\sigma(x)}$ and $\Phi$ is the CDF and $\phi$ is the PDF of a standard normal distribution.
All of these acquisition functions rely only on the model and hence are cheap to evaluate. Furthermore we can easily compute the gradients and use a simple gradient optimization method to find $x_{n+1} \in \operatorname*{arg\:max}_{x \in \mathbb{X}} a(x)$.
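As an illustration of the closed-form EI expression above (a standalone sketch, not Emukit's implementation), the formula can be evaluated directly with scipy for a given posterior mean, standard deviation and incumbent value y_star:
import numpy as np
from scipy.stats import norm
def expected_improvement(mu, sigma, y_star):
    gamma = (y_star - mu) / sigma  # gamma(x) = (y_star - mu(x)) / sigma(x)
    return sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma))
print(expected_improvement(mu=np.array([0.5, -0.2, 0.1]),
                           sigma=np.array([1.0, 0.3, 2.0]),
                           y_star=0.0))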
End of explanation
from emukit.core.optimization import GradientAcquisitionOptimizer
optimizer = GradientAcquisitionOptimizer(space)
x_new, _ = optimizer.optimize(ei_acquisition)
plt.figure(figsize=(12, 8))
plt.plot(x_plot, (ei_plot - np.min(ei_plot)) / (np.max(ei_plot) - np.min(ei_plot)), "green", label="EI")
plt.axvline(x_new, color="red", label="x_next", linestyle="--")
plt.legend(loc=1, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()
Explanation: <h4 id='bo_intro_eval'> Evaluating the objective function </h4>
To find the next point to evaluate we optimize the acquisition function using a standard gradient descent optimizer.
End of explanation
y_new = target_function(x_new)
X = np.append(X_init, x_new, axis=0)
Y = np.append(Y_init, y_new, axis=0)
Explanation: Afterwards we evaluate the true objective function and append it to our initial observations.
End of explanation
emukit_model.set_data(X, Y)
mu_plot, var_plot = emukit_model.predict(x_plot)
plt.figure(figsize=(12, 8))
plt.plot(emukit_model.X, emukit_model.Y, "ro", markersize=10, label="Observations")
plt.plot(x_plot, y_plot, "k", label="Objective Function")
plt.plot(x_plot, mu_plot, "C0", label="Model")
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - np.sqrt(var_plot)[:, 0], color="C0", alpha=0.6)
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 2 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 2 * np.sqrt(var_plot)[:, 0], color="C0", alpha=0.4)
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 3 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 3 * np.sqrt(var_plot)[:, 0], color="C0", alpha=0.2)
plt.legend(loc=2, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()
Explanation: After updating the model, you can see that the uncertainty about the true objective function in this region decreases and our model becomes more certain.
End of explanation
from emukit.examples.gp_bayesian_optimization.single_objective_bayesian_optimization import GPBayesianOptimization
bo = GPBayesianOptimization(variables_list=[ContinuousParameter('x1', 0, 1)],
X=X_init, Y=Y_init)
bo.run_optimization(target_function, 10)
mu_plot, var_plot = bo.model.predict(x_plot)
plt.figure(figsize=(12, 8))
plt.plot(bo.loop_state.X, bo.loop_state.Y, "ro", markersize=10, label="Observations")
plt.plot(x_plot, y_plot, "k", label="Objective Function")
plt.plot(x_plot, mu_plot, "C0", label="Model")
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - np.sqrt(var_plot)[:, 0], color="C0", alpha=0.6)
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 2 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 2 * np.sqrt(var_plot)[:, 0], color="C0", alpha=0.4)
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 3 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 3 * np.sqrt(var_plot)[:, 0], color="C0", alpha=0.2)
plt.legend(loc=2, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()
Explanation: 3. Emukit's Bayesian optimization interface
Of course in practice we don't want to implement all of these steps ourselves. Emukit provides a convenient and flexible interface to apply Bayesian optimization. Below we can see how to run Bayesian optimization on the exact same function for 10 iterations.
End of explanation |
15,546 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Data
Step2: Calculate Population Variance
Variance is a measurement of the spread of a data's distribution. The higher the variance, the more "spread out" the data points are. Variance, commonly denoted as $S^{2}$, is calculated like this
Step3: Calculate Population Standard Deviation
Standard deviation is just the square root of the variance. | Python Code:
# Import the math module (used for the square root later)
import math
Explanation: Title: Variance And Standard Deviation
Slug: variance_and_standard_deviation
Summary: Calculating Variance And Standard Deviation in Python.
Date: 2016-02-08 12:00
Category: Statistics
Tags: Basics
Authors: Chris Albon
Preliminary
End of explanation
# Create list of values
data = [3,2,3,4,2,3,5,2,2,33,3,5,2,2,5,6,62,2,2,3,6,6,2,23,3,2,3]
Explanation: Create Data
End of explanation
# Calculate n
n = len(data)
# Calculate the mean
mean = sum(data)/len(data)
# Create a list of all deviations from the mean
all_deviations_from_mean_squared = []
# For each observation in the data
for observation in data:
# Calculate the deviation from the mean
deviation_from_mean = (observation - mean)
# Square it
deviation_from_mean_squared = deviation_from_mean**2
# Add the result to our list
all_deviations_from_mean_squared.append(deviation_from_mean_squared)
# Sum all the squared deviations in our list
sum_of_deviations_from_mean_squared = sum(all_deviations_from_mean_squared)
# Divide by n
population_variance = sum_of_deviations_from_mean_squared/n
# Show variance
population_variance
Explanation: Calculate Population Variance
Variance is a measurement of the spread of a data's distribution. The higher the variance, the more "spread out" the data points are. Variance, commonly denoted as $S^{2}$, is calculated like this:
$$ \text{Population Variance} = S_n^{2} = \frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^{2}$$
$$ \text{Sample Variance} = S_{n-1}^{2} = \frac{1}{n-1}\sum_{i=1}^{n}(x_i-\bar{x})^{2}$$
Where $n$ is the number of observations, $\bar{x}$ is the mean of the observations, and $x_i-\bar{x}$ is an individual observation's deviation from the mean of the data. Note that if we were estimating the variance of a population based on a sample from that population, we should use the second equation, replacing $n$ with $n-1$.
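As a quick cross-check (not in the original post), the same quantities are available from the standard library, using the data list defined above:
import statistics
print(statistics.pvariance(data))  # population variance: divide by n
print(statistics.variance(data))   # sample variance: divide by n-1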
End of explanation
# Find the square root of the population variance
population_standard_deviation = math.sqrt(population_variance)
# Print the population standard deviation
population_standard_deviation
Explanation: Calculate Population Standard Deviation
Standard deviation is just the square root of the variance.
End of explanation |
15,547 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solver Interface
Each cobrapy solver must expose the following API. The solvers all will have their own distinct LP object types, but each can be manipulated by these functions. This API can be used directly when implementing algorithms efficiently on linear programs because it has 2 primary benefits
Step1: Attributes and functions
Each solver has some attributes
Step2: _SUPPORTS_MILP
The presence of this attribute tells cobrapy that the solver supports mixed-integer linear programming
Step3: solve
Model.optimize is a wrapper for each solver's solve function. It takes in a cobra model and returns a solution
Step4: create_problem
This creates the LP object for the solver.
Step5: solve_problem
Solve the LP object and return the solution status
Step6: format_solution
Extract a cobra.Solution object from a solved LP object
Step7: get_objective_value
Extract the objective value from a solved LP object
Step8: get_status
Get the solution status of a solved LP object
Step9: change_variable_objective
Change the objective coefficient of a reaction at a particular index. This does not change any of the other objectives which have already been set. This example will double and then revert the biomass coefficient.
Step10: change_variable_bounds
Change the lower and upper bounds of a reaction at a particular index. This example will set the lower bound of the biomass to an infeasible value, then revert it.
Step11: change_coefficient
Change a coefficient in the stoichiometric matrix. In this example, we will set the entry for ADP in the ATPM reaction to an infeasible value, then reset it.
Step12: set_parameter
Set a solver parameter. Each solver will have its own particular set of unique parameters. However, some have unified names. For example, all solvers should accept "tolerance_feasibility."
Step13: Example with FVA
Consider flux variability analysis (FVA), which requires maximizing and minimizing every reaction with the original biomass value fixed at its optimal value. If we used the cobra Model API in a naive implementation, we would do the following
Step14: Instead, we could use the solver API to do this more efficiently. This is roughly how cobrapy implements FVA. It keeps using the same LP object and repeatedly maximizes and minimizes it. This allows the solver to preserve the basis, and is much faster. The speed increase is even more noticeable the larger the model gets. | Python Code:
import cobra.test
model = cobra.test.create_test_model("textbook")
solver = cobra.solvers.cglpk
Explanation: Solver Interface
Each cobrapy solver must expose the following API. The solvers all will have their own distinct LP object types, but each can be manipulated by these functions. This API can be used directly when implementing algorithms efficiently on linear programs because it has 2 primary benefits:
Avoid the overhead of creating and destroying LP's for each operation
Many solver objects preserve the basis between subsequent LP's, making each subsequent LP solve faster
We will walk through the API with the cglpk solver, which links the cobrapy solver API with GLPK's C API.
End of explanation
solver.solver_name
model.optimize(solver="cglpk")
Explanation: Attributes and functions
Each solver has some attributes:
solver_name
The name of the solver. This is the name which will be used to select the solver in cobrapy functions.
End of explanation
solver._SUPPORTS_MILP
Explanation: _SUPPORTS_MILP
The presence of this attribute tells cobrapy that the solver supports mixed-integer linear programming
End of explanation
solver.solve(model)
Explanation: solve
Model.optimize is a wrapper for each solver's solve function. It takes in a cobra model and returns a solution
End of explanation
lp = solver.create_problem(model, objective_sense="maximize")
lp
Explanation: create_problem
This creates the LP object for the solver.
End of explanation
solver.solve_problem(lp)
Explanation: solve_problem
Solve the LP object and return the solution status
End of explanation
solver.format_solution(lp, model)
Explanation: format_solution
Extract a cobra.Solution object from a solved LP object
End of explanation
solver.get_objective_value(lp)
Explanation: get_objective_value
Extract the objective value from a solved LP object
End of explanation
solver.get_status(lp)
Explanation: get_status
Get the solution status of a solved LP object
End of explanation
model.reactions.index("Biomass_Ecoli_core")
solver.change_variable_objective(lp, 12, 2)
solver.solve_problem(lp)
solver.get_objective_value(lp)
solver.change_variable_objective(lp, 12, 1)
solver.solve_problem(lp)
solver.get_objective_value(lp)
Explanation: change_variable_objective
Change the objective coefficient of a reaction at a particular index. This does not change any of the other objectives which have already been set. This example will double and then revert the biomass coefficient.
End of explanation
solver.change_variable_bounds(lp, 12, 1000, 1000)
solver.solve_problem(lp)
solver.change_variable_bounds(lp, 12, 0, 1000)
solver.solve_problem(lp)
Explanation: change_variable_bounds
Change the lower and upper bounds of a reaction at a particular index. This example will set the lower bound of the biomass to an infeasible value, then revert it.
End of explanation
model.metabolites.index("atp_c")
model.reactions.index("ATPM")
solver.change_coefficient(lp, 16, 10, -10)
solver.solve_problem(lp)
solver.change_coefficient(lp, 16, 10, -1)
solver.solve_problem(lp)
Explanation: change_coefficient
Change a coefficient in the stoichiometric matrix. In this example, we will set the entry for ADP in the ATPM reaction to an infeasible value, then reset it.
End of explanation
solver.set_parameter(lp, "tolerance_feasibility", 1e-9)
solver.set_parameter(lp, "objective_sense", "minimize")
solver.solve_problem(lp)
solver.get_objective_value(lp)
solver.set_parameter(lp, "objective_sense", "maximize")
solver.solve_problem(lp)
solver.get_objective_value(lp)
Explanation: set_parameter
Set a solver parameter. Each solver will have its own particular set of unique parameters. However, some have unified names. For example, all solvers should accept "tolerance_feasibility."
End of explanation
%%time
# work on a copy of the model so the original is not changed
fva_model = model.copy()
# set the lower bound on the objective to be the optimal value
f = fva_model.optimize().f
for objective_reaction, coefficient in fva_model.objective.items():
objective_reaction.lower_bound = coefficient * f
# now maximize and minimize every reaction to find its bounds
fva_result = {}
for r in fva_model.reactions:
fva_model.change_objective(r)
fva_result[r.id] = {}
fva_result[r.id]["maximum"] = fva_model.optimize(objective_sense="maximize").f
fva_result[r.id]["minimum"] = fva_model.optimize(objective_sense="minimize").f
Explanation: Example with FVA
Consider flux variability analysis (FVA), which requires maximizing and minimizing every reaction with the original biomass value fixed at its optimal value. If we used the cobra Model API in a naive implementation, we would do the following:
End of explanation
%%time
# create the LP object
lp = solver.create_problem(model)
# set the lower bound on the objective to be the optimal value
solver.solve_problem(lp)
f = solver.get_objective_value(lp)
for objective_reaction, coefficient in model.objective.items():
objective_index = model.reactions.index(objective_reaction)
# old objective is no longer the objective
solver.change_variable_objective(lp, objective_index, 0.)
solver.change_variable_bounds(lp, objective_index, f * coefficient, objective_reaction.upper_bound)
# now maximize and minimize every reaction to find its bounds
fva_result = {}
for index, r in enumerate(model.reactions):
solver.change_variable_objective(lp, index, 1.)
fva_result[r.id] = {}
solver.solve_problem(lp, objective_sense="maximize")
fva_result[r.id]["maximum"] = solver.get_objective_value(lp)
solver.solve_problem(lp, objective_sense="minimize")
fva_result[r.id]["minimum"] = solver.get_objective_value(lp)
solver.change_variable_objective(lp, index, 0.)
Explanation: Instead, we could use the solver API to do this more efficiently. This is roughly how cobrapy implements FVA. It keeps using the same LP object and repeatedly maximizes and minimizes it. This allows the solver to preserve the basis, and is much faster. The speed increase is even more noticeable the larger the model gets.
End of explanation |
15,548 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic usage
The primary object in Bolt is the Bolt array. We can construct these arrays using familiar operators (like zeros and ones), or from an existing array, and manipulate them like ndarrays whether in local or distributed settings. This notebook highlights the core functionality, much of which hides the underlying implementation (by design!); see other tutorials for more about what's going on under the hood, and for more advanced usage.
Local array
The local Bolt array is just like a NumPy array, and we can construct it without any special arguments.
Step1: The local array is basically a wrapper for a NumPy array, so that we can write applications against the BoltArray and support either local or distributed settings regardless of which we're in. As such, it has the usual NumPy functionality.
Step2: The toarray method always returns the underlying array
Step3: Distributed array
To construct Bolt arrays backed by other engines, like Spark, we just add additional arguments to the constructor. For Spark, we add a SparkContext
Step4: We can also construct from an existing local array
Step5: Array operations
We can use many of the ndarray operations we're familiar with, including aggregations along axes
Step6: indexing with either slices or integer lists
Step7: and reshaping, squeezing, and transposing
Step8: Functional operators
The Bolt array also supports functional-style operations, like map, reduce, and filter. We can use map to apply functions in parallel
Step9: If we map over the 0th axis with the sum function, we are taking the sum of 2 arrays each 3x4
Step10: If we instead map over the 0 and 1st axis, we are taking the sum of 2x3 arrays each of size 4
Step11: And we can chain these functional operations alongside array operations | Python Code:
from bolt import ones
a = ones((2,3,4))
a.shape
Explanation: Basic usage
The primary object in Bolt is the Bolt array. We can construct these arrays using familiar operators (like zeros and ones), or from an existing array, and manipulate them like ndarrays whether in local or distributed settings. This notebook highlights the core functionality, much of which hides the underlying implementation (by design!); see other tutorials for more about what's going on under the hood, and for more advanced usage.
Local array
The local Bolt array is just like a NumPy array, and we can construct it without any special arguments.
End of explanation
a.sum()
a.transpose(2,1,0).shape
Explanation: The local array is basically a wrapper for a NumPy array, so that we can write applications against the BoltArray and support either local or distributed settings regardless of which we're in. As such, it has the usual NumPy functionality.
End of explanation
a.sum(axis=0).toarray()
Explanation: The toarray method always returns the underlying array
End of explanation
b = ones((2, 3, 4), sc)
b.shape
Explanation: Distributed array
To construct Bolt arrays backed by other engines, like Spark, we just add additional arguments to the constructor. For Spark, we add a SparkContext
End of explanation
from numpy import arange
x = arange(2*3*4).reshape(2, 3, 4)
from bolt import array
b = array(x, sc)
b.shape
Explanation: We can also construct from an existing local array
End of explanation
b.sum()
b.sum(axis=0).toarray()
b.max(axis=(0,1)).toarray()
Explanation: Array operations
We can use many of the ndarray operations we're familiar with, including aggregations along axes
End of explanation
b[:,:,0:2].shape
b[0,0:2,0:2].toarray()
b[[0,1],[0,1],[0,1]].toarray()
Explanation: indexing with either slices or integer lists
End of explanation
b.shape
b.reshape(2, 4, 3).shape
b[:,:,0:1].squeeze().shape
b.transpose(2, 1, 0).shape
Explanation: and reshaping, squeezing, and transposing
End of explanation
a = ones((2, 3, 4), sc)
a.map(lambda x: x * 2).toarray()
Explanation: Functional operators
The Bolt array also supports functional-style operations, like map, reduce, and filter. We can use map to apply functions in parallel
End of explanation
a.map(lambda x: x.sum(), axis=(0,)).toarray()
Explanation: If we map over the 0th axis with the sum function, we are taking the sum of 2 arrays each 3x4
End of explanation
a.map(lambda x: x.sum(), axis=(0,1)).toarray()
Explanation: If we instead map over the 0 and 1st axis, we are taking the sum of 2x3 arrays each of size 4
End of explanation
a.map(lambda x: x * 2, axis=(0,)).sum(axis=(0,1)).toarray()
Explanation: And we can chain these functional operations alongside array operations
End of explanation |
15,549 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Manipulating Data in Python
Laila A. Wahedi
Massive Data Institute Postdoctoral Fellow <br>McCourt School of Public Policy
Follow along
Step1: Numbers
Step2: Lists
Declare with
Step3: Zero indexed
Step4: Lists are Mutable
Step5: Dictionaries
key value pairs indexed by key
Declared by {key
Step6: Add key value pairs later using indexing with brackets []
You can store any data type you want
Step7: Sets
Like a list, but
Step8: Tuples
Like an ordered list, but can't be changed
Declare with ()
Step9: Our First Algorithm
Step10: Our First Algorithm
Step11: Adding Choices
<img src="if_algorithm.png">
Writing Instructions Python Can Understand
Step12: Compare values with
Step13: Booleans and Indicators
booleans are indicators of True or False
In Python3, True = 1, and False = 0
Step14: None is a null value
None evaluates to false in an if statement.
If statements
Step15: Play around yourself
Step16: Repetition
<img src="loops.png">
While loops
Do something in block of code until condition met
Make sure you change the condition in every loop, or it will go forever
Step17: For loops
repeat block of code for
Step18: Put them together
Step19: Learn More
Step20: Pandas Data Frames
Using Documentation
Pandas website
Stack Overflow
Copy errors into google
Look up syntax differences with R
Load Data from a comma separated file
Start by googling it
Step21: Look at the data
Step22: Rename things and adjust values
Step23: Set a useful index
Step24: Slicing
Get specific values from the dataframe.
Pandas has several slice operators.
iloc can be used to index the row by ordered integer. i.e. first row is 0, second row is 1, etc. Use this option sparingly. Better practice to use the index you have created.
loc uses the named index and columns.
Index using [row, columns]
For multiple columns, put your column names in a list
Use
Step25: Look at the data
Step26: Slicing Using Conditionals
Put conditionals in parentheses
Stack multiple conditionals using
Step27: Handling Missing Values
First let's make some
Step28: Handling Missing Values
We could index by them
Step29: Handling Missing Values
We could fill them
Step30: Making New Columns
Assign values to a new column based on other columns
Step31: Reindexing
Step32: Set a multi-index
Step33: Using the new index, make a new dataframe
Note the new slicing operator for multi-index
Step34: Warning
Step35: What happened?
copied_df changed when little_df changed.
Let's fix that
Step36: Saving
Unlike R or Stata, can't just save your workspace
Save as a csv now that we have the data we want
Pickle a variable to recreate it without having to reset indexes, etc.
Step37: Next time
Step38: Joins
Same as SQL, inner and outer
Step39: Concatenate
Stack dataframes on top of one another
Stack dataframes beside one another | Python Code:
my_string = 'Hello World'
print(my_string)
Explanation: Manipulating Data in Python
Laila A. Wahedi
Massive Data Institute Postdoctoral Fellow <br>McCourt School of Public Policy
Follow along: Wahedi.us, Current Presentation
Installing packages:
On a Mac:
Open terminal
On Windows:
Type cmd into the start menu
What is this?
Like an explorer window. The text at the start of the line tells you where you are.
To see what's there, type:
On Mac: ls (That's an L, not an i)
On Windows: dir
Installing Packages:
Install Packages
Install some packages so they are done by the time we need them.
Type:
pip install pandas
When that's done, type:
pip install matplotlib
pip install pickle
pip install statsmodels
Note: if you installed the Continuum Python distribution, you may already have some of these packages installed.
What just happened?
Pip is a download manager, it automatically looks online to find the package and installs it on your computer
Similar to the CRAN Mirror
Bonus: if you followed the link I sent to install Anaconda, you have the conda download manager too.
conda install works similarly to pip, and also works on many R packages when used with Jupyter notebooks.
Opening Jupyter
Open up another terminal/command prompt while your packages download
Navigate to the directory you want your script to be in
Type ls to see what folders are in the current folder
Type cd .. to go one folder up
Type cd folder_name to go to a sub-folder
Type mkdir folder_name to make a folder
Type:
jupyter notebook
What happened?
A browser window popped up
Address something along the lines of localhost:8888/tree
Address is listed in the terminal/command prompt
Go to this address from any browser to open your tree
Your Jupyter Tree
You should be in the folder you navigated to in terminal/cmd
You can make folders here too
Top right, select new notebook, python3
Open this notebook:
Move the file into the directory for your Jupyter Tree
Select the file from the Jupyter Tree
A new tab should open in your browser.
Add new cells to take notes as you go along.
Change the type of note cells to Markdown
Some Python Basics
What is code?
Set of clear instructions to the computer
Start with basic set of building blocks
Put those building blocks together
Goal of programming:
Break your task into building blocks
Put building blocks together into an <b> algorithm </b>
Our tasks:
Move and manipulate data using these building blocks.
Building Blocks
<img src="building_blocks.png">
* Ball and urn example
Our First Algorithm
<img src="algorithm1.png">
Basic Data Structures
Strings
Text variables
Declare with either '', or ""
End of explanation
my_int = 2
my_float = 2.2
new_float = my_int+my_float
print(new_float)
type(new_float)
Explanation: Numbers:
ints: integers
floats: numbers with decimals
Combine them to get a new float
End of explanation
my_list = [0,1,2,3,4]
Explanation: Lists
Declare with: []
Ordered list of objects
End of explanation
print(my_list[1])
Explanation: Zero indexed
End of explanation
my_list[2]='hello'
print(my_list)
Explanation: Lists are Mutable:
End of explanation
my_dictionary = {'apple':4,
'pear':'yum'}
print(my_dictionary['apple'])
Explanation: Dictionaries
key value pairs indexed by key
Declared by {key:value}
End of explanation
my_dictionary['numbers'] = my_list
print(my_dictionary['numbers'])
Explanation: Add key value pairs later using indexing with brackets []
You can store any data type you want
End of explanation
my_set = {'thing1','thing2','cat in hat','thing1', 4,4}
print(my_set)
Explanation: Sets
Like a list, but:
Contains unique values
Unordered
Declare with {}
End of explanation
my_tuple = (1,3,2)
print(my_tuple)
Explanation: Tuples
Like an ordered list, but can't be changed
Declare with ()
End of explanation
# Declare Data
my_data = 'hello '
my_other_data = 'world'
#Manipulate it
manipulated_data = my_data+my_other_data
#Output it:
print(manipulated_data)
Explanation: Our First Algorithm: Strings
<img src="algorithm1.png">
End of explanation
# Declare Data
my_data = 1
my_other_data = 5
#Manipulate it
manipulated_data = my_data/my_other_data
#Output it:
print(manipulated_data)
Explanation: Our First Algorithm: Numbers
<img src="algorithm1.png">
End of explanation
my_variable = 5
print(my_variable)
print(my_variable == 5)
Explanation: Adding Choices
<img src="if_algorithm.png">
Writing Instructions Python Can Understand:
Whitespace matters
tabs indicate a block of code within an if statement or loop
: marks the start of the block
\ for a linebreak mid-line
Line breaks allowed inside (), [], {}
If statements
= to assign
Like <- in R
== to evaluate
End of explanation
print(my_variable > 6)
print(my_variable in [1,4,7])
Explanation: Compare values with:
<, <=, >, >=, ==, !=, in, not in,
End of explanation
True + True
Explanation: Booleans and Indicators
booleans are indicators of True or False
In Python3, True = 1, and False = 0
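Because booleans behave like the integers 1 and 0, they can be summed to count matches (an illustrative aside, not part of the original slides):
print(sum([True, False, True, True]))       # 3
print(sum(x > 2 for x in [1, 2, 3, 4, 5]))  # counts how many values exceed 2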
End of explanation
my_bool = 'ice cream'
if my_bool == 'ice cream':
print('yay')
elif my_bool == 'cake':
print('woo!')
else:
print('Woe and great tragedy!')
Explanation: None is a null value
None evaluates to false in an if statement.
If statements:
Used to change behavior depending on conditionals
Declare the statement
Declare the conditional action within a code block, or indentation:
Declare alternative actions in else
Stack conditionals with elif
End of explanation
check = True
# check = False
# check = None
# check = 'monkey'
# check = 0
# check = 10
print('Check is:', check)
if check == 'monkey':
print('banana')
elif check:
print('yes')
else:
print('no')
if 1 not in [1,2,3]:
print('not not in')
if 1 in [1,2,3]:
print('in')
Explanation: Play around yourself:
End of explanation
n = 0
while n < 5:
print(n)
n= n+1
Explanation: Repetition
<img src="loops.png">
While loops
Do something in block of code until condition met
Make sure you change the condition in every loop, or it will go forever
End of explanation
print('use a range:')
for i in range(3):
print(i)
print('use a range slice:')
for i in range(3,6):
print(i)
print('iterate through a list:')
for i in my_list:
print(i)
Explanation: For loops
repeat block of code for:
a certain number of iterations
For every element in a list
range() gives you an iterator
End of explanation
my_list = [0,1,'cat',None,'frog',3]
animals = []
nums = []
for i in my_list:
if type(i)==str:
animals.append(i)
elif type(i)==int:
nums.append(i)
else:
pass
print(animals)
print(nums)
Explanation: Put them together:
Play around on your own
End of explanation
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# from ggplot import *
import pickle
import statsmodels.api as sm
Explanation: Learn More:
These are the basic building blocks for scripting. To learn more about how to put them together to greater effect:
Take an introductory computer science course online
Check out GU Women Coders.
Can't make it in person? Their website has lots of great resources.
http://guwecode.georgetown.domains/
Import packages
Like library(package) in R
Pandas is a dataframe data structure like dataframes in R
matplotlib is a plotting package similar to plotting in Matlab
you can view plots inline in pandas notebooks with the inline command
ggplot is a plotting package built on matplotlib like ggplot2 in R
Pickle lets you save your workspace
Statsmodels contains many of the statistical models you know and love
End of explanation
baad_covars = pd.read_csv('BAAD_1_Lethality_Data.tab',sep='\t')
Explanation: Pandas Data Frames
Using Documentation
Pandas website
Stack Overflow
Copy errors into google
Look up syntax differences with R
Load Data from a comma separated file
Start by googling it: http://lmgtfy.com/?q=pandas+load+csv
We will use the Big Allied and Dangerous Data from START
https://dataverse.harvard.edu/file.xhtml?fileId=2298519&version=RELEASED&version=.0
End of explanation
baad_covars.head()
Explanation: Look at the data
End of explanation
baad_covars.rename(columns = {'cowmastercountry':'country',
'masterccode':'ccode',
'mastertccode3606':'group_code',
'fatalities19982005':'fatalities'},
inplace = True)
baad_covars.replace({'country':{'United States of America':'US'}},
inplace = True)
print('Dimensions: ',baad_covars.shape)
baad_covars.head()
Explanation: Rename things and adjust values
End of explanation
#Set the index
baad_covars.set_index(['group_code'],inplace = True)
baad_covars.head()
Explanation: Set a useful index
End of explanation
baad_covars.loc[:, 'fatalities'].head()
Explanation: Slicing
Get specific values from the dataframe.
Pandas has several slice operators.
iloc can be used to index the row by ordered integer. i.e. first row is 0, second row is 1, etc. Use this option sparingly. Better practice to use the index you have created.
loc uses the named index and columns.
Index using [row, columns]
For multiple columns, put your column names in a list
Use : for all values
Notice that the output keeps the index names.
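A couple of extra slicing patterns, shown as an illustrative aside (they assume the baad_covars dataframe loaded above):
baad_covars.loc[:, ['fatalities', 'degree']].head()  # several columns by name
baad_covars.iloc[0:3, 0:2]  # first three rows and first two columns, by position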
End of explanation
baad_covars.loc[:,['OrgAge']].plot.density()
print(baad_covars.loc[:,['OrgAge']].mean())
baad_covars.loc[:,['fatalities']].plot.hist(bins=20)
Explanation: Look at the data
End of explanation
baad_covars.loc[(baad_covars.fatalities>1) | (baad_covars.degree>=1),
['group','country']].head()
Explanation: Slicing Using Conditionals
Put conditionals in parentheses
Stack multiple conditionals using:
& when both conditions must always apply
| when at least one condition must apply
End of explanation
baad_covars.loc[(baad_covars.fatalities>1) | (baad_covars.degree>=1),
['terrStrong']] = None
baad_covars.loc[(baad_covars.fatalities>1) | (baad_covars.degree>=1),
['terrStrong']].head()
Explanation: Handling Missing Values
First let's make some:
End of explanation
baad_covars.loc[baad_covars.terrStrong.isnull(),'terrStrong'].head()
Explanation: Handling Missing Values
We could index by them
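Counting the missing values is often useful too (an illustrative aside, not in the original deck):
print(baad_covars.terrStrong.isnull().sum())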
End of explanation
baad_covars['terrStrong'] = baad_covars.terrStrong.fillna(-77)
baad_covars.terrStrong.head()
Explanation: Handling Missing Values
We could fill them:
End of explanation
baad_covars['big'] = 0
baad_covars.loc[(baad_covars.fatalities>1) |
(baad_covars.degree>=1),
'big']=1
baad_covars.big.head()
Explanation: Making New Columns
Assign values to a new column based on other columns:
End of explanation
baad_covars.reset_index(inplace=True)
baad_covars.head()
Explanation: Reindexing: Pop the index out without losing it
End of explanation
baad_covars.set_index(['group','country'],inplace = True)
baad_covars.head()
Explanation: Set a multi-index
End of explanation
indonesia_grps = baad_covars.xs('Indonesia',level = 'country',drop_level=False)
indonesia_grps = indonesia_grps.loc[indonesia_grps.fatalities>=1,['degree','ContainRelig',
'ContainEthno','terrStrong',
'ordsize','OrgAge']]
indonesia_grps.head()
Explanation: Using the new index, make a new dataframe
Note the new slicing operator for multi-index
End of explanation
little_df = pd.DataFrame([1,2,3,4,5],columns = ['A'])
little_df['B']=[0,1,0,1,1]
copied_df = little_df
print('before:')
print(copied_df)
little_df.loc[little_df.A == 3,'B'] = 7
print('after')
copied_df
Explanation: Warning: Making copies
If you set a variable as equal to an object, Python creates a reference rather than copying the whole object. More efficient, unless you really want to make a copy
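A quick way to see the aliasing (illustrative aside): the is operator checks whether two names point at the same object.
print(copied_df is little_df)         # True: both names refer to one DataFrame
print(little_df.copy() is little_df)  # False: .copy() creates a separate object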
End of explanation
import copy
little_df = pd.DataFrame([1,2,3,4,5],columns = ['A'])
little_df['B']=[0,1,0,1,1]
copied_df = little_df.copy()
print('before:')
print(copied_df)
little_df.loc[little_df.A == 3,'B'] = 7
print('after')
copied_df
Explanation: What happened?
copied_df changed when little_df changed.
Let's fix that: import "copy"
End of explanation
indonesia_grps.to_csv('indonesia.csv')
pickle.dump(indonesia_grps, open('indonesia.p','wb'))
indonesia_grps = pickle.load(open('indonesia.p','rb'))
Explanation: Saving
Unlike R or Stata, can't just save your workspace
Save as a csv now that we have the data we want
Pickle a variable to recreate it without having to reset indexes, etc.
End of explanation
C = pd.DataFrame(['apple','orange','grape','pear','banana'],
columns = ['C'],
index = [2,4,3,0,1])
little_df['C'] = C
little_df
Explanation: Next time:
Scrape Data from the internet
Clean the data
Merge that data into our data
Run a basic stats and ML model
Until then, here's some reference code on merging your data sets
Merging and Concatenating
Merges automatically if shared index
End of explanation
C = pd.DataFrame(['apple','orange','grape','apple'],
columns = ['C'],
index = [2,4,3,'a'])
C['cuts']=['slices','wedges','whole','spirals']
print('C:')
print(C)
print('Inner: Intersection')
print(little_df.merge(right=C,
how='inner',
on=None,
left_index = True,
right_index =True))
print('Outer: Keep all rows')
print(little_df.merge(right=C,
how='outer',
on=None,
left_index = True,
right_index =True))
print('Left: Keep little_df')
print(little_df.merge(right=C,
how='left',
on=None,
left_index = True,
right_index =True))
print('Right: Keep C')
print(little_df.merge(right=C,
how='right',
on=None,
left_index = True,
right_index =True))
print('Outer, merging on column instead of index')
print(little_df.merge(right=C,
how='outer',
on='C',
left_index = True,
right_index =True))
Explanation: Joins
Same as SQL, inner and outer
End of explanation
add_df = pd.DataFrame({'A':[6],'B':[7],'C':'peach'},index= ['p'])
little_df = pd.concat([little_df,add_df])
little_df
Explanation: Concatenate
Stack dataframes on top of one another
Stack dataframes beside one another
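Stacking beside one another uses axis=1, which aligns rows on the index (an illustrative aside using the little_df and C frames defined above):
pd.concat([little_df, C], axis=1)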
End of explanation |
15,550 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolution speed tests
This Notebook compares the convolution speeds of Eniric to PyAstronomy.
Eniric's rotational convolution is faster than PyAstronomy's "slow" convolution but it is significantly slower than PyAstronomy's "fast" convolutions that use a fixed kernel (valid only for a small wavelength range) and require a uniform wavelength step.
Eniric allows for a variable step wavelength array, with a unique kernel for each pixel (hence the longer time).
Recalling a cached result is faster than PyAstronomy's convolutions.
Requires PyAstronomy
pip install PyAstronomy
Step1: Load data
Select test spectra; flux1 is an M0 spectrum, flux2 is an M9 spectrum.
Step2: Timing Convolutions
Times vary due to system hardware performance.
Rotational convolution
Step3: The rotational convolution in eniric is ~10x faster than
the precise version in PyAstronomy and does not require equal wavelength steps.
It is ~1000x slower than the fast rotational convolution that uses a fixed kernel and is only valid for short regions.
Resolution convolution
Step4: Resolution convolution is around 500x slower
although it can handle uneven wavelength spacing and has a variable kernel.
Compare the results of convolution
Eniric gives a comparable rotational convolution to PyAstronomy's slow version. The PyAstronomy Fast convolution gives different results, which are maximum at the edges.
PyAstronomy also has edge effects which are ignored using [10
Step5: PyAstronomy slow and eniric are identical (within 1e-13%) (except for edge effects).
PyAstronomy Fast and eniric are different by up to 1.5% | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import PyAstronomy.pyasl as pyasl
import eniric
from eniric import config
from eniric.broaden import rotational_convolution, resolution_convolution
from eniric.utilities import band_limits, load_aces_spectrum, wav_selector
from scripts.phoenix_precision import convolve_and_resample
# config.cache["location"] = None # Disable caching for these tests
config.cache["location"] = ".joblib" # Enable caching
Explanation: Convolution speed tests
This Notebook compares the convolution speeds of Eniric to PyAstronomy.
Eniric's rotational convolution is faster than PyAstronomy's "slow" convolution but it is significantly slower than PyAstronomy's "fast" convolutions that use a fixed kernel (valid only for a small wavelength range) and require a uniform wavelength step.
Eniric allows for a variable step wavelength array, with a unique kernel for each pixel (hence the longer time).
Recalling a cached result is faster than PyAstronomy's convolutions.
Requires PyAstronomy
pip install PyAstronomy
End of explanation
wav1, flux1 = load_aces_spectrum([3900, 4.5, 0.0, 0])
# wav2, flux2 = load_aces_spectrum([2600, 4.5, 0.0, 0])
wav1, flux1 = wav_selector(wav1, flux1, *band_limits("K"))
# wav2, flux2 = wav_selector(wav2, flux2, *band_limits("K"))
# PyAstronomy requires evenly spaced wavelengths (eniric does not)
wav = np.linspace(wav1[0], wav1[-1], len(wav1))
flux1 = np.interp(wav, wav1, flux1)
#flux2 = np.interp(wav, wav2, flux2)
# Convolution settings
epsilon = 0.6
vsini = 10.0
R = 40000
Explanation: Load data
Select test spectra; flux1 is an M0 spectrum, flux2 is an M9 spectrum.
End of explanation
%%time
rot_fast = pyasl.fastRotBroad(wav, flux1, epsilon, vsini)
## Wall time: 15.2 ms
%%time
rot_slow = pyasl.rotBroad(wav, flux1, epsilon, vsini)
## Wall time: 36 s
# Convolution settings
epsilon = 0.6
vsini = 10.0
R = 40000
%%time
# After caching
eniric_rot = rotational_convolution(wav, wav, flux1, vsini, epsilon=epsilon)
## Wall time: 4.2 ms
Explanation: Timing Convolutions
Times vary due to system hardware performance.
Rotational convolution
End of explanation
%%time
res_fast = pyasl.instrBroadGaussFast(wav, flux1, R, maxsig=5)
## Wall time: 19.2 ms
%%time
# Before caching
eniric_res = resolution_convolution(
wavelength=wav,
extended_wav=wav,
extended_flux=flux1,
R=R,
fwhm_lim=5,
num_procs=4,
normalize=True,
)
## Wall time: 3.07 s
%%time
# Same calculation with cached result.
eniric_res = resolution_convolution(
wavelength=wav,
extended_wav=wav,
extended_flux=flux1,
R=R,
fwhm_lim=5,
normalize=True,
)
## Wall time: 8.9 ms
Explanation: The rotational convolution in eniric is ~10x faster than
the precise version in PyAstronomy and does not require equal wavelength steps.
It is ~1000x slower than the fast rotational convolution, which uses a fixed kernel and is only valid for short regions.
Resolution convolution
End of explanation
plt.plot(wav, flux1, label="Original Flux")
plt.plot(wav[100:-100], eniric_res[100:-100], "-.", label="Eniric")
plt.plot(wav[100:-100], res_fast[100:-100], "--", label="PyAstronomy Fast")
plt.xlim([2.116, 2.118])
plt.xlabel("wavelength")
plt.title("Resolution convolution R={}".format(R))
plt.legend()
plt.show()
plt.plot(wav, flux1, label="Original")
plt.plot(wav, rot_fast, ":", label="PyAstronomy Fast")
plt.plot(wav, rot_slow, "--", label="PyAstronomy Slow")
plt.plot(wav, eniric_rot, "-.", label="Eniric")
plt.xlabel("Wavelength")
plt.title("Rotational Convolution vsini={}".format(vsini))
plt.xlim((2.116, 2.118))
plt.legend()
plt.show()
plt.plot(
wav[100:-100],
(eniric_rot[100:-100] - rot_fast[100:-100]) / eniric_rot[100:-100],
label="Eniric - PyA Fast",
)
plt.plot(
wav[100:-100],
(eniric_rot[100:-100] - rot_slow[100:-100]) / eniric_rot[100:-100],
"--",
label="Eniric - PyA Slow",
)
plt.xlabel("Wavelength")
plt.ylabel("Fractional difference")
plt.title("Rotational Convolution Differenes")
# plt.xlim((2.3, 2.31))
plt.legend()
plt.show()
plt.plot(
wav[50:-50],
(eniric_rot[50:-50] - rot_slow[50:-50]) / eniric_rot[50:-50],
"--",
label="Eniric - PyA Slow",
)
plt.xlabel("Wavelength")
plt.ylabel("Fractional difference")
plt.title("Rotational Convolution Differenes")
plt.legend()
plt.show()
assert np.allclose(eniric_rot[50:-50], rot_slow[50:-50])
Explanation: Resolution convolution is around 500x slower,
although it can handle uneven wavelength spacing and has a variable kernel.
Compare the results of convolution
Eniric gives a comparable rotational convolution to PyAstronomy's slow version. The PyAstronomy Fast convolution gives different results, which are largest at the edges.
PyAstronomy also has edge effects which are ignored using [10:-10] slicing.
End of explanation
plt.plot(
wav[100:-100],
(eniric_res[100:-100] - res_fast[100:-100]) / eniric_res[100:-100],
label="(Eniric-PyA Fast)/Eniric",
)
plt.xlabel("Wavelength")
plt.ylabel("Fractional difference")
plt.title("Resolution Convolution Differenes, R={}".format(R))
# plt.xlim((2.3, 2.31))
plt.legend()
plt.show()
Explanation: PyAstronomy slow and eniric are identical (within 1e-13%) (except for edge effects).
PyAstronomy Fast and eniric are different by up to 1.5%
End of explanation |
15,551 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Microscopic 3-Temperature-Model
Here we adapt the NTM from the last example to allow for calculations of the magnetization within the microscopic 3-temperature-model as proposed by Koopmans et al., Nature Mater 9, 259–265 (2010).
Step1: Structure
Refer to the structure-example for more details.
Step2: Initialize Heat and the Excitation
Step3: Calculate Heat Diffusion
The init_temp sets the magnetization in the 3rd subsystem to 1 for CoNi and 0 for Si. | Python Code:
import udkm1Dsim as ud
u = ud.u # import the pint unit registry from udkm1Dsim
import scipy.constants as constants
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
u.setup_matplotlib() # use matplotlib with pint units
Explanation: Microscopic 3-Temperature-Model
Here we adapt the NTM from the last example to allow for calculations of the magnetization within the microscopic 3-temperature-model as proposed by:
Koopmans, B., Malinowski, G., Dalla Longa, F. et al.
Explaining the paradoxical diversity of ultrafast laser-induced demagnetization.
Nature Mater 9, 259–265 (2010).
We need to solve the following coupled differential equations:
\begin{align}
c_e(T_e) \rho \frac{\partial T_e}{\partial t} &= \frac{\partial}{\partial z} \left( k_e(T_e) \frac{\partial T_e}{\partial z} \right) - G_{ep}(T_e-T_p) + S(z, t) \\
c_p(T_p) \rho \frac{\partial T_p}{\partial t} &= \frac{\partial}{\partial z} \left( k_p(T_p) \frac{\partial T_p}{\partial z} \right) + G_{ep}(T_e-T_p) \\
\frac{\partial m}{\partial t} &= R m \frac{T_p}{T_C} \left( 1 - m \coth\left( \frac{m T_C}{T_e} \right) \right)
\end{align}
We treat the temperature of the 3rd subsystem as magnetization $m$.
For that we have to set its heat_capacity to $1/\rho$ and thermal_conductivity to zero.
We put the complete right-hand side of the last equation in the sub_system_coupling term for the 3rd subsystem.
Here, we need to rewrite the hyperbolic cotangent in Python as
$$\coth(x) = 1 + \frac{2}{e^{2x} - 1}$$
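As a quick numerical sanity check of that identity (a standalone sketch, independent of the udkm1Dsim API):
import numpy as np
x = np.linspace(0.1, 5.0, 50)
coth_direct = np.cosh(x) / np.sinh(x)
coth_rewritten = 1 + 2 / (np.exp(2 * x) - 1)
assert np.allclose(coth_direct, coth_rewritten)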
The values of the used parameters are not experimentally verified.
Setup
Do all necessary imports and settings.
End of explanation
Co = ud.Atom('Co')
Ni = ud.Atom('Ni')
CoNi = ud.AtomMixed('CoNi')
CoNi.add_atom(Co, 0.5)
CoNi.add_atom(Ni, 0.5)
Si = ud.Atom('Si')
prop_CoNi = {}
prop_CoNi['heat_capacity'] = ['0.1*T',
532*u.J/u.kg/u.K,
1/7000
]
prop_CoNi['therm_cond'] = [20*u.W/(u.m*u.K),
80*u.W/(u.m*u.K),
0]
R = 25.3/1e-12
Tc = 1388
g = 4.0e18
prop_CoNi['sub_system_coupling'] = ['-{:f}*(T_0-T_1)'.format(g),
'{:f}*(T_0-T_1)'.format(g),
'{0:f}*T_2*T_1/{1:f}*(1-T_2* (1 + 2/(exp(2*T_2*{1:f}/T_0) - 1) ))'.format(R, Tc)
]
prop_CoNi['lin_therm_exp'] = [0, 11.8e-6, 0]
prop_CoNi['sound_vel'] = 4.910*u.nm/u.ps
prop_CoNi['opt_ref_index'] = 2.9174+3.3545j
layer_CoNi = ud.AmorphousLayer('CoNi', 'CoNi amorphous', thickness=1*u.nm,
density=7000*u.kg/u.m**3, atom=CoNi, **prop_CoNi)
prop_Si = {}
prop_Si['heat_capacity'] = [100*u.J/u.kg/u.K, 603*u.J/u.kg/u.K, 1]
prop_Si['therm_cond'] = [0, 100*u.W/(u.m*u.K), 0]
prop_Si['sub_system_coupling'] = [0, 0, 0]
prop_Si['lin_therm_exp'] = [0, 2.6e-6, 0]
prop_Si['sound_vel'] = 8.433*u.nm/u.ps
prop_Si['opt_ref_index'] = 3.6941+0.0065435j
layer_Si = ud.AmorphousLayer('Si', "Si amorphous", thickness=1*u.nm,
density=2336*u.kg/u.m**3, atom=Si, **prop_Si)
S = ud.Structure('CoNi')
S.add_sub_structure(layer_CoNi, 20)
S.add_sub_structure(layer_Si, 50)
S.visualize()
Explanation: Structure
Refer to the structure-example for more details.
End of explanation
h = ud.Heat(S, True)
h.save_data = False
h.disp_messages = True
h.excitation = {'fluence': [30]*u.mJ/u.cm**2,
'delay_pump': [0]*u.ps,
'pulse_width': [0.05]*u.ps,
'multilayer_absorption': True,
'wavelength': 800*u.nm,
'theta': 45*u.deg}
# temporal and spatial grid
delays = np.r_[-1:5:0.005]*u.ps
_, _, distances = S.get_distances_of_layers()
Explanation: Initialize Heat and the Excitation
End of explanation
# enable heat diffusion
h.heat_diffusion = True
# set the boundary conditions
h.boundary_conditions = {'top_type': 'isolator', 'bottom_type': 'isolator'}
# The resulting temperature profile is calculated in one line:
init_temp = np.ones([S.get_number_of_layers(), 3])
init_temp[:, 0] = 300
init_temp[:, 1] = 300
init_temp[20:, 2] = 0
temp_map, delta_temp = h.get_temp_map(delays, init_temp)
plt.figure(figsize=[6, 12])
plt.subplot(3, 1, 1)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map[:, :, 0], shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map Electrons')
plt.subplot(3, 1, 2)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map[:, :, 1], shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map Phonons')
plt.subplot(3, 1, 3)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map[:, :, 2], shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Magnetization')
plt.tight_layout()
plt.show()
plt.figure(figsize=[6, 8])
plt.subplot(2, 1, 1)
plt.plot(delays.to('ps'), np.mean(temp_map[:, S.get_all_positions_per_unique_layer()['CoNi'], 0], 1), label='electrons')
plt.plot(delays.to('ps'), np.mean(temp_map[:, S.get_all_positions_per_unique_layer()['CoNi'], 1], 1), label='phonons')
plt.ylabel('Temperature [K]')
plt.xlabel('Delay [ps]')
plt.legend()
plt.title('M3TM Koopmans et. al')
plt.subplot(2, 1, 2)
plt.plot(delays.to('ps'), np.mean(temp_map[:, S.get_all_positions_per_unique_layer()['CoNi'], 2], 1), label='M')
plt.ylabel('Magnetization')
plt.xlabel('Delay [ps]')
plt.legend()
plt.show()
Explanation: Calculate Heat Diffusion
The init_temp sets the magnetization in the 3rd subsystem to 1 for CoNi and 0 for Si.
End of explanation |
15,552 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
REST API: AEMET Open Data
Step1: Getting a list of all the stations
Step2: Getting data for a specific station
Station 1111X, located in Santander
Step3: Cleaning the data
You can see that the data returned by the API comes as strings with ',' as the decimal separator, so the most convenient thing is to convert them
Step4: Making a larger request
The API limits the number of days it can return in a single request, so if we want data for a long period of time we need to make several requests.
Writing the start and end dates by hand is not an option, so we will use the datetime module to generate groups of 30 consecutive days
Step5: Generating the list of dates we will use for the requests
Step6: Combining the data into a DataFrame
Saving and reading CSVs
Step7: Plotting the data with Bokeh | Python Code:
import requests
# Load the API key
api_key = open("../../apikey-aemet.txt").read().rstrip()
querystring = {"api_key": api_key}
Explanation: REST API: AEMET Open Data
In this notebook we look at another example of using the AEMET Open Data API. In this case we retrieve parameters measured by a weather station.
End of explanation
# Get information about all the available stations
url = "https://opendata.aemet.es/opendata/api/valores/climatologicos/inventarioestaciones/todasestaciones"
# Make the request
r = requests.get(url, params=querystring, verify=False)
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
# Get the link from which we will download the data
data_url = r.json()['datos']
r_data = requests.get(data_url, params=querystring, verify=False)
# Inspect the content
stations = r_data.json()
stations
Explanation: Getting a list of all the stations
End of explanation
url = ("https://opendata.aemet.es/opendata/api/valores/climatologicos/diarios/datos"
"/fechaini/2017-01-01T00:00:00UTC/fechafin/2017-01-31T23:59:59UTC/estacion/1111X")
r = requests.get(url, params=querystring, verify=False)
if r.status_code == requests.codes.OK:
data_url = r.json()['datos']
r_data = requests.get(data_url, params=querystring, verify=False)
raw_data = r_data.json()
raw_data
Explanation: Getting data for a specific station
Station 1111X, located in Santander
End of explanation
def parse_data(raw_data):
data = []
for d in raw_data:
        d = dict(d)  # This copies the record dict so the original raw data is not modified
for param in ['prec', 'presMax', 'presMin', 'racha', 'sol', 'tmax', 'tmed', 'tmin', 'velmedia', 'altitud', 'dir']:
try:
d[param] = float(d[param].replace(',', '.'))
            except (AttributeError, ValueError, TypeError):
d[param] = None
data.append(d)
return data
data = parse_data(raw_data)
data
Explanation: Cleaning the data
You can see that the data returned by the API comes as strings with ',' as the decimal separator, so the most convenient thing is to convert them:
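The core of that conversion is just a string replacement before casting to float; for example (illustrative value only):
raw_value = "1013,4"
value = float(raw_value.replace(",", "."))  # -> 1013.4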
End of explanation
import datetime
Explanation: Making a larger request
The API limits the number of days it can return in a single request, so if we want data for a long period of time we need to make several requests.
Writing the start and end dates by hand is not an option, so we will use the datetime module to generate groups of 30 consecutive days:
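As a side note, pandas can produce the same kind of 30-day boundaries in one line; this is only an alternative sketch, the notebook keeps using the pure-datetime version below:
import pandas as pd
edges = pd.date_range(start="2015-01-01", end="2015-04-01", freq="30D").to_pydatetime().tolist()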
End of explanation
start_date = datetime.datetime(2015, 1, 1, 0, 0, 0)
final_date = datetime.datetime(2015, 4, 1, 0, 0, 0)
step = datetime.timedelta(days=30)
chunks = [start_date]
next_date = start_date + step
while next_date < final_date:
chunks.append(next_date)
next_date += step
chunks.append(final_date)
chunks
# Wrap it in a function
def generate_chunks(start_date, final_date, step):
chunks = [start_date]
next_date = start_date + step
while next_date < final_date:
chunks.append(next_date)
next_date += step
chunks.append(final_date)
return chunks
for ii in range(1, len(chunks)):
print(chunks[ii-1].strftime('%Y-%m-%dT%H:%M:%SUTC'),
" - ",
chunks[ii].strftime('%Y-%m-%dT%H:%M:%SUTC'))
import time
start_date = datetime.datetime(2012, 1, 4, 0, 0, 0)
final_date = datetime.datetime(2016, 12, 31, 23, 59, 59)
step = datetime.timedelta(days=30)
chunks = generate_chunks(start_date, final_date, step)
raw_data = []
station = '1111X'
for ii in range(1, len(chunks)):
print()
print(chunks[ii-1].strftime('%Y-%m-%dT%H:%M:%SUTC'),
" - ",
chunks[ii].strftime('%Y-%m-%dT%H:%M:%SUTC'))
url = ("https://opendata.aemet.es/opendata/api/valores/climatologicos/diarios/datos/"
"fechaini/{start}/fechafin/{end}/estacion/{station}".format(
start=chunks[ii-1].strftime('%Y-%m-%dT%H:%M:%SUTC'),
end=chunks[ii].strftime('%Y-%m-%dT%H:%M:%SUTC'),
station=station)
)
iterate = True
while iterate:
r = requests.get(url, params=querystring, verify=False)
        # If the request is rate-limited, retry it
iterate = (r.status_code == requests.codes.too_many_requests)
print(r.json())
    # Check whether the request succeeded
if r.status_code == requests.codes.ok:
        # Request the actual data
data_url = r.json()['datos']
r_data = requests.get(data_url, params=querystring, verify=False)
        # API INCONSISTENCY:
        # When no data is found in the selected range, the API still returns
        # status code 200 (all OK) and puts the error in the JSON body;
        # when data is found, there is no 'estado' attribute.
try:
estado = r_data.json()['estado']
except:
estado = 200
        # If everything went well, store the data
if estado == requests.codes.ok:
# print(r_data.json())
raw_data.extend(r_data.json())
else:
print(r_data.json()['descripcion'])
else:
print(r.json()['descripcion'])
time.sleep(60/45)
data = parse_data(raw_data)
data[:2]
Explanation: Generating the list of dates we will use for the requests
End of explanation
import pandas as pd
pd.DataFrame(data[100], index=[0])
dfs = []
for ii in range(len(data)):
dfs.append(pd.DataFrame(data[ii], index=[ii]))
df = pd.concat(dfs)
df
df.to_csv('../data/weather_data.csv', index=None)
data_df = pd.read_csv("../data/weather_data.csv", parse_dates=['fecha'])
data_df.head()
%matplotlib inline
ax = data_df.plot(x='fecha', y=['tmax', 'tmed', 'tmin'], figsize=(12, 6))
Explanation: Combining the data into a DataFrame
Saving and reading CSVs
End of explanation
from bokeh.io import output_notebook, show
from bokeh.charts import Line
output_notebook()
show(Line(data_df[['presMin', 'presMax']]))
Explanation: Plotting the data with Bokeh
End of explanation |
15,553 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Display Exercise 1
Imports
Put any imports needed to display rich output in the following cell
Step1: Basic rich display
Find a Physics related image on the internet and display it in this notebook using the Image object.
Load it using the url argument to Image (don't upload the image to this server).
Make sure to set the embed flag so the image is embedded in the notebook data.
Set the width and height to 600px.
Step2: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import display
from IPython.display import (
display_pretty, display_html, display_jpeg,
display_png, display_json, display_latex, display_svg
)
from IPython.display import Image
assert True # leave this to grade the import statements
Explanation: Display Exercise 1
Imports
Put any imports needed to display rich output in the following cell:
End of explanation
Image(url='http://i.imgur.com/h3YTC.jpg',embed=True,width=600,height=600)
assert True # leave this to grade the image display
Explanation: Basic rich display
Find a Physics related image on the internet and display it in this notebook using the Image object.
Load it using the url argument to Image (don't upload the image to this server).
Make sure to set the embed flag so the image is embedded in the notebook data.
Set the width and height to 600px.
End of explanation
%%html
<table>
<tr>
<th>Name</th>
<th>Symbol</th>
<th>Antiparticle</th>
<th>Charge(e)</th>
<th>Mass (MeV/c^2)</th>
<tr>
<td>up</td>
<td>u</td>
<td>$\bar{u}$</td>
<td>+2/3</td>
<td>1.5-3.3</td>
<tr>
<td>down</td>
<td>d</td>
<td>$\bar{d}$</td>
<td>-1/3</td>
<td>3.5-6.0</td>
<tr>
<td>charm</td>
<td>c</td>
<td>$\bar{c}$</td>
<td>+2/3</td>
<td>1,160-1,340</td>
<tr>
<td>strange</td>
<td>s</td>
<td>$\bar{s}$</td>
<td>-1/3</td>
<td>70-130</td>
<tr>
<td>top</td>
<td>t</td>
<td>$\bar{t}$</td>
<td>+2/3</td>
<td>169,100-173,300</td>
<tr>
<td>bottom</td>
<td>b</td>
<td>$\bar{b}$</td>
<td>-1/3</td>
<td>4,130-4,370</td>
</table>
assert True # leave this here to grade the quark table
Explanation: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
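An equally valid approach for this exercise is to build the HTML string programmatically and hand it to the HTML display object; a minimal sketch with a single illustrative row (values as in the table above):
from IPython.display import HTML

header = ["Name", "Symbol", "Antiparticle", "Charge(e)", "Mass (MeV/c^2)"]
rows = [["up", "u", r"$\bar{u}$", "+2/3", "1.5-3.3"]]
table = "<table><tr>" + "".join("<th>{}</th>".format(h) for h in header) + "</tr>"
for row in rows:
    table += "<tr>" + "".join("<td>{}</td>".format(c) for c in row) + "</tr>"
table += "</table>"
HTML(table)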
End of explanation |
15,554 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$$
\mathcal{L}(r, a) = \log \left[ \mathcal{N}(y_r; W_r a, q) \prod_{i\neq r} \mathcal{N}(y_i; W_i a, s) \right]
$$
$$
\log\mathcal{N}(y_r; W_r a, q) = -\frac{1}{2}\log 2\pi q - \frac{1}{2} \frac{1}{q} (y_r - W_r a)^2
$$
$$
\log\mathcal{N}(y_i; W_i a, s) = -\frac{1}{2}\log 2\pi s - \frac{1}{2} \frac{1}{s} (y_i - W_i a)^2
$$
$$
\mathcal{L}(r, a) = -\frac{1}{2}\log 2\pi q - \frac{1}{2} \frac{1}{q} (y_r - W_r a)^2 + \sum_{i\neq r} -\frac{1}{2}\log 2\pi s - \frac{1}{2} \frac{1}{s} (y_i - W_i a)^2
$$
$$
\mathcal{L}(r, a) = -\frac{1}{2}\log 2\pi q - \sum_{i\neq r} \frac{1}{2}\log 2\pi s - \frac{1}{2} \frac{1}{q} (y_r - W_r a)^2 - \frac{1}{2} \frac{1}{s} \sum_{i\neq r} (y_i - W_i a)^2
$$
$$
D_r = \operatorname{diag}(s, s, \dots, q, \dots, s)^{-1}
$$
$$
\mathcal{L}(r, a) = -\frac{1}{2}\log 2\pi q - \frac{N-1}{2}\log 2\pi s - \frac{1}{2} (y - Wa)^\top D_r (y - Wa)
$$
$$
\mathcal{L}(r, a) =^+ - \frac{1}{2} (y - Wa)^\top D_r (y - Wa)
$$
\begin{eqnarray}
\mathcal{L}(r, a) =^+ - \frac{1}{2} y^\top D_r y + y^\top D_r W a - \frac{1}{2} a^\top W^\top D_r W a
\end{eqnarray}
\begin{eqnarray}
\frac{\partial}{\partial a}\mathcal{L}(r, a) & = & W^\top D_r y - W^\top D_r W a = 0 \\
W^\top D_r y & = & W^\top D_r W a \\
(W^\top D_r W)^{-1} W^\top D_r y & = & a_r^*
\end{eqnarray}
To use standard Least Square solver, we substitute
\begin{eqnarray}
V_r^\top \equiv W^\top D_r^{1/2} \\
V_r \equiv D_r^{1/2} W
\end{eqnarray}
\begin{eqnarray}
(V_r^\top V_r)^{-1} V_r^\top D_r^{1/2} y & = & a_r^*
\end{eqnarray}
In Matlab/Octave this is least square with
\begin{eqnarray}
a_r^* = V_r \backslash D_r^{1/2} y
\end{eqnarray}
Step1: Todo
Step2: Suppose we are given a data set $(y_i, x_i)$ for $i=1\dots N$
Assume we have a basis regression model (for example a polynomial basis where $f_k(x) = x^k$) and wish to fit
$y_i = \sum_k A_{ik} w_k + \epsilon_i$
for all $i = 1 \dots N$ where
$
A_{ik} = f_k(x_i)
$
Assume the prior
$
w \sim \mathcal{N}(w; 0, P)
$
Derive an expression for $p(y_{\text{new}}| x_{\text{new}}, y_{1:N}, x_{1:N})$ and implement a program that plots the mean and corresponding error bars by choosing $x_{\text{new}}$ on a regular grid. | Python Code:
import numpy as np
import matplotlib.pylab as plt
import scipy.linalg as la
# Assumes N, s, q, the design matrix W, and the observations y are already defined in the session.
LL = np.zeros(N)
for rr in range(N):
ss = s*np.ones(N)
ss[rr] = q
D_r = np.diag(1/ss)
V_r = np.dot(np.sqrt(D_r), W)
b = y/np.sqrt(ss)
a_r,re,ra, cond = la.lstsq(V_r, b)
e = (y-np.dot(W, a_r))/np.sqrt(ss)
LL[rr] = -0.5*np.dot(e.T, e)
print(LL[rr])
#plt.plot(x, y, 'o')
#plt.plot(x, np.dot(W, a_r),'-')
#plt.plot(e)
plt.plot(LL)
plt.show()
Explanation: $$
\mathcal{L}(r, a) = \log \left[ \mathcal{N}(y_r; W_r a, q) \prod_{i\neq r} \mathcal{N}(y_i; W_i a, s) \right]
$$
$$
\log\mathcal{N}(y_r; W_r a, q) = -\frac{1}{2}\log 2\pi q - \frac{1}{2} \frac{1}{q} (y_r - W_r a)^2
$$
$$
\log\mathcal{N}(y_i; W_i a, s) = -\frac{1}{2}\log 2\pi s - \frac{1}{2} \frac{1}{s} (y_i - W_i a)^2
$$
$$
\mathcal{L}(r, a) = -\frac{1}{2}\log 2\pi q - \frac{1}{2} \frac{1}{q} (y_r - W_r a)^2 + \sum_{i\neq r} -\frac{1}{2}\log 2\pi s - \frac{1}{2} \frac{1}{s} (y_i - W_i a)^2
$$
$$
\mathcal{L}(r, a) = -\frac{1}{2}\log 2\pi q - \sum_{i\neq r} \frac{1}{2}\log 2\pi s - \frac{1}{2} \frac{1}{q} (y_r - W_r a)^2 - \frac{1}{2} \frac{1}{s} \sum_{i\neq r} (y_i - W_i a)^2
$$
$$
D_r = \operatorname{diag}(s, s, \dots, q, \dots, s)^{-1}
$$
$$
\mathcal{L}(r, a) = -\frac{1}{2}\log 2\pi q - \frac{N-1}{2}\log 2\pi s - \frac{1}{2} (y - Wa)^\top D_r (y - Wa)
$$
$$
\mathcal{L}(r, a) =^+ - \frac{1}{2} (y - Wa)^\top D_r (y - Wa)
$$
\begin{eqnarray}
\mathcal{L}(r, a) =^+ - \frac{1}{2} y^\top D_r y + y^\top D_r W a - \frac{1}{2} a^\top W^\top D_r W a
\end{eqnarray}
\begin{eqnarray}
\frac{\partial}{\partial a}\mathcal{L}(r, a) & = & W^\top D_r y - W^\top D_r W a = 0 \\
W^\top D_r y & = & W^\top D_r W a \\
(W^\top D_r W)^{-1} W^\top D_r y & = & a_r^*
\end{eqnarray}
To use standard Least Square solver, we substitute
\begin{eqnarray}
V_r^\top \equiv W^\top D_r^{1/2} \\
V_r \equiv D_r^{1/2} W
\end{eqnarray}
\begin{eqnarray}
(V_r^\top V_r)^{-1} V_r^\top D_r^{1/2} y & = & a_r^*
\end{eqnarray}
In Matlab/Octave this is least square with
\begin{eqnarray}
a_r^* = V_r \backslash D_r^{1/2} y
\end{eqnarray}
End of explanation
import numpy as np
import scipy as sc
import scipy.linalg as la
def cond_Gauss(Sigma, mu, idx1, idx2, x2):
Sigma11 = Sigma[idx1, idx1].reshape((len(idx1),len(idx1)))
Sigma12 = Sigma[idx1, idx2].reshape((len(idx1),len(idx2)))
Sigma22 = Sigma[idx2, idx2].reshape((len(idx2),len(idx2)))
# print(Sigma11)
# print(Sigma12)
# print(Sigma22)
mu1 = mu[idx1]
mu2 = mu[idx2]
G = np.dot(Sigma12, la.inv(Sigma22))
cond_Sig_1 = Sigma11 - np.dot(G, Sigma12.T)
cond_mu_1 = mu1 + np.dot(G, (x2-mu2))
return cond_mu_1, cond_Sig_1
mu = np.array([0,0])
#P = np.array([2])
#A = np.array([1])
idx1 = [0]
idx2 = [1]
x2 = 5
P = np.array(3).reshape((len(idx1), len(idx1)))
A = np.array(-1).reshape((len(idx2), len(idx1)))
rho = np.array(0)
#Sigma = np.array([[P, A*P],[P*A, A*P*A + rho ]])
I = np.eye(len(idx2))
Sigma = np.concatenate((np.concatenate((P,np.dot(P, A.T)),axis=1), np.concatenate((np.dot(A, P),np.dot(np.dot(A, P), A.T ) + rho*I ),axis=1)))
print(Sigma)
#print(mu)
cond_mu_1, cond_Sig_1 = cond_Gauss(Sigma, mu, idx1, idx2, x2)
print('E[x_1|x_2 = {}] = '.format(x2) , cond_mu_1)
print(cond_Sig_1)
Explanation: Todo: Evaluate the likelihood for all polynomial orders $K=1 \dots 8$
$p(x_1, x_2) = \mathcal{N}(\mu, \Sigma)$
$\mu = \left(\begin{array}{c} \mu_{1} \\ \mu_{2} \end{array} \right)$
$\Sigma = \left(\begin{array}{cc} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{12}^\top & \Sigma_{22} \end{array} \right)$
$
p(x_1 | x_2) = \mathcal{N}(\mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (x_2 -\mu_2), \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1}\Sigma_{12}^\top)
$
End of explanation
# Use this code to generate a dataset
N = 30
K = 4
s = 0.1
q = 10*s
x = 2*np.random.randn(N)
e = np.sqrt(s) * np.random.randn(N)
# Create the Vandermonde matrix
A = x.reshape((N,1))**np.arange(K).reshape(1,K)
w = np.array([0,-1,0.5,0])
y = np.dot(A, w) + e
plt.plot(x, y, 'o')
#plt.plot(e)
plt.show()
# Sig = [P, A.T; A A*A.T+rho*I]
N1 = 3
N2 = 7
P = np.random.randn(N1,N1)
A = np.random.randn(N2,N1)
#Sig11 = np.mat(P)
#Sig12 = np.mat(A.T)
#Sig21 = np.mat(A)
#Sig22 = Sig21*Sig12
Sig11 = np.mat(P)
Sig12 = np.mat(A.T)
Sig21 = np.mat(A)
Sig22 = Sig21*Sig12
print(Sig11.shape)
print(Sig12.shape)
print(Sig21.shape)
print(Sig22.shape)
W = np.bmat([[Sig11, Sig12],[Sig21, Sig22]])
Sig22.shape
3500*1.18*12
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
x = np.array([3.7, 2.3, 6.9, 7.5])
N = len(x)
lam = np.arange(0.05,10,0.01)
ll = -N*np.log(lam) - np.sum(x)/lam
plt.plot(lam, np.exp(ll))
plt.plot(np.mean(x), 0, 'ok')
plt.show()
xx = np.arange(0, 10, 0.01)
lam = 1000
p = 1/lam*np.exp(-xx/lam)
plt.plot(xx, p)
plt.plot(x, np.zeros((N)), 'ok')
plt.ylim((0,1))
plt.show()
1-(5./6.)**4
1-18/37
import numpy as np
N = 7
A = np.diag(np.ones(7))
ep = 0.5
a = 1
idx = [1, 2, 3, 4, 5, 6, 0]
A = ep*A + (1-ep)*A[:,idx]
C = np.array([[a, 1-a, 1-a, a, a, 1-a, 1-a],[1-a, a, a, 1-a, 1-a, a, a]])
p = np.ones((1,N))/N
print(A)
y = [1, 1, 0, 0, 0]
print(p)
p = C[y[0] , :]*p
print(p/np.sum(p, axis=1))
Explanation: Suppose we are given a data set $(y_i, x_i)$ for $i=1\dots N$
Assume we have a basis regression model (for example a polynomial basis where $f_k(x) = x^k$) and wish to fit
$y_i = \sum_k A_{ik} w_k + \epsilon_i$
for all $i = 1 \dots N$ where
$
A_{ik} = f_k(x_i)
$
Assume the prior
$
w \sim \mathcal{N}(w; 0, P)
$
Derive an expression for $p(y_{\text{new}}| x_{\text{new}}, y_{1:N}, x_{1:N})$ and implement a program that plots the mean and corresponding error bars (from the standard deviation of $p(y_{\text{new}}| x_{\text{new}}, y_{1:N}, x_{1:N})$) by choosing $x_{\text{new}}$ on a regular grid.
Note that $y_{\text{new}} = \sum f_k(x_{\text{new}}) w_k + \epsilon$
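A minimal sketch of that predictive computation under the stated Gaussian prior (an illustration only, not a reference solution; it assumes the A, x, y, s, K defined in the data-generation cell above, and the prior covariance P_prior is an assumed identity-scaled matrix):
import numpy as np
import matplotlib.pylab as plt

P_prior = 100.0 * np.eye(K)                      # assumed prior covariance of w
Sigma_w = np.linalg.inv(np.dot(A.T, A) / s + np.linalg.inv(P_prior))
mu_w = np.dot(Sigma_w, np.dot(A.T, y)) / s       # posterior mean of w

x_new = np.linspace(x.min(), x.max(), 200)
A_new = x_new.reshape(-1, 1) ** np.arange(K).reshape(1, -1)
mean_new = np.dot(A_new, mu_w)
var_new = np.sum(np.dot(A_new, Sigma_w) * A_new, axis=1) + s   # predictive variance
plt.errorbar(x_new, mean_new, yerr=np.sqrt(var_new), alpha=0.3)
plt.plot(x, y, 'o')
plt.show()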
End of explanation |
15,555 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train Model on Distributed Cluster
IMPORTANT
Step1: Start Server "Task 0" (localhost
Step2: Start Server "Task 1" (localhost
Step3: Define Compute-Heavy TensorFlow Graph
Step4: Define Shape
Step5: Assign Devices Manually
All CPU Devices
Note the execution time.
Step6: CPU and GPU
Note the execution time.
Step7: All GPU Devices
Note the execution time.
Step8: Auto-assign Device by TensorFlow (Round-Robin by Default)
Note the execution time. | Python Code:
import tensorflow as tf
cluster = tf.train.ClusterSpec({"local": ["localhost:2222", "localhost:2223"]})
Explanation: Train Model on Distributed Cluster
IMPORTANT: You Must STOP All Kernels and Terminal Session
The GPU is wedged at this point. We need to set it free!!
Define ClusterSpec
End of explanation
server0 = tf.train.Server(cluster, job_name="local", task_index=0)
print(server0)
Explanation: Start Server "Task 0" (localhost:2222)
Note: If you see UnknownError: Could not start gRPC server, then you have already started the server. Please ignore this.
End of explanation
server1 = tf.train.Server(cluster, job_name="local", task_index=1)
print(server1)
Explanation: Start Server "Task 1" (localhost:2223)
Note: If you see UnknownError: Could not start gRPC server, then you have already started the server. Please ignore this.
End of explanation
import tensorflow as tf
n = 2
c1 = tf.Variable([])
c2 = tf.Variable([])
def matpow(M, n):
if n < 1:
return M
else:
return tf.matmul(M, matpow(M, n-1))
Explanation: Define Compute-Heavy TensorFlow Graph
End of explanation
shape=[2500, 2500]
Explanation: Define Shape
End of explanation
import datetime
with tf.device("/job:local/task:0/cpu:0"):
A = tf.random_normal(shape=shape)
c1 = matpow(A,n)
with tf.device("/job:local/task:1/cpu:0"):
B = tf.random_normal(shape=shape)
c2 = matpow(B,n)
with tf.Session("grpc://127.0.0.1:2222") as sess:
sum = c1 + c2
start_time = datetime.datetime.now()
print(sess.run(sum))
print("Execution time: "
+ str(datetime.datetime.now() - start_time))
Explanation: Assign Devices Manually
All CPU Devices
Note the execution time.
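If you want to verify where each op actually ran, a TensorFlow 1.x session config can log placement decisions; this is a small optional addition that reuses the graph defined above:
config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)
with tf.Session("grpc://127.0.0.1:2222", config=config) as sess:
    print(sess.run(sum))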
End of explanation
with tf.device("/job:local/task:0/gpu:0"):
A = tf.random_normal(shape=shape)
c1 = matpow(A,n)
with tf.device("/job:local/task:1/cpu:0"):
B = tf.random_normal(shape=shape)
c2 = matpow(B,n)
with tf.Session("grpc://127.0.0.1:2222") as sess:
sum = c1 + c2
start_time = datetime.datetime.now()
print(sess.run(sum))
print("Execution time: "
+ str(datetime.datetime.now() - start_time))
Explanation: CPU and GPU
Note the execution time.
End of explanation
with tf.device("/job:local/task:0/gpu:0"):
A = tf.random_normal(shape=shape)
c1 = matpow(A,n)
with tf.device("/job:local/task:1/gpu:0"):
B = tf.random_normal(shape=shape)
c2 = matpow(B,n)
with tf.Session("grpc://127.0.0.1:2222") as sess:
sum = c1 + c2
start_time = datetime.datetime.now()
print(sess.run(sum))
print("Execution time: "
+ str(datetime.datetime.now() - start_time))
Explanation: All GPU Devices
Note the execution time.
End of explanation
with tf.device(tf.train.replica_device_setter(worker_device="/job:local/task:0",
                                              cluster=cluster)):
    A = tf.random_normal(shape=shape)
    c1 = matpow(A,n)
with tf.device(tf.train.replica_device_setter(worker_device="/job:local/task:1",
                                              cluster=cluster)):
B = tf.random_normal(shape=shape)
c2 = matpow(B,n)
with tf.Session("grpc://127.0.0.1:2222") as sess:
sum = c1 + c2
start_time = datetime.datetime.now()
print(sess.run(sum))
print("Multi node computation time: "
+ str(datetime.datetime.now() - start_time))
Explanation: Auto-assign Device by TensorFlow (Round-Robin by Default)
Note the execution time.
End of explanation |
15,556 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LDA on vertebrates
Notes on the data
In this example the tree is constrained
In this example we have to extract position, transition, and branch.
The total positions are broken into N 'splits'
Each split contains 30,000 files (except the last file that has less)
So that means that there are 30,000 * N total positions
Browsing the data
Step1: a sentence of words is represented as the transitions for a given position
Step2: Simple test run with lda package
Step3: Recall that the Dirichlet Process (DP) (Ferguson, 1973) is essentially a distribution over distributions, where each draw from a DP is itself a distribution and importantly for clustering applications it serves as a natural prior that lets the number of clusters grow as the data grows. The DP has a base distribution parameter $\beta$ and a strength or concentration parameter $\alpha$.
$\alpha$ is a hyperprior for the DP over per-document topic distributions
$\beta$ is the hyperprior for the DP over per-topic word distributions
$\theta_{m}$ is the topic distribution for document $m$
$\phi_{k}$ is the word distribution for topic $k$
$z_{m,n}$ is the topic for the $n$th word in document $m$
$w_{m,n}$ is the specific word
The generative story for phylogenetics
We are still modeling topics. However, documents become sites and words become transitions. Transitions may be defined in nucleotide, amino acid or codon space. Perhaps more logically, though, a document would be all of the sites for a given gene.
$\alpha$ is a hyperprior for the DP over per-site topic distributions
$\beta$ is the hyperprior for the DP over per-topic transition distributions
$\theta_{m}$ is the topic distribution for gene $m$
$\phi_{k}$ is the nucleotide transition distribution for topic $k$
$z_{m,n}$ is the topic for the $n$th nucleotide transition in gene $m$
$w_{m,n}$ is the specific transition
The generative process
Choose $\theta_{m} \sim \textrm{Dir}(\alpha)$, where $m \in {1,...M}$ and $\textrm{Dir}(\alpha)$ is the Dirichlet distribution for $\alpha$
Choose $\phi_{k} \sim \textrm{Dir}(\beta)$, where $k \in {1,...K}$
For each of the transition positions ($m$,$n$), where $n \in {1,...N}$, and $m \in {1,...M}$
Choose a topic $z_{m,n} \sim \textrm{Multinomial}(\theta_{m})$
Choose a transition $w_{m,n} \sim \textrm{Multinomial}(\phi_{z_{m,n}})$
$\phi$ is a $K*V$ Markov matrix each row of which denotes the transition distribution of a topic.
The type of data to expect
Here I am borrowing from the package LDA, which uses a collapsed version of Gibbs sampling.
In this example.
Positions are documents
First 1000 positions
We consider in place transitions
20 topics
1500 MCMC iterations
7 words in a topic (transitions)
topics
Topic 0 | Python Code:
import os
import numpy as np
from vertebratesLib import *
split = "SPLIT1"
summaryTree,summarySpecies,splitPositions = get_split_data(split)
print summaryTree.shape
Explanation: LDA on vertebrates
Notes on the data
In this example the tree is constrained
In this example we have to extract position, transition, and branch.
The total positions are broken into N 'splits'
Each split contains 30,000 files (except the last file that has less)
So that means that there are 30,000 * N total positions
Browsing the data
End of explanation
def get_sentence(position,splitPositions,summary,ignore=False):
splitIndex = np.where(splitPositions==position)[0]
nonZero = np.where(summary[splitIndex,:] != 0)[1]
sentence = []
for nz in nonZero:
if ignore and TRANSITIONS[nz].count(TRANSITIONS[nz][0]) == 2:
continue
count = int(summary[splitIndex,nz][0])
sentence.extend([TRANSITIONS[nz]] * count)
return sentence
position = '8500'
sentence1 = get_sentence(position,splitPositions,summaryTree,ignore=False)
sentence2 = get_sentence(position,splitPositions,summaryTree,ignore=True)
print("with same AA transition")
print(sentence1)
print("without same AA transition")
print(sentence2)
Explanation: a sentence of words is represented as the transitions for a given position
End of explanation
import lda
import numpy as np
## the data matrix is positions (documents) by vocabulary (transitions)
vocab = TRANSITIONS
# Minimal fit, assuming summaryTree holds the per-position transition counts;
# "first 1000 positions", 20 topics and 1500 iterations follow the settings quoted in the text.
X = summaryTree[:1000, :].astype(np.int64)
model = lda.LDA(n_topics=20, n_iter=1500, random_state=1)
model.fit(X)
Explanation: Simple test run with lda package
End of explanation
from IPython.display import Image
dataDir = None
for ddir in [os.path.join("..","data","herve-vertebrates"),\
os.path.join("/","media","ganda","mojo","phylogenetic-models","herve-vertebrates")]:
if os.path.isdir(ddir):
dataDir = ddir
split = "SPLIT1"
position = "0"
treeList = get_trees(split,position,dataDir)
countMatrix = np.zeros((len(treeList),len(TRANSITIONS)),)
t = 0
for t,pbTree in enumerate(treeList):
fixedTree,treeSummary = fix_tree(pbTree)
tlist = []
for item in treeSummary.itervalues():
tlist.extend(item['pairs'])
counts = transitions_to_counts(tlist)
countMatrix[t,:] = counts
figName1 = os.path.join("figures","lda-bplot-check.png")
profile_box_plot(countMatrix,figName1,figTitle='position - %s'%position)
Image(filename=figName1)
Explanation: Recall that the Dirichlet Process (DP) (Ferguson, 1973) is essentially a distribution over distributions, where each draw from a DP is itself a distribution and importantly for clustering applications it serves as a natural prior that lets the number of clusters grow as the data grows. The DP has a base distribution parameter $\beta$ and a strength or concentration parameter $\alpha$.
$\alpha$ is a hyperprior for the DP over per-document topic distributions
$\beta$ is the hyperprior for the DP over per-topic word distributions
$\theta_{m}$ is the topic distribution for document $m$
$\phi_{k}$ is the word distribution for topic $k$
$z_{m,n}$ is the topic for the $n$th word in document $m$
$w_{m,n}$ is the specific word
The generative story for phylogenetics
We are still modeling topics. However, documents become sites and words become transitions. Transitions may be defined in nucleotide, amino acid or codon space. Perhaps more logically, though, a document would be all of the sites for a given gene.
$\alpha$ is a hyperprior for the DP over per-site topic distributions
$\beta$ is the hyperprior for the DP over per-topic transition distributions
$\theta_{m}$ is the topic distribution for gene $m$
$\phi_{k}$ is the nucleotide transition distribution for topic $k$
$z_{m,n}$ is the topic for the $n$th nucleotide transition in gene $m$
$w_{m,n}$ is the specific transition
The generative process
Choose $\theta_{m} \sim \textrm{Dir}(\alpha)$, where $m \in {1,...M}$ and $\textrm{Dir}(\alpha)$ is the Dirichlet distribution for $\alpha$
Choose $\phi_{k} \sim \textrm{Dir}(\beta)$, where $k \in {1,...K}$
For each of the transition positions ($m$,$n$), where $n \in {1,...N}$, and $m \in {1,...M}$
Choose a topic $z_{m,n} \sim \textrm{Multinomial}(\theta_{m})$
Choose a transition $w_{m,n} \sim \textrm{Multinomial}(\phi_{z_{m,n}})$
$\phi$ is a $K*V$ Markov matrix each row of which denotes the transition distribution of a topic.
The type of data to expect
Here I am borrowing from the package LDA, which uses a collapsed version of Gibbs sampling.
In this example.
Positions are documents
First 1000 positions
We consider in place transitions
20 topics
1500 MCMC iterations
7 words in a topic (transitions)
topics
Topic 0: EE ED EQ EK EG DE EA
Topic 1: YY YF FY FF YH YC YS
Topic 2: RR KK RK QQ RQ HH KR
Topic 3: AA AS AT AV SS VA SA
Topic 4: II MM IV IL IM MI ML
Topic 5: SS ST SA SN SP TT SG
Topic 6: WW WL YW WS SW WV WG
Topic 7: KK KR RR KQ RK KE KN
Topic 8: HH HY HQ HR HN QH YH
Topic 9: CC CS SC CY SS CF LC
Topic 10: VV VI IV VA VL II VM
Topic 11: TT TA TS TV TI ST TM
Topic 12: DD DE DN ED EE DG ND
Topic 13: QQ QK QE QH QR QL QP
Topic 14: FF FL LF FY FI FV FC
Topic 15: PP PS SP PA PL PT PQ
Topic 16: NN NS SS ND NK SN NT
Topic 17: LL LI LV LM LF MM LQ
Topic 18: RR RK RQ RT RW RY RV
Topic 19: GG GS GA GE GN SG GD
top topics
position - 0 (top topic: 3)
position - 1 (top topic: 19)
position - 10 (top topic: 3)
position - 100 (top topic: 18)
position - 1000 (top topic: 7)
position - 10000 (top topic: 7)
position - 10001 (top topic: 7)
position - 10002 (top topic: 19)
position - 10003 (top topic: 7)
position - 10004 (top topic: 5)
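A sketch of how a listing like the one above can be produced from a fitted lda model (it assumes the model and vocab defined earlier and mirrors the lda package's documented pattern; the number of printed positions is illustrative):
import numpy as np
n_top_words = 7
for k, topic_dist in enumerate(model.topic_word_):
    top_words = np.array(vocab)[np.argsort(topic_dist)][:-(n_top_words + 1):-1]
    print('Topic {}: {}'.format(k, ' '.join(top_words)))
doc_topic = model.doc_topic_
for i in range(10):
    print("position - {} (top topic: {})".format(splitPositions[i], doc_topic[i].argmax()))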
End of explanation |
15,557 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
N2 - Eurocode 8, CEN (2005)
This simplified nonlinear procedure for the estimation of the seismic response of structures uses capacity curves and inelastic spectra. This method has been developed to be used in combination with code-based response spectra, but it is also possible to employ it for the assessment of structural response subject to ground motion records. It also has the distinct aspect of assuming an elastic-perfectly plastic force-displacement relationship in the construction of the bilinear curve. This method is part of the recommendations of Eurocode 8 (CEN, 2005) for the seismic design of new structures, and the capacity curves are usually simplified by an elasto-perfectly plastic relationship.
Note: To run the code in a cell, select it and press SHIFT+ENTER.
Step1: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
Step2: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note: Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
Step3: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are capacity curve dependent, spectral displacement, and interstorey drift.
Step4: Obtain the damage probability matrix
The parameter damping_ratio needs to be defined in the cell below in order to calculate the damage probability matrix.
Step5: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above
Step6: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above
Step7: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above
Step8: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions
Step9: Plot vulnerability function
Step10: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the vulnerability function obtained above | Python Code:
from rmtk.vulnerability.derivation_fragility.hybrid_methods.N2 import N2Method
from rmtk.vulnerability.common import utils
%matplotlib inline
Explanation: N2 - Eurocode 8, CEN (2005)
This simplified nonlinear procedure for the estimation of the seismic response of structures uses capacity curves and inelastic spectra. This method has been developed to be used in combination with code-based response spectra, but it is also possible to employ it for the assessment of structural response subject to ground motion records. It also has the distinct aspect of assuming an elastic-perfectly plastic force-displacement relationship in the construction of the bilinear curve. This method is part of the recommendations of Eurocode 8 (CEN, 2005) for the seismic design of new structures, and the capacity curves are usually simplified by an elasto-perfectly plastic relationship.
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.
End of explanation
capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_Sa-Sd.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
utils.plot_capacity_curves(capacity_curves)
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
End of explanation
gmrs_folder = "../../../../../../rmtk_data/accelerograms"
minT, maxT = 0.1, 2.0
gmrs = utils.read_gmrs(gmrs_folder)
#utils.plot_response_spectra(gmrs, minT, maxT)
Explanation: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note: Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
damage_model_file = "../../../../../../rmtk_data/damage_model.csv"
damage_model = utils.read_damage_model(damage_model_file)
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are: capacity curve dependent, spectral displacement and interstorey drift. If the damage model type is interstorey drift the user can provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements, otherwise a linear relationship is assumed.
End of explanation
damping_ratio = 0.05
PDM, Sds = N2Method.calculate_fragility(capacity_curves, gmrs, damage_model, damping_ratio)
Explanation: Obtain the damage probability matrix
The parameter damping_ratio needs to be defined in the cell below in order to calculate the damage probability matrix.
End of explanation
IMT = "Sa"
period = 0.3
regression_method = "least squares"
fragility_model = utils.calculate_mean_fragility(gmrs, PDM, period, damping_ratio,
IMT, damage_model, regression_method)
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sd" and "Sa".
2. period: This parameter defines the time period of the fundamental mode of vibration of the structure.
3. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
End of explanation
minIML, maxIML = 0.01, 3.00
utils.plot_fragility_model(fragility_model, minIML, maxIML)
# utils.plot_fragility_stats(fragility_statistics,minIML,maxIML)
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
taxonomy = "RC"
minIML, maxIML = 0.01, 3.00
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
cons_model_file = "../../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00,
2.20, 2.40, 2.60, 2.80, 3.00, 3.20, 3.40, 3.60, 3.80, 4.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
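Conceptually, the fragility-to-consequence combination at each intensity measure level is a probability-weighted sum of damage ratios; a simplified standalone sketch of the idea (not the RMTK implementation):
import numpy as np

def mean_loss_ratio(damage_state_probs, damage_ratios):
    # damage_state_probs: fractions of buildings in each damage state at one IML
    # damage_ratios: consequence-model loss ratio associated with each damage state
    return float(np.dot(damage_state_probs, damage_ratios))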
End of explanation
utils.plot_vulnerability_model(vulnerability_model)
Explanation: Plot vulnerability function
End of explanation
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the vulnerability function obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation |
15,558 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
So far, you've worked with many types of data, including numeric types (integers, floating point values), strings, and the DATETIME type. In this tutorial, you'll learn how to query nested and repeated data. These are the most complex data types that you can find in BigQuery datasets!
Nested data
Consider a hypothetical dataset containing information about pets and their toys. We could organize this information in two different tables (a pets table and a toys table). The toys table could contain a "Pet_ID" column that could be used to match each toy to the pet that owns it.
Another option in BigQuery is to organize all of the information in a single table, similar to the pets_and_toys table below.
In this case, all of the information from the toys table is collapsed into a single column (the "Toy" column in the pets_and_toys table). We refer to the "Toy" column in the pets_and_toys table as a nested column, and say that the "Name" and "Type" fields are nested inside of it.
Nested columns have type STRUCT (or type RECORD). This is reflected in the table schema below.
Recall that we refer to the structure of a table as its schema. If you need to review how to interpret table schema, feel free to check out this lesson from the Intro to SQL micro-course.
To query a column with nested data, we need to identify each field in the context of the column that contains it
Step1: For a description of each field, refer to this data dictionary.
The table has many nested fields, which you can verify by looking at either the data dictionary (hint: search for appearances of 'RECORD' on the page) or the table preview above.
Step3: We refer to the "browser" field (which is nested in the "device" column) and the "transactions" field (which is nested inside the "totals" column) as device.browser and totals.transactions in the query below
Step5: By storing the information in the "device" and "totals" columns as STRUCTs (as opposed to separate tables), we avoid expensive JOINs. This increases performance and keeps us from having to worry about JOIN keys (and which tables have the exact data we need).
Now we'll work with the "hits" column as an example of data that is both nested and repeated. Since | Python Code:
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "google_analytics_sample" dataset
dataset_ref = client.dataset("google_analytics_sample", project="bigquery-public-data")
# Construct a reference to the "ga_sessions_20170801" table
table_ref = dataset_ref.table("ga_sessions_20170801")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the table
client.list_rows(table, max_results=5).to_dataframe()
Explanation: Introduction
So far, you've worked with many types of data, including numeric types (integers, floating point values), strings, and the DATETIME type. In this tutorial, you'll learn how to query nested and repeated data. These are the most complex data types that you can find in BigQuery datasets!
Nested data
Consider a hypothetical dataset containing information about pets and their toys. We could organize this information in two different tables (a pets table and a toys table). The toys table could contain a "Pet_ID" column that could be used to match each toy to the pet that owns it.
Another option in BigQuery is to organize all of the information in a single table, similar to the pets_and_toys table below.
In this case, all of the information from the toys table is collapsed into a single column (the "Toy" column in the pets_and_toys table). We refer to the "Toy" column in the pets_and_toys table as a nested column, and say that the "Name" and "Type" fields are nested inside of it.
Nested columns have type STRUCT (or type RECORD). This is reflected in the table schema below.
Recall that we refer to the structure of a table as its schema. If you need to review how to interpret table schema, feel free to check out this lesson from the Intro to SQL micro-course.
To query a column with nested data, we need to identify each field in the context of the column that contains it:
- Toy.Name refers to the "Name" field in the "Toy" column, and
- Toy.Type refers to the "Type" field in the "Toy" column.
Otherwise, our usual rules remain the same - we need not change anything else about our queries.
Repeated data
Now consider the (more realistic!) case where each pet can have multiple toys. In this case, to collapse this information into a single table, we need to leverage a different datatype.
We say that the "Toys" column contains repeated data, because it permits more than one value for each row. This is reflected in the table schema below, where the mode of the "Toys" column appears as 'REPEATED'.
Each entry in a repeated field is an ARRAY, or an ordered list of (zero or more) values with the same datatype. For instance, the entry in the "Toys" column for Moon the Dog is [Frisbee, Bone, Rope], which is an ARRAY with three values.
When querying repeated data, we need to put the name of the column containing the repeated data inside an UNNEST() function.
This essentially flattens the repeated data (which is then appended to the right side of the table) so that we have one element on each row. For an illustration of this, check out the image below.
Nested and repeated data
Now, what if pets can have multiple toys, and we'd like to keep track of both the name and type of each toy? In this case, we can make the "Toys" column both nested and repeated.
In the more_pets_and_toys table above, "Name" and "Type" are both fields contained within the "Toys" STRUCT, and each entry in both "Toys.Name" and "Toys.Type" is an ARRAY.
Let's look at a sample query.
Since the "Toys" column is repeated, we flatten it with the UNNEST() function. And, since we give the flattened column an alias of t, we can refer to the "Name" and "Type" fields in the "Toys" column as t.Name and t.Type, respectively.
To reinforce what you've learned, we'll apply these ideas to a real dataset in the section below.
Example
We'll work with the Google Analytics Sample dataset. It contains information tracking the behavior of visitors to the Google Merchandise store, an e-commerce website that sells Google branded items.
We begin by printing the first few rows of the ga_sessions_20170801 table. (We have hidden the corresponding code. To take a peek, click on the "Code" button below.) This table tracks visits to the website on August 1, 2017.
End of explanation
print("SCHEMA field for the 'totals' column:\n")
print(table.schema[5])
print("\nSCHEMA field for the 'device' column:\n")
print(table.schema[7])
Explanation: For a description of each field, refer to this data dictionary.
The table has many nested fields, which you can verify by looking at either the data dictionary (hint: search for appearances of 'RECORD' on the page) or the table preview above.
In our first query against this table, we'll work with the "totals" and "device" columns.
End of explanation
# Query to count the number of transactions per browser
query = """
        SELECT device.browser AS device_browser,
            SUM(totals.transactions) as total_transactions
        FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801`
        GROUP BY device_browser
        ORDER BY total_transactions DESC
        """
# Run the query, and return a pandas DataFrame
result = client.query(query).result().to_dataframe()
result.head()
Explanation: We refer to the "browser" field (which is nested in the "device" column) and the "transactions" field (which is nested inside the "totals" column) as device.browser and totals.transactions in the query below:
End of explanation
# Query to determine most popular landing point on the website
query = """
        SELECT hits.page.pagePath as path,
            COUNT(hits.page.pagePath) as counts
        FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801`,
            UNNEST(hits) as hits
        WHERE hits.type="PAGE" and hits.hitNumber=1
        GROUP BY path
        ORDER BY counts DESC
        """
# Run the query, and return a pandas DataFrame
result = client.query(query).result().to_dataframe()
result.head()
Explanation: By storing the information in the "device" and "totals" columns as STRUCTs (as opposed to separate tables), we avoid expensive JOINs. This increases performance and keeps us from having to worry about JOIN keys (and which tables have the exact data we need).
Now we'll work with the "hits" column as an example of data that is both nested and repeated. Since:
- "hits" is a STRUCT (contains nested data) and is repeated,
- "hitNumber", "page", and "type" are all nested inside the "hits" column, and
- "pagePath" is nested inside the "page" field,
we can query these fields with the following syntax:
End of explanation |
15,559 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Causal Inference of Kang-Schafer simulation.
Step1: Correctly Specified Model
We run the simulation 1000 times under correctly specified logistic propensity score and linear outcome regression. For each simulation we estimate the average treatment effect on the treated (ATT) using empirical calibration.
Step2: The mean of the 1000 ATT estimates after weight correction is very close to the true zero ATT.
Step3: Misspecified Model
If the transformed covariates are observed in place of the true covariates, both the propensity score model and outcome regression model become misspecified. We run 1000 simulations and for each simulation estimate the ATT by balancing the transformed covariates. The causal estimate is no longer unbiased.
Step4: One reasonable strategy is to expand the set of balancing covariates and hope it will make the model less "misspecified". If we additionally balance the two-way interactions and the log transformation, the bias indeed reduces.
Step5: If the model was misspecified in the sense that more covariates are included than necessary, the causal estimate remains unbiased.
Step6: Benchmark Execution Time
The execution time is generally linear with respect to the sample size. With 1 million control units, it takes around 1 second to find the weights. | Python Code:
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import patsy
import seaborn as sns
import timeit
# install and import ec
!pip install -q git+https://github.com/google/empirical_calibration
import empirical_calibration as ec
sns.set_style('whitegrid')
%config InlineBackend.figure_format='retina'
Explanation: Causal Inference of Kang-Schafer simulation.
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/google/empirical_calibration/blob/master/notebooks/causal_inference_kang_schafer.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/google/empirical_calibration/blob/master/notebooks/causal_inference_kang_schafer.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Causal Inference of Kang-Schafer simulation.
Imports
Correctly Specified Model
Misspecified Model
Benchmark Execution Time
We illustrate empirical calibration to estimate the average treatment effect on the treated (ATT) on Kang-Schafer simulation under both correctly specified and misspecified models, and benchmark the execution time. For details of simulation setup, please refer to kang_schafer_population_mean.ipynb.
Imports
End of explanation
def estimate_att(formula):
simulation = ec.data.kang_schafer.Simulation(size=1000)
t = simulation.treatment
y = simulation.outcome
df = pd.DataFrame(
np.column_stack(
[simulation.covariates, simulation.transformed_covariates]))
df.columns = ["z1", "z2", "z3", "z4", "x1", "x2", "x3", "x4"]
x = patsy.dmatrix(formula, df, return_type="dataframe").values
weights = ec.maybe_exact_calibrate(covariates=x[t == 0],
target_covariates=x[t == 1])[0]
return (np.mean(y[t == 1]) - np.mean(y[t == 0]),
np.mean(y[t == 1]) - np.sum(y[t == 0] * weights))
def show_estimates(estimates, col='weighted'):
ax = estimates[col].hist(bins=20, alpha=0.8, edgecolor='none')
plt.axvline(estimates[col].mean(), linestyle='dashed', color='red')
print('bias of {} is {}'.format(col, estimates[col].mean()))
print('rmse of {} is {}'.format(col, np.sqrt(np.mean((estimates[col] - 0.) ** 2))))
estimates = pd.DataFrame(
    [estimate_att("-1 + z1 + z2 + z3 + z4") for i in range(1000)])
estimates.columns = ['raw', 'weighted']
Explanation: Correctly Specified Model
We run the simulation 1000 times under correctly specified logistic propensity score and linear outcome regression. For each simulation we estimate the average treatment effect on the treated (ATT) using empirical calibration.
End of explanation
show_estimates(estimates,'raw')
show_estimates(estimates,'weighted')
Explanation: The mean of the 1000 ATT estimates after weight correction is very close to the true zero ATT.
End of explanation
estimates_miss = pd.DataFrame([estimate_att("-1 + x1 + x2 + x3 + x4") for i in range(1000)])
estimates_miss.columns = ['raw', 'weighted']
show_estimates(estimates_miss)
Explanation: Misspecified Model
If the transformed covariates are observed in place of the true covariates, both the propensity score model and outcome regression model become misspecified. We run 1000 simulations and for each simulation estimate the ATT by balancing the transformed covariates. The causal estimate is no longer unbiased.
End of explanation
formula = ("-1 + (x1 + x2 + x3 + x4)**2 + I(np.log(x1)) + I(np.log(x2)) + "
"I(np.log(x3)) + I(np.log(x4))")
estimates_expanded = pd.DataFrame([estimate_att(formula) for i in range(1000)])
estimates_expanded.columns = ['raw', 'weighted']
show_estimates(estimates_expanded)
Explanation: One reasonable strategy is to expand the set of balancing covariates and hope it will make the model less "misspecified". If we additionally balance the two-way interactions and the log transformation, the bias indeed reduces.
End of explanation
formula = "-1 + z1 + z2 + z3 + z4 + x1 + x2 + x3 + x4"
estimates_redundant = pd.DataFrame([estimate_att(formula) for i in range(1000)])
estimates_redundant.columns = ['raw', 'weighted']
show_estimates(estimates_redundant)
Explanation: If the model was misspecified in the sense that more covariates are included than necessary, the causal estimate remains unbiased.
End of explanation
np.random.seed(123)
simulation = ec.data.kang_schafer.Simulation(size=2000)
x1 = simulation.covariates[simulation.treatment == 1]
x0 = simulation.covariates[simulation.treatment == 0]
pd.Series(timeit.repeat(
'ec.maybe_exact_calibrate(x0, x1)',
setup='from __main__ import x1, x0, ec',
repeat=100,
number=1)).describe()
np.random.seed(123)
simulation = ec.data.kang_schafer.Simulation(size=20000)
x1 = simulation.covariates[simulation.treatment == 1]
x0 = simulation.covariates[simulation.treatment == 0]
pd.Series(timeit.repeat(
'ec.maybe_exact_calibrate(x0, x1)',
setup='from __main__ import x1, x0, ec',
repeat=100,
number=1)).describe()
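# A rough sketch of the one-million-control-unit case described below
# (illustrative: size=2000000 yields roughly one million control units, and
# the repeat count is reduced because each run takes on the order of a second).
np.random.seed(123)
simulation = ec.data.kang_schafer.Simulation(size=2000000)
x1 = simulation.covariates[simulation.treatment == 1]
x0 = simulation.covariates[simulation.treatment == 0]
pd.Series(timeit.repeat(
    'ec.maybe_exact_calibrate(x0, x1)',
    setup='from __main__ import x1, x0, ec',
    repeat=5,
    number=1)).describe()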
Explanation: Benchmark Execution Time
The execution time is generally linear with respect to the sample size. With 1 million control units, it takes around 1 second to find the weights.
End of explanation |
15,560 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DL Indaba Practical 3
Convolutional Neural Networks
Developed by Stephan Gouws, Avishkar Bhoopchand & Ulrich Paquet.
Introduction
In this practical we will cover the basics of convolutional neural networks, or "ConvNets". ConvNets were invented in the late 1980s/early 1990s, and have had tremendous success especially with vision (although they have also been used to great success in speech processing pipelines, and more recently, for machine translation).
We will work to build our mathematical and algorithmic intuition around the "convolution" operation. Then we will construct a deep feedforward convolutional model with which we can classify MNIST digits with over 99% accuracy (our best model yet!).
Learning objectives
Understand
Step1: ConvNet Architectures
When modelling an image using a regular feed-forward network, we quickly find that the number of model parameters grows exponentially. For example, our 2 layer MNIST feed-forward model from the previous practical already had over 600 000 parameters!
QUESTION
Step3: Let's test our layer on some dummy data
Step4: The derivative of a convolutional layer
Assume we have some final loss function L and by following the steps of backpropagation, have computed the derivative of this loss up to the output of our convolutional layer ($\frac{\partial L}{\partial O}$ or dout in the code below). In order to update the parameters of our layer, we require the derivative of L with respect to the weights and biases of the convolutional layer ($\frac{\partial L}{\partial W}$ and $\frac{\partial L}{\partial b}$). We also require the derivative with respect to the inputs of the layer ($\frac{\partial L}{\partial X}$) in order to propagate the error back to the preceding layers. Unfortunately calculating these derivatives can be a little fiddly due to having to keep track of multiple indices. The calculus is very basic though!
We start with the easiest one, $\frac{\partial L}{\partial b}$
Step6: Finally, we test the backward pass using numerical gradient checking. This compares the gradients generated by our backward function, with a numerical approximation obtained by treating our forward function as a "black box". This gradient checking is a very important testing tool when building your own neural network components or back-propagation system!
Step7: (Max) Pooling Layers
The purpose of a pooling layer is to reduce the spatial size of the representation and therefore control the number of parameters in the network. A pooling layer has no trainable parameters itself. It applies some 2D aggregation operation (usually a MAX, but others like average may also be used) to regions of the input volume. This is done independently for each depth dimension of the input. For example, a 2x2 max pooling operation with a stride of 2 downsamples every depth slice of the input by 2 along both the width and height.
The output volume of a pooling layer always has the same depth as the input volume. The width and height are calculated as follows
Step8: Now we can test the max_pool_forward function.
Step9: The derivative of a max-pool layer
The max-pooling layer has no learnable parameters of its own, so the only derivative of concern is that of the output of the layer with respect to the input for the purpose of backpropagating the error through the layer. This is easy to calculate as it only requires that we recalculate (or remember) which value in each block was the maximum. Since each output depends only on one value in some FxF block of the input, the gradients of the max-pool layer will be sparse.
Let's implement the backward pass in Numpy
Step10: And we again use numerical gradient checking to ensure that the backward function is correct
Step12: Optimisation - an exercise for later
Our implementations of convolutional and max-pool layers were based on loops, which are easy to understand, but are slow and inefficient compared to a vectorised implementation exploiting matrix multiplications. The vectorised form is how these layers are actually implemented in practice and are also required to make efficient use of GPUs in frameworks that support it, like TensorFlow. As an exercise, once you fully understand how the layers work, try to rewrite the code such that the convolution and max-pool operations are each implemented in a single matrix multiplication.
(HINT
Step14: Lets also bring in the training and plotting routines we developed in Prac 2
Step17: Now define some helper functions to build a convolutional layer and a linear layer (this is mostly the same as the previous practical, but we use slightly different weight and bias initializations which seem to work better with ConvNets on MNIST). In terms of regularisation, we use dropout rather than the L2 regularisation from the previous practical.
Dropout
Dropout is a neural-network regularisation technique that is applied during model training. At each training step, a proportion (1-keep_prob) of neurons the network are "dropped out" (their inputs and outputs are set to 0, effectively ignoring their contribution) while the remaining keep_prob fraction are "let through" (Nit
Step18: Now build the ConvNetClassifier, we make the number of convolutional layers and the filter sizes parameters so that you can easily experiment with different variations.
Step19: Finally a function that wraps up the training and evaluation of the model (the same as prac 2)
Step20: Now train and evaluate the ConvNet model on MNIST.
NOTE
Step21: The ConvNet classifier takes quite a long time to train, but gives a very respectable test accuracy of over 99%!
What has the network learned?
Remember that a filter in a convolutional layer is used to multiply blocks in the input volume. Let's plot the weights of the first layer of the trained model. Darker pixels indicate that the particular filter reacts more strongly to those regions of the input blocks. Notice how each filter has learned to react differently to different patterns in the input. It's tricky to see in our tiny filters, but those in lower layers of ConvNets, particularly when applied to natural images, often function as simple Gabor filters or edge detectors, while filters in higher layers often react to more abstract shapes and concepts.
Step22: Aside
Step23: Batch Normalisation
Batch normalisation (batch norm) is a more recent (2015) and arguably more powerful normalisation technique than dropout. It is based on the observation that machine learning models often perform better and train faster when their inputs are normalised to have 0 mean and unit variance. In multi-layered deep neural networks, the output of one layer becomes the input to the next. The insight behind batch norm is that each of these layer inputs can also be normalised. Batch norm has been shown to have numerous benefits including | Python Code:
# Import TensorFlow and some other libraries we'll be using.
import datetime
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# Import Matplotlib and set some defaults
from matplotlib import pyplot as plt
plt.ioff()
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Download the MNIST dataset onto the local machine.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Explanation: DL Indaba Practical 3
Convolutional Neural Networks
Developed by Stephan Gouws, Avishkar Bhoopchand & Ulrich Paquet.
Introduction
In this practical we will cover the basics of convolutional neural networks, or "ConvNets". ConvNets were invented in the late 1980s/early 1990s, and have had tremendous success especially with vision (although they have also been used to great success in speech processing pipelines, and more recently, for machine translation).
We will work to build our mathematical and algorithmic intuition around the "convolution" operation. Then we will construct a deep feedforward convolutional model with which we can classify MNIST digits with over 99% accuracy (our best model yet!).
Learning objectives
Understand:
* what a convolutional layer is & how it's different from a fully-connected layer (including the assumptions and trade-offs that are being made),
* how and when to use convolutional layers (relate it to the assumptions the model makes),
* how backpropagation works through convolutional layers.
What is expected of you:
Read through the explanations and make sure you understand how to implement the convolutional forwards pass.
Do the same for the backwards phase.
Train a small model on MNIST.
At this point, flag a tutor and they will give you access to a GPU instance. Now use the hyperparameters provided to train a state-of-the-art ConvNet model on MNIST.
End of explanation
## IMPLEMENT-ME: ...
# Conv layer forward pass
def convolutional_forward(X, W, b, filter_size, depth, stride, padding):
# X has size [batch_size, input_width, input_height, input_depth]
# W has shape [filter_size, filter_size, input_depth, depth]
# b has shape [depth]
batch_size, input_width, input_height, input_depth = X.shape
# Check that the weights are of the expected shape
assert W.shape == (filter_size, filter_size, input_depth, depth)
# QUESTION: Calculate the width and height of the output
# output_width = ...
# output_height = ...
#
# ANSWER:
output_width = (input_width - filter_size + 2*padding) / stride + 1
output_height = (input_height - filter_size + 2*padding) / stride + 1
####
# Apply padding to the width and height dimensions of the input
X_padded = np.pad(X, ((0,0), (padding, padding), (padding, padding), (0,0)), 'constant')
# Allocate the output Tensor
out = np.zeros((batch_size, output_width, output_height, depth))
# NOTE: There is a more efficient way of doing a convolution, but this most
# clearly illustrates the idea.
for i in range(output_width): # Loop over the output width dimension
for j in range(output_height): # Loop over the output height dimension
# Select the current block in the input that the filter will be applied to
block_width_start = i * stride
block_width_end = block_width_start + filter_size
block_height_start = j * stride
block_height_end = block_height_start + filter_size
block = X_padded[:, block_width_start:block_width_end, block_height_start:block_height_end, :]
for d in range(depth): # Loop over the filters in the layer (output depth dimension)
filter_weights = W[:, :, :, d]
# QUESTION: Apply the filter to the block over all inputs in the batch
# out[:, w, h, f] = ...
# HINT: Have a look at numpy's sum function and pay attention to the axis parameter
# ANSWER:
out[:, i, j, d] = np.sum(block * filter_weights, axis=(1,2,3)) + b[d]
###
return out
Explanation: ConvNet Architectures
When modelling an image using a regular feed-forward network, we quickly find that the number of model parameters grows exponentially. For example, our 2 layer MNIST feed-forward model from the previous practical already had over 600 000 parameters!
QUESTION: How many parameters would a feed-forward network require if it had 2 hidden layers with 512 and 256 neurons respectively, an output size of 10 and an input image of shape [32, 32, 3]?
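Once you have an answer, a quick sketch like the following can be used to check it (the layer sizes are the ones given in the question; each dense layer contributes weights plus biases):
```
input_size = 32 * 32 * 3
layer_sizes = [input_size, 512, 256, 10]
num_params = sum(m * n + n for m, n in zip(layer_sizes[:-1], layer_sizes[1:]))
print(num_params)
```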
ConvNets address this model parameter issue by exploiting structure in the inputs to the network (in particular, by making the assumption that the input is a 3D volume, which applies to images for example). The two key differences between a ConvNet and a Feed-forward network are:
* ConvNets have neurons that are arranged in 3 dimensions: width, height, depth (depth here means the depth of an activation volume, not the depth of a deep neural network!)
* The neurons in each layer are only connected to a small region of the layer before it.
QUESTION: Unfortunately there is no such thing as a free lunch. What do you think is the trade-off a ConvNet makes for the reduction in memory required by fewer parameters?
Generally a ConvNet architecture is made up of different types of layers, the most common being convolutional layers, pooling layers and fully connected layers that we encountered in the last practical.
ConvNet architectures were key to the tremendous success of deep learning in machine vision. In particular, the first deep learning model to win the ImageNet competition in 2012 was called AlexNet (after Alex Krizhevsky, one of its inventors). It had 5 convolutional layers followed by 3 fully connected layers. Later winners included GoogLeNet and ResNet which also used batch normalisation, a technique we will see in this practical. If you're curious, have a look at this link for a great summary of different ConvNet archiectures.
We will start by implementing the forward and backward passes of these layers in Numpy to get a good sense for how they work. Afterwards, we will implement a full ConvNet classifier in TensorFlow that we will apply to the MNIST dataset. This model should give us the best test accuracy we've seen so far!
Convolutional Layers
A convolutional layer maps an input volume* to an output volume through a set of learnable filters, which make up the parameters of the layer. Every filter is small spatially (along width and height), but extends through the full depth of the input volume. (Eg: A filter in the first layer of a ConvNet might have size [5, 5, 3]). During the forward pass, we convolve ("slide") each filter across the width and height of the input volume and compute dot products between the entries of the filter and the input at any position. As we slide the filter over the width and height of the input volume we will produce a 2-dimensional activation map that gives the responses of that filter at every spatial position. Each convolutional layer will have a set of filters, and each of them will produce a separate 2-dimensional activation map. We will stack these activation maps along the depth dimension to produce the output volume.
The following diagram and animation illustrates these ideas, make sure you understand them!
* An input volume refers to a 3 dimensional input. For example, a colour image is often represented as a 3 dimensional tensor of shape [width, height, channels] where channels refers to the colour values. A common colour encoding is RGB which has a value between 0 and 256 for each of the red, green and blue channels.
What size is the output volume?
The size of the output volume is controlled by the hyperparameters of the convolutional layer:
* Filter Size (F) defines the width and height of the filters in the layer. Note that filters always have the same depth as the inputs to the layer.
Depth (D) of the layer defines the number of filters in the layer.
* Stride (S) defines the number of pixels by which we move the filter when "sliding" it along the input volume. Typically this value would be 1, but values of 2 and 3 are also sometimes used.
* Padding* (P) refers to the number of 0 pixels we add to the input volume along the width and height dimensions. This parameter is useful in that it gives us more control over the desired size of the output volume and in fact is often used to ensure that the output volume has the same width and height as the input volume.
If the width of the input volume is $w$, the width of the output volume will be $(w−F+2P)/S+1$. (QUESTION: Why?). Similarly for the height ($h$).
QUESTION: What is the final 3D shape of the output volume?
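For example, plugging some illustrative MNIST-like numbers into the formula for a single convolutional layer:
```
w = h = 28                  # input width and height
F, P, S, D = 5, 2, 1, 32    # filter size, padding, stride, number of filters
output_width = (w - F + 2 * P) / S + 1    # = 28
output_height = (h - F + 2 * P) / S + 1   # = 28
# so this layer produces a 28 x 28 x 32 output volume
```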
Implementing the forward pass
The parameters of a convolutional layer, with padded input $X^{pad}$, are stored in a weight tensor, $W$ of shape $[F, F, I, D]$ and bias vector $b$ of shape $[D]$ where I is the depth of $X$.
For each filter $d \in [0,D)$ in our convolutional layer, the value of the output volume ($O$) at position $(i, j, d)$ is given by:
\begin{align}
O_{ij}^d = b_{d} + \sum_{a=0}^{F-1} \sum_{b=0}^{F-1} \sum_{c=0}^{I-1} W_{a, b, c, d} X^{pad}_{i+a, j+b, c} && (1)
\end{align}
Don't be put off by all the notation, it's actually quite simple, see if you can tie this formula to the explanation of the convolutional layer and diagrams you saw earlier.
QUESTION: The formula above assumed a stride size of 1 for simplicity. Can you modify the formula to work with an arbitrary stride?
Now let's implement the forward pass of a convolutional layer in Numpy:
End of explanation
### Hyperparameters
batch_size = 2
input_width = 4
input_height = 4
input_depth = 3
filter_size = 4
output_depth = 3
stride = 2
padding = 1
###
# Create a helper function that calculates the relative error between two arrays
def relative_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Define the shapes of the input and weights
input_shape = (batch_size, input_width, input_height, input_depth)
w_shape = (filter_size, filter_size, input_depth, output_depth)
# Create the dummy input
X = np.linspace(-0.1, 0.5, num=np.prod(input_shape)).reshape(input_shape)
# Create the weights and biases
W = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=output_depth)
# Get the output of the convolutional layer
out = convolutional_forward(X, W, b, filter_size, output_depth, stride, padding)
correct_out = np.array(
[[[[8.72013250e-02, 2.37300699e-01, 3.87400074e-01],
[1.34245123e-01, 2.86133235e-01, 4.38021347e-01]],
[[8.21928598e-02, 2.39447184e-01, 3.96701509e-01],
[4.47552448e-04, 1.59490615e-01, 3.18533677e-01]]],
[[[1.11179021e+00, 1.29050939e+00, 1.46922856e+00],
[9.01255797e-01, 1.08176371e+00, 1.26227162e+00]],
[[7.64688995e-02, 2.62343025e-01, 4.48217151e-01],
[-2.62854619e-01, -7.51917556e-02, 1.12471108e-01]]]])
# Compare your output to the "correct" ones
# The difference should be around 2e-8 (or lower)
print 'Testing convolutional_forward'
diff = relative_error(out, correct_out)
if diff <= 2e-8:
print 'PASSED'
else:
print 'The difference of %s is too high, try again' % diff
Explanation: Let's test our layer on some dummy data:
End of explanation
## IMPLEMENT-ME: ...
def convolutional_backward(dout, X, W, b, filter_size, depth, stride, padding):
batch_size, input_width, input_height, input_depth = X.shape
# Apply padding to the width and height dimensions of the input
X_padded = np.pad(X, ((0,0), (padding, padding), (padding, padding), (0,0)), 'constant')
# Calculate the width and height of the forward pass output
output_width = (input_width - filter_size + 2*padding) / stride + 1
output_height = (input_height - filter_size + 2*padding) / stride + 1
# Allocate output arrays
# QUESTION: What is the shape of dx? dw? db?
# ANSWER: ...
dx_padded = np.zeros_like(X_padded)
dw = np.zeros_like(W)
db = np.zeros_like(b)
# QUESTION: Calculate db, the derivative of the final loss with respect to the bias term
# HINT: Have a look at the axis parameter of the np.sum function.
db = np.sum(dout, axis = (0, 1, 2))
for i in range(output_width):
for j in range(output_height):
# Select the current block in the input that the filter will be applied to
block_width_start = i*stride
block_width_end = block_width_start+filter_size
block_height_start = j*stride
block_height_end = block_height_start + filter_size
block = X_padded[:, block_width_start:block_width_end, block_height_start:block_height_end, :]
for d in range(depth):
# QUESTION: Calculate dw[:,:,:,f], the derivative of the loss with respect to the weight parameters of the f'th filter.
# HINT: You can do this in a loop if you prefer, or use np.sum and "None" indexing to get your result to the correct
# shape to assign to dw[:,:,:,f], see (https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#numpy.newaxis)
dw[:,:,:,d] += np.sum(block*(dout[:,i,j,d])[:,None,None,None], axis=0)
dx_padded[:,block_width_start:block_width_end, block_height_start:block_height_end, :] += np.einsum('ij,klmj->iklm', dout[:,i,j,:], W)
# Now we remove the padding to arrive at dx
dx = dx_padded[:,padding:-padding, padding:-padding, :]
return dx, dw, db
Explanation: The derivative of a convolutional layer
Assume we have some final loss function L and by following the steps of backpropagation, have computed the derivative of this loss up to the output of our convolutional layer ($\frac{\partial L}{\partial O}$ or dout in the code below). In order to update the parameters of our layer, we require the derivative of L with respect to the weights and biases of the convolutional layer ($\frac{\partial L}{\partial W}$ and $\frac{\partial L}{\partial b}$). We also require the derivative with respect to the inputs of the layer ($\frac{\partial L}{\partial X}$) in order to propagate the error back to the preceding layers. Unfortunately calculating these derivatives can be a little fiddly due to having to keep track of multiple indices. The calculus is very basic though!
We start with the easiest one, $\frac{\partial L}{\partial b}$:
\begin{align}
\frac{\partial L}{\partial b} &= \frac{\partial L}{\partial O} \frac{\partial O}{\partial b} && \vartriangleright \text{(Chain Rule)} \
&= \frac{\partial L}{\partial O} \mathbf{1} && \vartriangleright (\frac{\partial O}{\partial b} = 1 \text{ from equation } (1))
\end{align}
Now we tackle $\frac{\partial L}{\partial W}$:
\begin{align}
\frac{\partial L}{\partial W} &= \frac{\partial L}{\partial O} \frac{\partial O}{\partial W} && \vartriangleright \text{(Chain Rule)}
\end{align}
Let's calculate this derivative with respect to a single point $W_{abcd}$ in our weight tensor ($O_w$ and $O_h$ are the output width and height respectively):
\begin{align}
\frac{\partial L}{\partial W_{abcd}} &= \sum_{i=0}^{O_w-1} \sum_{j=0}^{O_h-1} \frac{\partial L}{\partial O_{ij}^d} \frac{\partial O_{ij}^d}{\partial W_{abcd}}
\end{align}
QUESTION: Why do we sum over the outputs here? HINT: Think about how many times a particular weight gets used.
Now, looking at equation $(1)$, we can easily calculate $\frac{\partial O_{ij}^d}{\partial W_{abcd}}$ as:
\begin{align}
\frac{\partial O_{ij}^d}{\partial W_{abcd}} &= X^{pad}_{i+a, j+b, c}
\end{align}
Which gives a final result of:
\begin{align}
\frac{\partial L}{\partial W_{abcd}} &= \sum_{i=0}^{O_w-1} \sum_{j=0}^{O_h-1} \frac{\partial L}{\partial O_{ij}^d} X^{pad}_{i+a, j+b, c}
\end{align}
Finally, we need $\frac{\partial L}{\partial X}$, the derivative of the loss with respect to the input of the layer. This is sometimes also called a "delta". Remember, that before doing the convolution, we applied padding to the input $X$ to get $X^{pad}$. It's easier to calculate the derivative with respect to $X^{pad}$, which appears in our convolution equation, and then remove the padding later on to arrive at the delta. Unfortunately we need to introduce some more indexing for the individual components of $X^{pad}$:
\begin{align}
\frac{\partial L}{\partial X^{pad}_{mnc}} &= \sum_{i=0}^{O_w-1} \sum_{j=0}^{O_h-1} \sum_{d=0}^{D-1} W_{m-i, n-j, c, d} \frac{\partial L}{\partial O_{ij}^d}
\end{align}
Where do the indices $m-i$ and $n-j$ come from? Notice in equation $(1)$ that the padded input $X^{pad}_{i+a, j+b, c}$ is multiplied by the weight $W_{abcd}$. Now, when we index $X^{pad}$ with $m$ and $n$, setting $m=i+a$ and $n=j+b$ gives us $a=m-i$ and $b=n-j$ for the indices of $W$!
Phew! Spend a few minutes to understand these equations, particularly where the indices come from. Ask a tutor if you get stuck!
Note: Did you notice that the delta, $\frac{\partial L}{\partial X^{pad}_{mnc}}$, looks suspiciously like the convolutional forward equation with the inputs $X^{pad}$ replaced by $\frac{\partial L}{\partial O_{ij}^d}$ and different indexing into the weights? In fact the delta is exactly that: the forward convolution applied to the incoming derivative, with the filters flipped along the width and height axes.
Now let's implement this in Numpy:
End of explanation
def eval_numerical_gradient_array(f, x, df, h=1e-5):
    """Evaluate a numeric gradient for a function that accepts a numpy
    array and returns a numpy array."""
# QUESTION: Can you describe intuitively what this function is doing?
grad = np.zeros_like(x)
it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
while not it.finished:
ix = it.multi_index
oldval = x[ix]
x[ix] = oldval + h
pos = f(x).copy()
x[ix] = oldval - h
neg = f(x).copy()
x[ix] = oldval
grad[ix] = np.sum((pos - neg) * df) / (2 * h)
it.iternext()
return grad
np.random.seed(231)
# Normally, backpropagation will have calculated a derivative of the final loss with respect to
# the output of our layer. Since we're testing our layer in isolation here, we'll just pretend
# and use a random value
dout = np.random.randn(2, 2, 2, 3)
dx_num = eval_numerical_gradient_array(lambda x: convolutional_forward(X, W, b, filter_size, output_depth, stride, padding), X, dout)
dw_num = eval_numerical_gradient_array(lambda w: convolutional_forward(X, W, b, filter_size, output_depth, stride, padding), W, dout)
db_num = eval_numerical_gradient_array(lambda b: convolutional_forward(X, W, b, filter_size, output_depth, stride, padding), b, dout)
out = convolutional_forward(X, W, b, filter_size, output_depth, stride, padding)
dx, dw, db = convolutional_backward(dout, X, W, b, filter_size, output_depth, stride, padding)
# Your errors should be around 1e-8'
print('Testing conv_backward_naive function')
dx_diff = relative_error(dx, dx_num)
if dx_diff < 1e-8:
print 'dx check: PASSED'
else:
print 'The difference of %s on dx is too high, try again!' % dx_diff
dw_diff = relative_error(dw, dw_num)
if dw_diff < 1e-8:
print 'dw check: PASSED'
else:
print 'The difference of %s on dw is too high, try again!' % dw_diff
db_diff = relative_error(db, db_num)
if db_diff < 1e-8:
print 'db check: PASSED'
else:
print 'The difference of %s on db is too high, try again!' % db_diff
Explanation: Finally, we test the backward pass using numerical gradient checking. This compares the gradients generated by our backward function, with a numerical approximation obtained by treating our forward function as a "black box". This gradient checking is a very important testing tool when building your own neural network components or back-propagation system!
End of explanation
def max_pool_forward(X, pool_size, stride):
batch_size, input_width, input_height, input_depth = X.shape
# Calculate the output dimensions
output_width = (input_width - pool_size)/stride + 1
output_height = (input_height - pool_size)/stride + 1
# Allocate the output array
out = np.zeros((batch_size, output_width, output_height, input_depth))
# Select the current block in the input that the filter will be applied to
for w in range(output_width):
for h in range(output_height):
block_width_start = w*stride
block_width_end = block_width_start+pool_size
block_height_start = h*stride
block_height_end = block_height_start + pool_size
block = X[:, block_width_start:block_width_end, block_height_start:block_height_end, :]
## IMPLEMENT-ME CANDIDATE
out[:,w,h,:] = np.max(block, axis=(1,2))
return out
Explanation: (Max) Pooling Layers
The purpose of a pooling layer is to reduce the spatial size of the representation and therefore control the number of parameters in the network. A pooling layer has no trainable parameters itself. It applies some 2D aggregation operation (usually a MAX, but others like average may also be used) to regions of the input volume. This is done independently for each depth dimension of the input. For example, a 2x2 max pooling operation with a stride of 2 downsamples every depth slice of the input by 2 along both the width and height.
The output volume of a pooling layer always has the same depth as the input volume. The width and height are calculated as follows:
$(W−F)/S+1$, where W is the width/height of the input, F is the pooling size and S is the stride. For example, a 4x4 input pooled with a 2x2 window and a stride of 2 produces a 2x2 output.
Implementing the forward pass
We again implement this in Numpy:
End of explanation
### Hyperparameters
batch_size = 2
input_width = 4
input_height = 4
input_depth = 3
pool_size = 2
stride = 2
###
input_shape = (batch_size, input_width, input_height, input_depth)
X = np.linspace(-0.3, 0.4, num=np.prod(input_shape)).reshape(input_shape)
out = max_pool_forward(X, pool_size, stride)
correct_out = np.array([
[[[-0.18947368, -0.18210526, -0.17473684],
[-0.14526316, -0.13789474, -0.13052632]],
[[-0.01263158, -0.00526316, 0.00210526],
[0.03157895, 0.03894737, 0.04631579]]],
[[[0.16421053, 0.17157895, 0.17894737],
[0.20842105, 0.21578947, 0.22315789]],
[[0.34105263, 0.34842105, 0.35578947],
[0.38526316, 0.39263158, 0.4]]]])
# Compare the output. The difference should be less than 1e-6.
print('Testing max_pool_forward function:')
diff = relative_error(out, correct_out)
if diff < 1e-6:
print 'PASSED'
else:
print 'The difference of %s is too high, try again!' % diff
Explanation: Now we can test the max_pool_forward function.
End of explanation
def max_pool_backward(dout, X, max_pool_output, pool_size, stride):
batch_size, input_width, input_height, input_depth = X.shape
# Calculate the output dimensions
output_width = (input_width - pool_size)/stride + 1
output_height = (input_height - pool_size)/stride + 1
# QUESTION: What is the size of dx, the derivative with respect to x?
# Allocate an array to hold the derivative
dx = np.zeros_like(X)
for w in range(output_width):
for h in range(output_height):
# Which block in the input did the value at the forward pass output come from?
block_width_start = w*stride
block_width_end = block_width_start+pool_size
block_height_start = h*stride
block_height_end = block_height_start + pool_size
block = X[:, block_width_start:block_width_end, block_height_start:block_height_end, :]
# What was the maximum value
max_val = max_pool_output[:, w, h, :]
# Which values in the input block resulted in the output?
responsible_values = block == max_val[:, None, None, :]
# Add the contribution of the current block to the gradient
dx[:,block_width_start:block_width_end,block_height_start:block_height_end, :] += responsible_values * (dout[:,w,h,:])[:,None,None,:]
return dx
Explanation: The derivative of a max-pool layer
The max-pooling layer has no learnable parameters of its own, so the only derivative of concern is that of the output of the layer with respect to the input for the purpose of backpropagating the error through the layer. This is easy to calculate as it only requires that we recalculate (or remember) which value in each block was the maximum. Since each output depends only on one value in some FxF block of the input, the gradients of the max-pool layer will be sparse.
Let's implement the backward pass in Numpy:
End of explanation
# Define a hypothetical derivative of the loss function with respect to the output of the max-pooling layer.
dout = np.random.randn(batch_size, pool_size, pool_size, input_depth)
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward(x, pool_size, stride), X, dout)
out = max_pool_forward(X, pool_size, stride)
dx = max_pool_backward(dout, X, out, pool_size, stride)
# Your error should be less than 1e-12
print('Testing max_pool_backward function:')
diff = relative_error(dx, dx_num)
if diff < 1e-12:
print 'PASSED'
else:
print 'The diff of %s is too large, try again!' % diff
Explanation: And we again use numerical gradient checking to ensure that the backward function is correct:
End of explanation
class BaseSoftmaxClassifier(object):
def __init__(self, input_size, output_size):
# Define the input placeholders. The "None" dimension means that the
# placeholder can take any number of images as the batch size.
self.x = tf.placeholder(tf.float32, [None, input_size])
self.y = tf.placeholder(tf.float32, [None, output_size])
# We add an additional input placeholder for Dropout regularisation
self.keep_prob = tf.placeholder(tf.float32, name="keep_prob")
    # And one for batch norm regularisation
self.is_training = tf.placeholder(tf.bool, name="is_training")
self.input_size = input_size
self.output_size = output_size
# You should override these in your build_model() function.
self.logits = None
self.predictions = None
self.loss = None
self.build_model()
def get_logits(self):
return self.logits
def build_model(self):
# OVERRIDE THIS FOR YOUR PARTICULAR MODEL.
raise NotImplementedError("Subclasses should implement this function!")
def compute_loss(self):
    """All models share the same softmax cross-entropy loss."""
assert self.logits is not None # Ensure that logits has been created!
data_loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=self.logits, labels=self.y))
return data_loss
def accuracy(self):
# Calculate accuracy.
assert self.predictions is not None # Ensure that pred has been created!
correct_prediction = tf.equal(tf.argmax(self.predictions, 1), tf.argmax(self.y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
return accuracy
Explanation: Optimisation - an exercise for later
Our implementations of convolutional and max-pool layers were based on loops, which are easy to understand, but are slow and inefficient compared to a vectorised implementation exploiting matrix multiplications. The vectorised form is how these layers are actually implemented in practice and is also required to make efficient use of GPUs in frameworks that support it, like TensorFlow. As an exercise, once you fully understand how the layers work, try to rewrite the code such that the convolution and max-pool operations are each implemented in a single matrix multiplication.
(HINT: Matlab has a function called "im2col" that rearranges blocks of an image into columns, you will need to achieve something similar using Numpy!)
Building a 2-layer ConvNet in TensorFlow
Now that we understand the convolutional and max pool layers, let's switch back to TensorFlow and build a 2-layer ConvNet classifier that we can apply to MNIST. We reuse essentially the same classifier framework we used in Practical 2 as well as the training and plotting functions, but we have added support for 2 new forms of regularisation, dropout and batch normalisation. These are explained in more detail later.
End of explanation
def train_tf_model(tf_model,
session, # The active session.
num_epochs, # Max epochs/iterations to train for.
batch_size=100, # Number of examples per batch.
keep_prob=1.0, # (1. - dropout) probability, none by default.
optimizer_fn=None, # TODO(sgouws): more correct to call this optimizer_obj
report_every=1, # Report training results every nr of epochs.
eval_every=1, # Evaluate on validation data every nr of epochs.
stop_early=True, # Use early stopping or not.
verbose=True):
# Get the (symbolic) model input, output, loss and accuracy.
x, y = tf_model.x, tf_model.y
loss = tf_model.loss
accuracy = tf_model.accuracy()
# Compute the gradient of the loss with respect to the model parameters
# and create an op that will perform one parameter update using the specific
# optimizer's update rule in the direction of the gradients.
if optimizer_fn is None:
optimizer_fn = tf.train.AdamOptimizer(1e-4)
# For batch normalisation: Ensure that the mean and variance tracking
# variables get updated at each training step
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
optimizer_step = optimizer_fn.minimize(loss)
# Get the op which, when executed, will initialize the variables.
init = tf.global_variables_initializer()
# Actually initialize the variables (run the op).
session.run(init)
# Save the training loss and accuracies on training and validation data.
train_costs = []
train_accs = []
val_costs = []
val_accs = []
mnist_train_data = mnist.train
prev_c_eval = 1000000
# Main training cycle.
for epoch in range(num_epochs):
avg_cost = 0.
avg_acc = 0.
total_batch = int(mnist.train.num_examples / batch_size)
# Loop over all batches.
for i in range(total_batch):
batch_x, batch_y = mnist_train_data.next_batch(batch_size)
# Run optimization op (backprop) and cost op (to get loss value),
# and compute the accuracy of the model.
feed_dict = {x: batch_x, y: batch_y, tf_model.keep_prob: keep_prob,
tf_model.is_training: True}
_, c, a = session.run(
[optimizer_step, loss, accuracy], feed_dict=feed_dict)
# Compute average loss/accuracy
avg_cost += c / total_batch
avg_acc += a / total_batch
train_costs.append((epoch, avg_cost))
train_accs.append((epoch, avg_acc))
# Display logs per epoch step
if epoch % report_every == 0 and verbose:
print "Epoch:", '%04d' % (epoch+1), "Training cost=", \
"{:.9f}".format(avg_cost)
if epoch % eval_every == 0:
val_x, val_y = mnist.validation.images, mnist.validation.labels
feed_dict = {x : val_x, y : val_y, tf_model.keep_prob: 1.0,
tf_model.is_training: False}
c_eval, a_eval = session.run([loss, accuracy], feed_dict=feed_dict)
if verbose:
print "Epoch:", '%04d' % (epoch+1), "Validation acc=", \
"{:.9f}".format(a_eval)
if c_eval >= prev_c_eval and stop_early:
print "Validation loss stopped improving, stopping training early after %d epochs!" % (epoch + 1)
break
prev_c_eval = c_eval
val_costs.append((epoch, c_eval))
val_accs.append((epoch, a_eval))
print "Optimization Finished!"
return train_costs, train_accs, val_costs, val_accs
# Helper functions to plot training progress.
def my_plot(list_of_tuples):
    """Take a list of (epoch, value) and split these into lists of
    epoch-only and value-only. Pass these to plot to make sure we
    line up the values at the correct time-steps."""
plt.plot(*zip(*list_of_tuples))
def plot_multi(values_lst, labels_lst, y_label, x_label='epoch'):
# Plot multiple curves.
assert len(values_lst) == len(labels_lst)
plt.subplot(2, 1, 2)
for v in values_lst:
my_plot(v)
plt.legend(labels_lst, loc='upper left')
plt.xlabel(x_label)
plt.ylabel(y_label)
plt.show()
Explanation: Let's also bring in the training and plotting routines we developed in Prac 2:
End of explanation
def _convolutional_layer(inputs, filter_size, output_depth):
    """Build a convolutional layer with `output_depth` square
    filters, each of size `filter_size` x `filter_size`."""
input_features = inputs.shape[3]
weights = tf.get_variable(
"conv_weights",
[filter_size, filter_size, input_features, output_depth],
dtype=tf.float32,
initializer=tf.truncated_normal_initializer(stddev=0.1))
## IMPLEMENT-ME CANDIDATE
conv = tf.nn.conv2d(inputs, weights, strides=[1, 1, 1, 1], padding='SAME')
return conv
def _dense_linear_layer(inputs, layer_name, input_size, output_size, weights_initializer):
    """Builds a layer that takes a batch of inputs of size `input_size` and returns
    a batch of outputs of size `output_size`.

    Args:
      inputs: A `Tensor` of shape [batch_size, input_size].
      layer_name: A string representing the name of the layer.
      input_size: The size of the inputs.
      output_size: The size of the outputs.

    Returns:
      outputs: The layer outputs, a `Tensor` of shape [batch_size, output_size].
    """
# Name scopes allow us to logically group together related variables.
# Setting reuse=False avoids accidental reuse of variables between different runs.
with tf.variable_scope(layer_name, reuse=False):
# Create the weights for the layer
layer_weights = tf.get_variable("weights",
shape=[input_size, output_size],
dtype=tf.float32,
initializer=weights_initializer)
# Create the biases for the layer
layer_bias = tf.get_variable("biases",
shape=[output_size],
dtype=tf.float32,
initializer=tf.constant_initializer(0.1))
outputs = tf.matmul(inputs, layer_weights) + layer_bias
return outputs
Explanation: Now define some helper functions to build a convolutional layer and a linear layer (this is mostly the same as the previous practical, but we use slightly different weight and bias initializations which seem to work better with ConvNets on MNIST). In terms of regularisation, we use dropout rather than the L2 regularisation from the previous practical.
Dropout
Dropout is a neural-network regularisation technique that is applied during model training. At each training step, a proportion (1-keep_prob) of neurons the network are "dropped out" (their inputs and outputs are set to 0, effectively ignoring their contribution) while the remaining keep_prob fraction are "let through" (Nit: they're actually rescaled by 1/keep_prob to ensure that the variance of the pre-activations at the next layer remains unchanged). This can be interpreted as there being actually $2^n$ different network architectures (where n is the number of neurons) while only one is being trained at each training step. At test time, we use the full network, where each neuron's contribution is weighted by keep_prob. This is effectively the average of all the network possibilities and therefore dropout can also be thought of as an ensemble technique.
In our ConvNet architecture, the majority of neurons occur in the fully connected layer between the convolutional layers and the output. It is therefore this fully connected layer that we are most concerned about overfitting and this is where we apply dropout.
End of explanation
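# A tiny, self-contained sketch of "inverted" dropout on a vector of
# activations, purely to make the keep_prob scaling above concrete. The model
# below simply relies on tf.nn.dropout, which performs this for us.
keep_prob_demo = 0.5
activations = np.array([0.2, 1.5, 0.3, 0.8, 1.1])
mask = (np.random.rand(activations.shape[0]) < keep_prob_demo) / keep_prob_demo
dropped = activations * mask  # dropped units are zeroed, survivors scaled by 1/keep_prob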
class ConvNetClassifier(BaseSoftmaxClassifier):
def __init__(self,
input_size, # The size of the input
output_size, # The size of the output
filter_sizes=[], # The number of filters to use per convolutional layer
output_depths=[], # The number of features to output per convolutional layer
hidden_linear_size=512, # The size of the hidden linear layer
use_batch_norm=False, # Flag indicating whether or not to use batch normalisation
linear_weights_initializer=tf.truncated_normal_initializer(stddev=0.1)):
assert len(filter_sizes) == len(output_depths)
self.filter_sizes = filter_sizes
self.output_depths = output_depths
self.linear_weights_initializer = linear_weights_initializer
self.use_batch_norm = use_batch_norm
self.hidden_linear_size = hidden_linear_size
super(ConvNetClassifier, self).__init__(input_size, output_size)
def build_model(self):
# Architecture: INPUT - {CONV - RELU - POOL}*N - FC
# Reshape the input to [batch_size, width, height, input_depth]
conv_input = tf.reshape(self.x, [-1, 28, 28, 1])
prev_inputs = conv_input
# Create the CONV-RELU-POOL layers:
for layer_number, (layer_filter_size, layer_features) in enumerate(
zip(self.filter_sizes, self.output_depths)):
with tf.variable_scope("layer_{}".format(layer_number), reuse=False):
# Create the convolution:
conv = _convolutional_layer(prev_inputs, layer_filter_size, layer_features)
# Apply batch normalisation, if required
if self.use_batch_norm:
conv = tf.contrib.layers.batch_norm(conv, center=True, scale=True,
is_training=self.is_training)
# Apply the RELU activation with a bias
bias = tf.get_variable("bias", [layer_features], dtype=tf.float32, initializer=tf.constant_initializer(0.1))
relu = tf.nn.relu(conv + bias)
# Apply max-pooling using patch-sizes of 2x2
pool = tf.nn.max_pool(relu, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# QUESTION: What is the shape of the pool tensor?
# ANSWER: ...
prev_inputs = pool
# QUESTION: What is the shape of prev_inputs now?
# We need to flatten the last (non-batch) dimensions of the convolutional
# output in order to pass it to a fully-connected layer:
flattened = tf.contrib.layers.flatten(prev_inputs)
# Create the fully-connected (linear) layer that maps the flattened inputs
# to `hidden_linear_size` hidden outputs
flat_size = flattened.shape[1]
fully_connected = _dense_linear_layer(
flattened, "fully_connected", flat_size, self.hidden_linear_size, self.linear_weights_initializer)
# Apply batch normalisation, if required
if self.use_batch_norm:
fully_connected = tf.contrib.layers.batch_norm(
fully_connected, center=True, scale=True, is_training=self.is_training)
fc_relu = tf.nn.relu(fully_connected)
fc_drop = tf.nn.dropout(fc_relu, self.keep_prob)
# Now we map the `hidden_linear_size` outputs to the `output_size` logits, one for each possible digit class
logits = _dense_linear_layer(
fc_drop, "logits", self.hidden_linear_size, self.output_size, self.linear_weights_initializer)
self.logits = logits
self.predictions = tf.nn.softmax(self.logits)
self.loss = self.compute_loss()
Explanation: Now build the ConvNetClassifier, we make the number of convolutional layers and the filter sizes parameters so that you can easily experiment with different variations.
End of explanation
def build_train_eval_and_plot(build_params, train_params, verbose=True):
tf.reset_default_graph()
m = ConvNetClassifier(**build_params)
with tf.Session() as sess:
# Train model on the MNIST dataset.
train_losses, train_accs, val_losses, val_accs = train_tf_model(
m,
sess,
verbose=verbose,
**train_params)
# Now evaluate it on the test set:
accuracy_op = m.accuracy() # Get the symbolic accuracy operation
# Calculate the accuracy using the test images and labels.
accuracy = accuracy_op.eval({m.x: mnist.test.images,
m.y: mnist.test.labels,
m.keep_prob: 1.0,
m.is_training: False})
if verbose:
print "Accuracy on test set:", accuracy
# Plot losses and accuracies.
plot_multi([train_losses, val_losses], ['train', 'val'], 'loss', 'epoch')
plot_multi([train_accs, val_accs], ['train', 'val'], 'accuracy', 'epoch')
ret = {'train_losses': train_losses, 'train_accs' : train_accs,
'val_losses' : val_losses, 'val_accs' : val_accs,
'test_acc' : accuracy}
# Evaluate the final convolutional weights
conv_variables = [v for v in tf.trainable_variables() if "conv_weights" in v.name]
conv_weights = sess.run(conv_variables)
return m, ret, conv_weights
Explanation: Finally a function that wraps up the training and evaluation of the model (the same as prac 2)
End of explanation
%%time
# Create a small ConvNet classifier with a single CONV-RELU-POOL layer, with
# 5x5 filters, 4 output features and a hidden linear layer size of 128.
model_params = {
'input_size': 784,
'output_size': 10,
'filter_sizes': [5],
'output_depths': [4],
'hidden_linear_size': 128,
'use_batch_norm': False
}
training_params = {
'keep_prob': 0.5,
'num_epochs': 5,
'batch_size': 50,
'stop_early': False,
}
trained_model, training_results, conv_weights = build_train_eval_and_plot(
model_params,
training_params,
verbose=True
)
Explanation: Now train and evaluate the ConvNet model on MNIST.
NOTE: Hopefully you answered the question in the first section about the tradeoffs a ConvNet makes with "extra computation"! Unfortuntaly the VMs we're using are pretty low-powered, so we will train a very small ConvNet just to check that it works. Once you've got this small ConvNet working, chat to a tutor to get access to a machine with a GPU and run with the following configuration to get to the promised 99%+ accuracy on MNIST!
```
model_params = {
'input_size': 784,
'output_size': 10,
'filter_sizes': [5, 5],
'output_depths': [32, 64],
'hidden_linear_size': 1024,
'use_batch_norm': False
}
training_params = {
'keep_prob': 0.5,
'num_epochs': 20,
'batch_size': 50,
'stop_early': False,
}
```
End of explanation
weights = conv_weights[0]
_, _, _, out_depth = weights.shape
grid_size = int(out_depth**0.5)
fig = plt.figure()
i = 1
for r in range(grid_size):
for c in range(grid_size):
ax = fig.add_subplot(grid_size, grid_size, i)
ax.imshow(weights[:, :, 0, r*grid_size+c], cmap="Greys")
i += 1
plt.show()
Explanation: The ConvNet classifier takes quite a long time to train, but gives a very respectable test accuracy of over 99%!
What has the network learned?
Remember that a filter in a convolutional layer is used to multiply blocks in the input volume. Let's plot the weights of the first layer of the trained model. Darker pixels indicate that the particular filter reacts more strongly to those regions of the input blocks. Notice how each filter has learned to react differently to different patterns in the input. It's tricky to see in our tiny filters, but those in lower layers of ConvNets, particularly when applied to natural images, often function as simple Gabor filters or edge detectors, while filters in higher layers often react to more abstract shapes and concepts.
End of explanation
%%time
# Create a ConvNet classifier with 2 CONV-RELU-POOL layers, with filter sizes of
# 5 and 5 and 32 and 64 output features.
model_params = {
'input_size': 784,
'output_size': 10,
'filter_sizes': [5, 5],
'output_depths': [32, 64],
'hidden_linear_size': 1024,
    'use_batch_norm': False,
'linear_weights_initializer': tf.random_normal_initializer()
}
training_params = {
'keep_prob': 0.5,
'num_epochs': 5,
'batch_size': 50,
'stop_early': False,
}
trained_model, training_results, _ = build_train_eval_and_plot(
model_params,
training_params,
verbose=True
)
Explanation: Aside: The Effect of Random Initialization - RUN THIS ON A GPU INSTANCE ONLY!
Initialization of model parameters matters! Here is a ConvNet with different, but seemingly sensible, initialization of the weights in the linear layer. Running this gives significantly worse results. Judging by the accuracy plot, it's possible that training this model long enough will get it to a simliar level as before, but it will take much longer. This shows that initialization of model parameters is a an important consideration, especially as models become more complex. In practice, there are a number of different initialization schemes to consider. In particular, Xavier tends to work well with ConvNets and is worth considering. We won't go into any details in this practical though.
End of explanation
%%time
## UNFORTUNATELY THIS WILL ALSO NOT WORK ON THE VM's, YOU'LL NEED TO GET A GPU INSTANCE TO RUN THIS!
# Create a ConvNet classifier with 2 CONV-RELU-POOL layers, with filter sizes of
# 5 and 5 and 32 and 64 output features.
model_params = {
'input_size': 784,
'output_size': 10,
'filter_sizes': [5, 5],
'output_depths': [32, 64],
'use_batch_norm': True,
}
training_params = {
'keep_prob': 1.0, # Switch off dropout
'num_epochs': 15,
'batch_size': 50,
'stop_early': False,
}
trained_model, training_results = build_train_eval_and_plot(
model_params,
training_params,
verbose=True
)
# QUESTION: Try experimenting with different architectures and hyperparameters and
# see how well you can classify MNIST digits!
Explanation: Batch Normalisation
Batch normalisation (batch norm) is a more recent (2015) and arguably more powerful normalisation technique than dropout. It is based on the observation that machine learning models often perform better and train faster when their inputs are normalised to have 0 mean and unit variance. In multi-layered deep neural networks, the output of one layer becomes the input to the next. The insight behind batch norm is that each of these layer inputs can also be normalised. Batch norm has been shown to have numerous benefits including:
* Networks tend to train faster
* Allows higher learning rates to be used (further improving training speed).
* Reduced sensitivity to weight initialisation.
* Makes certain activation functions feasible in deep networks (When inputs have very large (absolute) expected values, certain activation functions become saturated (For example, the output of sigmoid is always close to 1 for large inputs). Relu activations can also "die out" when the expected value of the input is a large negative value (why?). This results in wasted computation as these neurons become uninformative. Normalising the inputs to have 0 mean keeps these activation functions in the "sensible" parts of their domains.)
How does it work?
To normalise some inputs X, ideally we would like to set
$\hat X = \frac{X - E[X]}{\sqrt{VAR[X]}}$
but this requires knowledge of the population mean and variance statistics, which we don't know, at least during training. We therefore use the sample mean and sample variance of each batch encountered during training as unbiased estimates of these statistics. During testing, we use statistics gathered throughout training as better estimates of the population statistics. In addition to this, we would like the model to have some flexibility over the extent to which batch norm is applied, and this flexibility should be learned! In order to do this, we introduce two new trainable parameters, $\gamma$ and $\beta$ for each layer that batch norm is applied to. Suppose we have a batch of inputs to a layer, $B={x_1,...,x_m}$, we normalise these as follows:
$\mu_B = \frac{1}{m} \sum_{i=1}^{m} x_i$ (Batch mean)
${\sigma_B}^2 = \frac{1}{m} \sum_{i=1}^{m} (x_i - \mu_B)^2$ (Batch variance)
$\hat x_i= \frac{x_i - \mu_B}{\sqrt{{\sigma_B}^2}}$ (Normalised)
$y_i = \gamma \hat x_i + \beta$ (Scale and shift)
At test time, we normalise using the mean and variance computed over the entire training set:
$E[x] = E_B[\mu_B]$
$VAR[x] = \frac{m}{m-1}E_B[{\sigma_B}^2]$
$\hat x = \frac{x - E[x]}{\sqrt{VAR[x]}}$
$y = \gamma \hat x + \beta$
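To make the training-time equations concrete, here is a minimal NumPy sketch of the batch-norm forward pass (a small epsilon is added inside the square root for numerical stability, as most real implementations do):
```
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    # x: (batch_size, num_features); gamma, beta: (num_features,)
    mu = x.mean(axis=0)                    # batch mean
    var = x.var(axis=0)                    # batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalise
    return gamma * x_hat + beta            # scale and shift
```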
Implementation Details
Tracking the mean and variance over the training set can become a little fiddly. Many implementations also use a moving average of the batch mean and variance as estimates of the population mean and variance for use during testing. Luckily, TensorFlow provides batch norm out of the box in the form of the tf.contrib.layers.batch_norm function.
Since the behaviour of batch norm changes during training and testing, we need to pass a placeholder input to the function that indicates which phase we are in. Furthermore, the batch norm function uses variable updates to track the moving average mean and variance. These values are not used during training and so TensorFlow's graph execution logic will not naturally run these updates when you run a training step. In order to get around this, the batch_norm function adds these update ops to a graph collection that we can access in our training function. The following code, which you will see in the train_tf_model function retrieves these ops and then adds a control dependency to the optimiser step. This effectively tells TensorFlow that the update_ops must be run before the optimizer_step can be run, ensuring that the estimates are updated whenever we do a training step.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
optimizer_step = optimizer_fn.minimize(loss)
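For the phase placeholder mentioned above, the call site might look roughly like this (a sketch — h is a stand-in name for a layer's pre-activation tensor, and the center/scale choices are illustrative rather than the exact ones used in ConvNetClassifier):
is_training = tf.placeholder(tf.bool, name='is_training')
h = tf.contrib.layers.batch_norm(h, center=True, scale=True, is_training=is_training)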
Further choices to consider when using batch norm are where to apply it (some apply it immediately before each activation function, some after the activation function), whether to apply it to all layers, and whether or not to share the gamma and beta parameters over all layers or have separate values for each layer.
Have a look at the ConvNetClassifier class above to see what choices were made; try changing these and see what results you get! (See the TensorFlow documentation for a list of even more parameters you can experiment with.)
Now, finally, let's switch batch norm on and see how our ConvNetClassifier performs. (Note: we shouldn't expect it to necessarily perform better than dropout as we are already close to the limits of how well we can classify MNIST with our relatively small ConvNet!)
End of explanation |
15,561 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tidying Data
Tidying of data is required for many reasons including these
Step1: Working with Missing Data
Data is "missing" in pandas when it has a value of NaN (also seen as np.nan - the form from NumPy). The NaN value represents that in a particular Series that there is not a value specified for the particular index level.
In pandas, there are a number of reasons why a value can be NaN
Step2: This DataFrame object exhibits the following characteristics that will support most of the examples that follows
Step3: Selecting out or dropping missing data
Step4: When applied to a DataFrame object, dropna() will drop all rows from a DataFrame object that have at least one NaN value. If you want to drop only rows where all values are NaN, you can use the how="all" parameter.
Step5: Note that the .dropna() method returns a copy of the DataFrame object, and the data is dropped from that copy. If you want to drop the data in the actual DataFrame, use the inplace=True parameter.
How pandas handles NaN values in mathematical operations
The NaN values are handled differently in pandas than in NumPy. NumPy functions when encountering a NaN value, will return NaN. pandas functions will typically ignore the NaN values and continue processing the function as though the values were not part of the Series object.
More specifically the way that pandas handles NaN values is as follows
Step6: Filling in missing data
If you prefer to replace the NaN values with a specific value, instead of having them propagated or flat out ignored, you can use the .fillna() method. The following code fills the NaN values with 0
Step7: It is also possible to limit the number of times that the data will be filled using the limit parameter. Each time the NaN values are identified, pandas will fill the NaN values with the previous value up to the limit times in each group of NaN values.
Step8: Forward and Backward Filling of Missing Values
Gaps in data can be filled by propagating the non-NaN values forward or backward along a Series. To demonstrate this, the following example will "fill forward" the c4 column of DataFrame
Step9: You can also use the convenient functions pd.ffill() or pd.bfill()
Filling Using index labels
Step10: Interpolation of missing values
Both DataFrame and Series have an .interpolate() method that will, by default, perform a linear interpolation of missing values
Step11: The value of interpolation is calculated by taking the first value before and after any sequence of NaN values and then incrementally adding that value from the start and substituting NaN values. In this case, 2.0 and 1.0 are the surrounding values resulting in (2.0-1.0)/(5-1)=0.25 which is then added incrementally through all the NaN values.
The interpolation method also has the ability to specify a specific method of interpolation, one of the common methods is to use time-based interpolation.
Step12: The important thing to note is that the series is missing an entry for 2014-03-01. If we were expecting to interpolate daily values, there would be two values calculated one for 2014-02-01 and another for 2014-03-01 resulting in one more value in the numerator of the interpolation.
Step13: Interpolation can also be specified to calculate values relative to the index values when using numeric index labels.
Step14: Now the value calculated for NaN is interpolated using relative positioning based upon the labels in the index. The NaN value has a label of 1, which is one tenth of the way between 0 and 10 so the interpolated value will be
0 + (100-0)/10 or 10.
Handling Duplicate Data
Often it is considered best to err on the side of having duplicates instead of missing data, especially if the data is considered to be idempotent. However, duplicate data can increase the size of the dataset and if it is not idempotent, then it would not be appropriate to process the duplicates.
Step15: The default operation is to keep the first row of the duplicates. If you want to keep the last row of duplicates, you can use the take_last=True parameter.
Step16: Transforming Data
Transformation is required for following reasons
Step17: Replacing Values
Step18: If using .replace() on a DataFrame, it is possible to specify different replacement values for each column. This is performed by passing a Python dictionary to the .replace() method, where the keys of the dictionary represent the names of the columns where replacement is to occur and the values of the dictionary are values that you want to replace. The second parameter to the method is the value that will be replaced where any matches are found.
Step19: Replacing specific values in each of the columns is very convenient, as it provides a shorthand for what otherwise would require coding a loop through all the columns.
Step20: Applying Functions to Transform Data
pandas provides the ability to apply functions to individual items, entire columns, entire rows providing incredible flexibility in transformation.
Functions can be applied using the conveniently named .apply() method, which given a Python function, will iteratively call the function passing in each value from a Series, or each Series representing a DataFrame column, or a list of values representing each row in a DataFrame.
Step21: A common practice is to take result of an apply operation and add it as a new column of the DataFrame. This is convenient as you can add into the DataFrame the result of one or more successive calculations.
Step22: Important point to note is that pandas DataFrame is not a spreadsheet where cells are assigned formulas and can be recalculated when cells that are referenced by the formula change. If you desire this to happen, you will need to execute the formulas whenever the dependent data changes. On the flip side, this is more efficient than with spreadsheets as every little change does not cause a cascade of operations to occur.
The .apply() method will always apply to the provided function to all of the items or rows or columns. If you want to apply the function to a subset of these, then first perform a Boolean selection to filter all the items you do not want to process.
Step23: The .apply() method was always passed an entire row or column. If you desire to apply a function to every individual item in the DataFrame one by one then .applymap() is the method to use. | Python Code:
# import pandas, numpy and datetime
import numpy as np
import pandas as pd
import datetime
# set some pandas options for controlling output
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns',10)
pd.set_option('display.max_rows',10)
Explanation: Tidying Data
Tidying of data is required for many reasons including these:
The names of the variables are different from what you require
Missing data
Values are not in units that you require
Period of sampling of records is not what you need
Variables are categorical and you need quantitative values
There is noise in the data
Information is of an incorrect type
Data is organized around incorrect axes
Data is at the wrong level of normalization
Data is duplicated
Moving away from a list of problems with data that needs to be addressed, there are several characteristics of data that can be considered good, tidy and ready for analysis, which are as follows:
Each variable is in one column
Each observation of the variable is in a different row
There should be one table for each kind of variable
If there are multiple tables, they should be relatable
Qualitative and categorical variables have mappings to values useful for analysis
Setting up notebook
End of explanation
# create a DataFrame with 5 rows and 3 columns
df = pd.DataFrame(np.arange(0,15).reshape(5,3),index=['a','b','c','d','e'], columns=['col1','col2','col3'])
df
# add some columns and rows to the DataFrame
# column c4 with NaN values
df['c4'] = np.nan
# row 'f' with 15 through 18
df.loc['f'] = np.arange(15,19)
# row 'g' with all NaN
df.loc['g'] = np.nan
# column c5 with NaN's
df['c5'] = np.nan
# change value in col 'c4' row 'a'
df['c4']['a']=20
df
Explanation: Working with Missing Data
Data is "missing" in pandas when it has a value of NaN (also seen as np.nan - the form from NumPy). The NaN value represents that in a particular Series that there is not a value specified for the particular index level.
In pandas, there are a number of reasons why a value can be NaN:
Join of two sets of data does not have matched values
Data that you retrieved from an external source is incomplete
NaN is not known at a given point in time and will be filled in later
There is a data collection error retrieving a value, but the event must still be recorded in the index
Reindexing of data has resulted in an index that does not have a value
Shape of a data has changed, there are new additional rows or columns
End of explanation
# which items are NaN?
df.isnull()
# count the number of NaN values in each column
df.isnull().sum()
# total count of NaN values
df.isnull().sum().sum()
# number of non-NaN values in each column
df.count()
# and can be used for counting NaN values
(len(df)-df.count()).sum()
# which items are not null
df.notnull()
Explanation: This DataFrame object exhibits the following characteristics that will support most of the examples that follow:
One row consisting only of NaN values
One column consisting only of NaN values
Several rows and columns consisting of both numeric and NaN values
Determining NaN values in Series and DataFrame objects
End of explanation
# select the non-NaN items in column c4
df.c4[df.c4.notnull()]
# .dropna will also return non NaN values
# this gets all non NaN items in column c4
df.c4.dropna()
# dropna returns a copy with the values dropped
# the source DataFrame / column is not changed
df.c4
Explanation: Selecting out or dropping missing data
End of explanation
# on a dataframe this will drop entire rows
# where there is at least one NaN
# in this case, all the rows
df.dropna()
# using how='all', only rows that have all values
# as NaN will be dropped
df.dropna(how='all')
# flip to drop columns instead of rows
df.dropna(how='all',axis=1) # c5 column will be dropped
# make a copy of df
df2 = df.copy()
# replace two NaN cells with values
df2.loc['g'].col1=0
df2.loc['g'].col3=0
df2
# now drop columns with any NaN values
df2.dropna(how='any',axis=1)
# only drop columns with at least 5 NaN values
df.dropna(thresh=5,axis=1)
Explanation: When applied to a DataFrame object, dropna() will drop all rows from a DataFrame object that have at least one NaN value. If you want to drop only rows where all values are NaN, you can use the how="all" parameter.
End of explanation
# create a NumPy array with one NaN value
a = np.array([1,2,np.nan,3])
# create a Series from the array
s = pd.Series(a)
# the mean of each is different
a.mean(), s.mean()
# demonstrate sum,mean and cumsum handling of NaN
# get one column
s = df.c4
s.sum() # NaN values are treated as 0
s.mean() # NaN values are skipped, not treated as 0
# as 0 in the cumsum but NaN values preserved in result series
s.cumsum()
# in arithmetic, a NaN value will result in NaN
df.c4 + 1
Explanation: Note that the .dropna() method returns a copy of the DataFrame object, and the data is dropped from that copy. If you want to drop the data in the actual DataFrame, use the inplace=True parameter.
How pandas handles NaN values in mathematical operations
The NaN values are handled differently in pandas than in NumPy. NumPy functions when encountering a NaN value, will return NaN. pandas functions will typically ignore the NaN values and continue processing the function as though the values were not part of the Series object.
More specifically the way that pandas handles NaN values is as follows:
* Summing of data treats NaN as 0
* If all values are NaN, the result is NaN
* Methods like .cumsum() and .cumprod() ignore NaN values but preserve them in resulting arrays
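If you want pandas to propagate NaN the way NumPy does, most reductions accept a skipna flag, for example:
df.c4.mean()             # NaN values are skipped
df.c4.mean(skipna=False) # returns NaN, matching NumPy's behaviour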
End of explanation
# return a new DataFrame with NaN values filled with 0
filled = df.fillna(0)
filled
# having replaced NaN with 0 can make
# operations such as mean have different results
filled.mean()
Explanation: Filling in missing data
If you prefer to replace the NaN values with a specific value, instead of having them propagated or flat out ignored, you can use the .fillna() method. The following code fills the NaN values with 0:
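.fillna() also accepts a dictionary mapping column names to fill values, so different columns can be filled with different values, for example:
df.fillna({'c4': 0, 'c5': -1})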
End of explanation
# only fills the first two NaN values in each column with 0
df.fillna(0,limit=2)
Explanation: It is also possible to limit the number of values that will be filled using the limit parameter. When filling with a scalar value like this, pandas fills at most limit NaN entries along each column.
End of explanation
# extract the c4 column and fill NaNs forward
df.c4.fillna(method="ffill")
# perform a backward fill
df.c4.fillna(method="bfill")
Explanation: Forward and Backward Filling of Missing Values
Gaps in data can be filled by propagating the non-NaN values forward or backward along a Series. To demonstrate this, the following example will "fill forward" the c4 column of DataFrame:
End of explanation
# create a new series of values to be
# used to fill NaN values where the index label matches
fill_values = pd.Series([100,101,102], index=['a','e','g'])
fill_values
# using c4, fill using fill_values
# a, e and g will be filled with matching values
df.c4.fillna(fill_values)
# fill NaN values in each column with the
# mean of the values in that column
df.fillna(df.mean())
Explanation: You can also use the convenient .ffill() and .bfill() methods, which are shorthand for fillna(method='ffill') and fillna(method='bfill')
Filling Using index labels
End of explanation
# linear interpolate the NaN values from 1 through 2
s = pd.Series([1,np.nan,np.nan,np.nan,2])
s.interpolate()
Explanation: Interpolation of missing values
Both DataFrame and Series have an .interpolate() method that will, by default, perform a linear interpolation of missing values:
End of explanation
# create a time series, but missing one date in the Series
ts = pd.Series([1,np.nan,3],index=[datetime.datetime(2014,1,1),datetime.datetime(2014,2,1),datetime.datetime(2014,4,1)])
ts
ts.interpolate()
Explanation: The value of interpolation is calculated by taking the first value before and after any sequence of NaN values and then incrementally adding that value from the start and substituting NaN values. In this case, 2.0 and 1.0 are the surrounding values resulting in (2.0-1.0)/(5-1)=0.25 which is then added incrementally through all the NaN values.
The interpolation method also has the ability to specify a specific method of interpolation, one of the common methods is to use time-based interpolation.
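Other interpolation methods can be selected the same way; for example, nearest-value interpolation (this particular method requires SciPy to be installed):
s.interpolate(method='nearest')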
End of explanation
# this accounts for the fact that we don't have
# an entry for 2014-03-01
ts.interpolate(method="time")
Explanation: The important thing to note is that the series is missing an entry for 2014-03-01. If we were expecting to interpolate daily values, there would be two values calculated one for 2014-02-01 and another for 2014-03-01 resulting in one more value in the numerator of the interpolation.
End of explanation
# a series to demonstrate index label based interpolation
s = pd.Series([0,np.nan,100], index=[0,1,10])
s
# linear interpolate
s.interpolate()
# interpolate based upon the values in the index
s.interpolate(method="values")
Explanation: Interpolation can also be specified to calculate values relative to the index values when using numeric index labels.
End of explanation
# a DataFrame with lots of duplicate data
data = pd.DataFrame({'a':['x'] * 3 + ['y'] * 4,'b':[1,1,2,3,3,4,4]})
data
# reports which rows are duplicates based upon
# if the data in all columns was seen before
data.duplicated()
# drop duplicate rows retaining first row of the duplicates
data.drop_duplicates()
Explanation: Now the value calculated for NaN is interpolated using relative positioning based upon the labels in the index. The NaN value has a label of 1, which is one tenth of the way between 0 and 10 so the interpolated value will be
0 + (100-0)/10 or 10.
Handling Duplicate Data
Often it is considered best to err on the side of having duplicates instead of missing data, especially if the data is considered to be idempotent. However, duplicate data can increase the size of the dataset and if it is not idempotent, then it would not be appropriate to process the duplicates.
End of explanation
# drop duplicate rows keeping the last instance of any data
data.drop_duplicates(keep="last")
# add a column c with values 0..6
# this makes .duplicated() report no duplicate rows
data['c'] = range(7)
data.duplicated()
# but if we specify duplicates to be dropped in columns a & b
# they will be dropped
data.drop_duplicates(['a','b'])
Explanation: The default operation is to keep the first row of the duplicates. If you want to keep the last row of duplicates, you can use the keep="last" parameter (the older take_last=True parameter is deprecated).
End of explanation
# create two Series objects to demonstrate mapping
x = pd.Series({"one":1,"two":2,"three":3})
y = pd.Series({1:"a",2:"b",3:"c"})
x
y
# map values in x to values in y
x.map(y)
# three in x will not align / map to a value in y
x = pd.Series({"one":1,"two":2,"three":3})
y = pd.Series({1:"a",2:"b"})
x.map(y)
Explanation: Transforming Data
Transformation is required for the following reasons:
* Values are not in the correct units
* Values are qualitative and need to be converted to appropriate numeric values
* Extraneous data that either wastes memory and processing time or can affect results simply by being included
To address these situations we can take one or more of the following actions:
* Map values to other values using a table lookup process
* Explicitly replace certain values with other values
* Apply methods to transform the values based on an algorithm
* Simply remove extraneous columns and rows
Mapping
pandas provides a generic ability to map values using a lookup table using the .map() method. This method performs the mapping by matching the values of the outer Series with the index labels of the inner Series returning a new Series with the index labels of the outer Series but the values from the inner Series:
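.map() is not limited to Series lookups — it also accepts a plain dictionary or a function, for example:
x.map({1: "a", 2: "b", 3: "c"})   # dictionary lookup
x.map(lambda v: v * 10)           # apply a function to each value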
End of explanation
# create a Series to demonstrate replace
s = pd.Series([0.,1.,2.,3.,4.])
s
# replace all items with index label 2 with value 5
s.replace(2,5)
# replace all items with new values
s.replace([0,1,2,3,4],[4,3,2,1,0])
# replace using entries in a dictionary
s.replace({0:10,1:100})
Explanation: Replacing Values
End of explanation
# DataFrame with two columns
df = pd.DataFrame({'a':[0,1,2,3,4],'b':[5,6,7,8,9]})
df
# specify different replacement values for each column
df.replace({'a':1,'b':8}, 100)
Explanation: If using .replace() on a DataFrame, it is possible to specify different replacement values for each column. This is performed by passing a Python dictionary to the .replace() method, where the keys of the dictionary represent the names of the columns where replacement is to occur and the values of the dictionary are values that you want to replace. The second parameter to the method is the value that will be replaced where any matches are found.
End of explanation
# demonstrate replacement with pad method
# set first item to 10, to have a distinct replacement value
s[0] = 10
s
# replace items with index label 1,2,3 using fill from the
# most recent value prior to the specified labels (10)
s.replace([1,2,3],method='pad')
Explanation: Replacing specific values in each of the columns is very convenient, as it provides a shorthand for what otherwise would require coding a loop through all the columns.
End of explanation
# demonstrate applying a function to every item of a series
s = pd.Series(np.arange(0,5))
s.apply(lambda x: x * 2)
# demonstrate applying a sum on each column
df = pd.DataFrame(np.arange(12).reshape(4,3),columns=['a','b','c'])
df
# calculate cumulative sum of items in each column
df.apply(lambda col: col.sum())
# calculate the sum of items in each row
df.apply(lambda row: row.sum(),axis=1)
Explanation: Applying Functions to Transform Data
pandas provides the ability to apply functions to individual items, entire columns, entire rows providing incredible flexibility in transformation.
Functions can be applied using the conveniently named .apply() method, which given a Python function, will iteratively call the function passing in each value from a Series, or each Series representing a DataFrame column, or a list of values representing each row in a DataFrame.
End of explanation
# create a new column 'interim' with a * b
df['interim'] = df.apply(lambda r: r.a * r.b,axis=1)
df
# and now a 'result' column with 'interim' + 'c'
df['result'] = df.apply(lambda r: r.interim + r.c, axis=1)
df
# replace column a with the sum of columns a,b and c
df.a = df.a + df.b + df.c
df
Explanation: A common practice is to take the result of an apply operation and add it as a new column of the DataFrame. This is convenient as you can add into the DataFrame the result of one or more successive calculations.
End of explanation
# create a 3 X 5 dataframe
df = pd.DataFrame(np.arange(0,15).reshape(3,5))
df.loc[1,2]= np.nan
df
# demonstrate applying a function to only rows having
# a count of 0 NaN values
df.dropna().apply(lambda x:x.sum(), axis=1)
Explanation: An important point to note is that a pandas DataFrame is not a spreadsheet where cells are assigned formulas and can be recalculated when cells that are referenced by the formula change. If you desire this to happen, you will need to execute the formulas whenever the dependent data changes. On the flip side, this is more efficient than with spreadsheets as every little change does not cause a cascade of operations to occur.
The .apply() method will always apply the provided function to all of the items, rows, or columns. If you want to apply the function to a subset of these, then first perform a Boolean selection to filter out all the items you do not want to process.
End of explanation
# use applymap to format all items of the DataFrame
df.applymap(lambda x: '%.2f' % x)
Explanation: The .apply() method was always passed an entire row or column. If you desire to apply a function to every individual item in the DataFrame one by one then .applymap() is the method to use.
End of explanation |
15,562 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Face Recognition for the Happy House
Welcome to the first assignment of week 4! Here you will build a face recognition system. Many of the ideas presented here are from FaceNet. In lecture, we also talked about DeepFace.
Face recognition problems commonly fall into two categories
Step1: 0 - Naive Face Verification
In Face Verification, you're given two images and you have to tell if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images are less than a chosen threshold, it may be the same person!
<img src="images/pixel_comparison.png" style="width
Step3: Expected Output
<table>
<center>
Total Params
Step4: Expected Output
Step5: Here're some examples of distances between the encodings between three individuals
Step7: Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.
Exercise
Step8: Younes is trying to enter the Happy House and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture
Step9: Expected Output
Step11: Expected Output
Step12: Younes is at the front-door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_it_is() algorithm identifies Younes. | Python Code:
from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks_v2 import *
%matplotlib inline
%load_ext autoreload
%autoreload 2
np.set_printoptions(threshold=np.nan)
Explanation: Face Recognition for the Happy House
Welcome to the first assignment of week 4! Here you will build a face recognition system. Many of the ideas presented here are from FaceNet. In lecture, we also talked about DeepFace.
Face recognition problems commonly fall into two categories:
Face Verification - "is this the claimed person?". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem.
Face Recognition - "who is this person?". For example, the video lecture showed a face recognition video (https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem.
FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person.
In this assignment, you will:
- Implement the triplet loss function
- Use a pretrained model to map face images into 128-dimensional encodings
- Use these encodings to perform face verification and face recognition
In this exercise, we will be using a pre-trained model which represents ConvNet activations using a "channels first" convention, as opposed to the "channels last" convention used in lecture and previous programming assignments. In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community.
Let's load the required packages.
End of explanation
FRmodel = faceRecoModel(input_shape=(3, 96, 96))
print("Total Params:", FRmodel.count_params())
Explanation: 0 - Naive Face Verification
In Face Verification, you're given two images and you have to tell if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images are less than a chosen threshold, it may be the same person!
<img src="images/pixel_comparison.png" style="width:380px;height:150px;">
<caption><center> <u> <font color='purple'> Figure 1 </u></center></caption>
Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on.
You'll see that rather than using the raw image, you can learn an encoding $f(img)$ so that element-wise comparisons of this encoding gives more accurate judgements as to whether two pictures are of the same person.
1 - Encoding face images into a 128-dimensional vector
1.1 - Using an ConvNet to compute encodings
The FaceNet model takes a lot of data and a long time to train. So following common practice in applied deep learning settings, let's just load weights that someone else has already trained. The network architecture follows the Inception model from Szegedy et al.. We have provided an inception network implementation. You can look in the file inception_blocks.py to see how it is implemented (do so by going to "File->Open..." at the top of the Jupyter notebook).
The key things you need to know are:
This network uses 96x96 dimensional RGB images as its input. Specifically, inputs a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$
It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vector
Run the cell below to create the model for face images.
End of explanation
# GRADED FUNCTION: triplet_loss
def triplet_loss(y_true, y_pred, alpha = 0.2):
Implementation of the triplet loss as defined by formula (3)
Arguments:
y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.
y_pred -- python list containing three objects:
anchor -- the encodings for the anchor images, of shape (None, 128)
positive -- the encodings for the positive images, of shape (None, 128)
negative -- the encodings for the negative images, of shape (None, 128)
Returns:
loss -- real number, value of the loss
anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
### START CODE HERE ### (≈ 4 lines)
# Step 1: Compute the (encoding) distance between the anchor and the positive, you will need to sum over axis=-1
pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)
# Step 2: Compute the (encoding) distance between the anchor and the negative, you will need to sum over axis=-1
neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)
# Step 3: subtract the two previous distances and add alpha.
basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
# Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))
### END CODE HERE ###
return loss
with tf.Session() as test:
tf.set_random_seed(1)
y_true = (None, None, None)
y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),
tf.random_normal([3, 128], mean=1, stddev=1, seed = 1),
tf.random_normal([3, 128], mean=3, stddev=4, seed = 1))
loss = triplet_loss(y_true, y_pred)
print("loss = " + str(loss.eval()))
Explanation: Expected Output
<table>
<center>
Total Params: 3743280
</center>
</table>
By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows:
<img src="images/distance_kiank.png" style="width:680px;height:250px;">
<caption><center> <u> <font color='purple'> Figure 2: <br> </u> <font color='purple'> By computing a distance between two encodings and thresholding, you can determine if the two pictures represent the same person</center></caption>
So, an encoding is a good one if:
- The encodings of two images of the same person are quite similar to each other
- The encodings of two images of different persons are very different
The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart.
<img src="images/triplet_comparison.png" style="width:280px;height:150px;">
<br>
<caption><center> <u> <font color='purple'> Figure 3: <br> </u> <font color='purple'> In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) </center></caption>
1.2 - The Triplet Loss
For an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.
<img src="images/f_x.png" style="width:380px;height:150px;">
<!--
We will also add a normalization step at the end of our model so that $\mid \mid f(x) \mid \mid_2 = 1$ (means the vector of encoding should be of norm 1).
!-->
Training will use triplets of images $(A, P, N)$:
A is an "Anchor" image--a picture of a person.
P is a "Positive" image--a picture of the same person as the Anchor image.
N is a "Negative" image--a picture of a different person than the Anchor image.
These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example.
You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$) by at least a margin $\alpha$:
$$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$
You would thus like to minimize the following "triplet cost":
$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ] \small_+ \tag{3}$$
Here, we are using the notation "$[z]_+$" to denote $max(z,0)$.
Notes:
- The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small.
- The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet, you want this to be relatively large, so it thus makes sense to have a minus sign preceding it.
- $\alpha$ is called the margin. It is a hyperparameter that you should pick manually. We will use $\alpha = 0.2$.
Most implementations also normalize the encoding vectors to have norm equal one (i.e., $\mid \mid f(img)\mid \mid_2$=1); you won't have to worry about that here.
Exercise: Implement the triplet loss as defined by formula (3). Here are the 4 steps:
1. Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$
2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$
3. Compute the formula per training example: $ \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$
4. Compute the full formula by taking the max with zero and summing over the training examples:
$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha \large ] \small_+ \tag{3}$$
Useful functions: tf.reduce_sum(), tf.square(), tf.subtract(), tf.add(), tf.maximum().
For steps 1 and 2, you will need to sum over the entries of $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ and $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$ while for step 4 you will need to sum over the training examples.
End of explanation
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)
Explanation: Expected Output:
<table>
<tr>
<td>
**loss**
</td>
<td>
528.143
</td>
</tr>
</table>
2 - Loading the trained model
FaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run.
End of explanation
database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
database["dan"] = img_to_encoding("images/dan.jpg", FRmodel)
database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel)
database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel)
database["felix"] = img_to_encoding("images/felix.jpg", FRmodel)
database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)
Explanation: Here're some examples of distances between the encodings between three individuals:
<img src="images/distance_matrix.png" style="width:380px;height:200px;">
<br>
<caption><center> <u> <font color='purple'> Figure 4:</u> <br> <font color='purple'> Example of distance outputs between three individuals' encodings</center></caption>
Let's now use this model to perform face verification and face recognition!
3 - Applying the model
Back to the Happy House! Residents are living blissfully since you implemented happiness recognition for the house in an earlier assignment.
However, several issues keep coming up: The Happy House became so happy that every happy person in the neighborhood is coming to hang out in your living room. It is getting really crowded, which is having a negative impact on the residents of the house. All these random happy people are also eating all your food.
So, you decide to change the door entry policy, and not just let random happy people enter anymore, even if they are happy! Instead, you'd like to build a Face verification system so as to only let people from a specified list come in. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the door. The face recognition system then checks that they are who they claim to be.
3.1 - Face Verification
Let's build a database containing one encoding vector for each person allowed to enter the happy house. To generate the encoding we use img_to_encoding(image_path, model) which basically runs the forward propagation of the model on the specified image.
Run the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face.
End of explanation
# GRADED FUNCTION: verify
def verify(image_path, identity, database, model):
Function that verifies if the person on the "image_path" image is "identity".
Arguments:
image_path -- path to an image
identity -- string, name of the person you'd like to verify the identity. Has to be a resident of the Happy house.
database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors).
model -- your Inception model instance in Keras
Returns:
dist -- distance between the image_path and the image of "identity" in the database.
door_open -- True, if the door should open. False otherwise.
### START CODE HERE ###
# Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. (≈ 1 line)
encoding = img_to_encoding(image_path, model)
# Step 2: Compute distance with identity's image (≈ 1 line)
dist = np.sum(np.linalg.norm(encoding - database[identity]))
# Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
if dist < 0.7:
print("It's " + str(identity) + ", welcome home!")
door_open = True
else:
print("It's not " + str(identity) + ", please go away")
door_open = False
### END CODE HERE ###
return dist, door_open
Explanation: Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.
Exercise: Implement the verify() function which checks if the front-door camera picture (image_path) is actually the person called "identity". You will have to go through the following steps:
1. Compute the encoding of the image from image_path
2. Compute the distance about this encoding and the encoding of the identity image stored in the database
3. Open the door if the distance is less than 0.7, else do not open.
As presented above, you should use the L2 distance (np.linalg.norm). (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.)
End of explanation
verify("images/camera_0.jpg", "younes", database, FRmodel)
Explanation: Younes is trying to enter the Happy House and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture:
<img src="images/camera_0.jpg" style="width:100px;height:100px;">
End of explanation
verify("images/camera_2.jpg", "kian", database, FRmodel)
Explanation: Expected Output:
<table>
<tr>
<td>
**It's younes, welcome home!**
</td>
<td>
(0.65939283, True)
</td>
</tr>
</table>
Benoit, who broke the aquarium last weekend, has been banned from the house and removed from the database. He stole Kian's ID card and came back to the house to try to present himself as Kian. The front-door camera took a picture of Benoit ("images/camera_2.jpg"). Let's run the verification algorithm to check if Benoit can enter.
<img src="images/camera_2.jpg" style="width:100px;height:100px;">
End of explanation
# GRADED FUNCTION: who_is_it
def who_is_it(image_path, database, model):
Implements face recognition for the happy house by finding who is the person on the image_path image.
Arguments:
image_path -- path to an image
database -- database containing image encodings along with the name of the person on the image
model -- your Inception model instance in Keras
Returns:
min_dist -- the minimum distance between image_path encoding and the encodings from the database
identity -- string, the name prediction for the person on image_path
### START CODE HERE ###
## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() see example above. ## (≈ 1 line)
encoding = img_to_encoding(image_path, model)
## Step 2: Find the closest encoding ##
# Initialize "min_dist" to a large value, say 100 (≈1 line)
min_dist = 100
# Loop over the database dictionary's names and encodings.
for (name, db_enc) in database.items():
# Compute L2 distance between the target "encoding" and the current "emb" from the database. (≈ 1 line)
dist = np.sum(np.linalg.norm(encoding - db_enc))
# If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
if dist < min_dist:
min_dist = dist
identity = name
### END CODE HERE ###
if min_dist > 0.7:
print("Not in the database.")
else:
print ("it's " + str(identity) + ", the distance is " + str(min_dist))
return min_dist, identity
Explanation: Expected Output:
<table>
<tr>
<td>
**It's not kian, please go away**
</td>
<td>
(0.86224014, False)
</td>
</tr>
</table>
3.2 - Face Recognition
Your face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the house that evening he couldn't get in!
To reduce such shenanigans, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the house, and the front door will unlock for them!
You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as another input.
Exercise: Implement who_is_it(). You will have to go through the following steps:
1. Compute the target encoding of the image from image_path
2. Find the encoding from the database that has smallest distance with the target encoding.
- Initialize the min_dist variable to a large enough number (100). It will help you keep track of what is the closest encoding to the input's encoding.
- Loop over the database dictionary's names and encodings. To loop use for (name, db_enc) in database.items().
- Compute L2 distance between the target "encoding" and the current "encoding" from the database.
- If this distance is less than the min_dist, then set min_dist to dist, and identity to name.
End of explanation
who_is_it("images/camera_0.jpg", database, FRmodel)
Explanation: Younes is at the front-door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_it_is() algorithm identifies Younes.
End of explanation |
15,563 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Committor Estimate on the Muller-Brown Potential
Step1: Load Data and set Hyperparameters
We first load in the pre-sampled data. The data consists of 1000 short trajectories, each with 5 datapoints. The precise sampling procedure is described in "Galerkin Approximation of Dynamical Quantities using Trajectory Data" by Thiede et al. Note that this is a smaller dataset than in the paper. We use a smallar dataset to ensure the diffusion map basis construction runs in a reasonably short time.
Set Hyperparameters
Here we specify a few hyperparameters. Thes can be varied to study the behavior of the scheme in various limits by the user.
Step2: Load and format the data
Step3: We also convert the data into the flattened format. This converts the data into a 2D array, which allows the data to be passed into many ML packages that require a two-dimensional dataset. In particular, this is the format accepted by the Diffusion Atlas object. Trajectory start/stop points are then stored in the traj_edges array.
Finally, we load the reference, "true" committor for comparison.
Step4: Construct DGA Committor
We now use PyEDGAR to build an estimate for the forward committor.
Build Basis Set
We first build the basis set required for the DGA Calculation. In this demo, we will use the diffusion map basis.
Step5: Here, we construct the basis and guess functions, and convert them back into lists of trajectories. The domain is the set of all sets out side of $(A\cup B)^c$.
Step6: We plot the guess function and the first few basis functions.
Step7: The third basis function looks like noise from the perspective of the $x$ and $y$ coordinates. This is because it correlates most strongly with the harmonic degrees of freedom. Note that due to the boundary conditions, it is not precisely the dominant eigenvector of the harmonic degrees of freedom.
Step8: Build the committor function
We are ready to compute the committor function using DGA. This can be done by passing the guess function and the basis to the the Galerkin module.
Step9: Here, we plot how much the DGA estimate perturbs the Guess function
Step10: Compare against reference
To compare against the reference values, we will interpolate the reference onto the datapoints usingy scipy's interpolate package.
Step11: A comparison of our estimate with the True committor. While the estimate is good, we systematically underestimate the committor near (0, 0.5). | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pyedgar
from pyedgar.data_manipulation import tlist_to_flat, flat_to_tlist
%matplotlib inline
Explanation: Committor Estimate on the Muller-Brown Potential
End of explanation
ntraj = 1000
trajectory_length = 5
dim = 10
Explanation: Load Data and set Hyperparameters
We first load in the pre-sampled data. The data consists of 1000 short trajectories, each with 5 datapoints. The precise sampling procedure is described in "Galerkin Approximation of Dynamical Quantities using Trajectory Data" by Thiede et al. Note that this is a smaller dataset than in the paper. We use a smaller dataset to ensure the diffusion map basis construction runs in a reasonably short time.
Set Hyperparameters
Here we specify a few hyperparameters. These can be varied to study the behavior of the scheme in various limits by the user.
End of explanation
trajs = np.load('data/muller_brown_trajs.npy')[:ntraj, :trajectory_length, :dim] # Raw trajectory
stateA = np.load('data/muller_brown_stateA.npy')[:ntraj, :trajectory_length] # 1 if in state A, 0 otherwise
stateB = np.load('data/muller_brown_stateB.npy')[:ntraj, :trajectory_length] # 1 if in state B, 0 otherwise
print("Data shape: ", trajs.shape)
trajs = [traj_i for traj_i in trajs]
stateA = [A_i for A_i in stateA]
stateB = [B_i for B_i in stateB]
in_domain = [1. - B_i - A_i for (A_i, B_i) in zip(stateA, stateB)]
Explanation: Load and format the data
End of explanation
ref_comm = np.load('reference/reference_committor.npy')
ref_potential = np.load('reference/potential.npy')
xgrid = np.load('reference/xgrid.npy')
ygrid = np.load('reference/ygrid.npy')
# Plot the true committor.
fig, ax = plt.subplots(1)
HM = ax.pcolor(xgrid, ygrid, ref_comm, vmin=0, vmax=1)
ax.contour(xgrid, ygrid, ref_potential, levels=np.linspace(0, 10., 11), colors='k') # Contour lines every 1 k_B T
ax.set_aspect('equal')
cbar = plt.colorbar(HM, ax=ax)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('True Committor')
Explanation: We also convert the data into the flattened format. This converts the data into a 2D array, which allows the data to be passed into many ML packages that require a two-dimensional dataset. In particular, this is the format accepted by the Diffusion Atlas object. Trajectory start/stop points are then stored in the traj_edges array.
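As a rough sketch of that conversion (hedged — the exact return values of this helper may differ between pyedgar versions):
flat_trajs, traj_edges = tlist_to_flat(trajs)
print(flat_trajs.shape, len(traj_edges))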
Finally, we load the reference, "true" committor for comparison.
End of explanation
diff_atlas = pyedgar.basis.DiffusionAtlas.from_sklearn(alpha=0, k=500, bandwidth_type='-1/d', epsilon='bgh_generous')
diff_atlas.fit(trajs)
Explanation: Construct DGA Committor
We now use PyEDGAR to build an estimate for the forward committor.
Build Basis Set
We first build the basis set required for the DGA Calculation. In this demo, we will use the diffusion map basis.
End of explanation
basis, evals = diff_atlas.make_dirichlet_basis(300, in_domain=in_domain, return_evals=True)
guess = diff_atlas.make_FK_soln(stateB, in_domain=in_domain)
flat_basis = np.vstack(basis)
flat_guess = np.hstack(guess)
Explanation: Here, we construct the basis and guess functions, and convert them back into lists of trajectories. The domain is the set of all points outside of $A\cup B$, i.e. $(A\cup B)^c$.
End of explanation
# Flatten the basis, guess, and trajectories functions for easy plotting.
flattened_trajs = np.vstack(trajs)
flat_basis = np.vstack(basis)
flat_guess = np.hstack(guess)
fig, axes= plt.subplots(1, 5, figsize=(14,4.), sharex=True, sharey=True)
axes[0].scatter(flattened_trajs[:,0], flattened_trajs[:,1],
c=flat_guess, s=3)
axes[0].set_title('Guess')
axes[0].set_ylabel("y")
for i, ax in enumerate(axes[1:]):
vm = np.max(np.abs(flat_basis[:, i]))
ax.scatter(flattened_trajs[:,0], flattened_trajs[:,1],
c=flat_basis[:, i], s=3, cmap='coolwarm',
vmin=-1*vm, vmax=vm)
ax.set_title(r"$\phi_%d$" % (i+1))
for ax in axes:
ax.set_aspect('equal')
# ax.
axes[2].set_xlabel("x")
Explanation: We plot the guess function and the first few basis functions.
End of explanation
fig, (ax1) = plt.subplots(1, figsize=(3.5,3.5))
vm = np.max(np.abs(flat_basis[:,2]))
ax1.scatter(flattened_trajs[:,3], flattened_trajs[:,5],
c=flat_basis[:, 2], s=3, cmap='coolwarm',
vmin=-1*vm, vmax=vm)
ax1.set_aspect('equal')
ax1.set_title(r"$\phi_%d$" % 3)
ax1.set_xlabel("$z_2$")
ax1.set_ylabel("$z_4$")
Explanation: The third basis function looks like noise from the perspective of the $x$ and $y$ coordinates. This is because it correlates most strongly with the harmonic degrees of freedom. Note that due to the boundary conditions, it is not precisely the dominant eigenvector of the harmonic degrees of freedom.
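A quick sanity check of this claim (a sketch using the arrays already defined above): correlate the third basis function with each input coordinate and confirm that the harmonic coordinates (columns 2 and up) dominate:
corrs = [np.corrcoef(flat_basis[:, 2], flattened_trajs[:, d])[0, 1] for d in range(dim)]
print(np.round(corrs, 2))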
End of explanation
g = pyedgar.galerkin.compute_committor(basis, guess, lag=1)
fig, (ax1) = plt.subplots(1, figsize=(5.5,3.5))
SC = ax1.scatter(flattened_trajs[:,0], flattened_trajs[:,1], c=np.array(g).ravel(), vmin=0., vmax=1., s=3)
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_title('Estimated Committor')
plt.colorbar(SC)
ax1.set_aspect('equal')
Explanation: Build the committor function
We are ready to compute the committor function using DGA. This can be done by passing the guess function and the basis to the Galerkin module.
End of explanation
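Schematically, the Galerkin step approximates the committor as $q \approx \psi + \sum_i c_i \phi_i$, where $\psi$ is the guess function and $\phi_i$ are the basis functions constructed above. The coefficients $c_i$ are obtained by solving the linear system $\sum_j \langle \phi_i, (T_\tau - I)\phi_j \rangle \, c_j = -\langle \phi_i, (T_\tau - I)\psi \rangle$, where $T_\tau$ is the transition operator at the chosen lag and the brackets denote averages over the sampled trajectory data. This is only a schematic statement of the estimator; the precise weighting is handled internally by the Galerkin module.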
fig, (ax1) = plt.subplots(1, figsize=(4.4,3.5))
SC = ax1.scatter(flattened_trajs[:,0], flattened_trajs[:,1], c=np.array(g).ravel() - flat_guess,
vmin=-.5, vmax=.5, cmap='bwr', s=3)
ax1.set_aspect('equal')
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_title('Estimate - Guess')
plt.colorbar(SC, ax=ax1)
Explanation: Here, we plot how much the DGA estimate perturbs the Guess function
End of explanation
import scipy.interpolate as spi
spline = spi.RectBivariateSpline(xgrid, ygrid, ref_comm.T)
ref_comm_on_data = np.array([spline.ev(c[0], c[1]) for c in flattened_trajs[:,:2]])
ref_comm_on_data[ref_comm_on_data < 0.] = 0.
ref_comm_on_data[ref_comm_on_data > 1.] = 1.
Explanation: Compare against reference
To compare against the reference values, we will interpolate the reference onto the datapoints using scipy's interpolate package.
End of explanation
fig, axes = plt.subplots(1, 3, figsize=(16,3.5), sharex=True, sharey=True)
(ax1, ax2, ax3) = axes
SC = ax1.scatter(flattened_trajs[:,0], flattened_trajs[:,1], c=ref_comm_on_data, vmin=0., vmax=1., s=3)
plt.colorbar(SC, ax=ax1)
SC = ax2.scatter(flattened_trajs[:,0], flattened_trajs[:,1], c=np.array(g).ravel(), vmin=0., vmax=1., s=3)
plt.colorbar(SC, ax=ax2)
SC = ax3.scatter(flattened_trajs[:,0], flattened_trajs[:,1], c=np.array(g).ravel() -ref_comm_on_data,
vmin=-.5, vmax=.5, s=3, cmap='bwr')
plt.colorbar(SC, ax=ax3)
# ax1.set_aspect('equal')
ax2.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_title('True Committor')
ax2.set_title('DGA Estimate')
ax3.set_title('Estimate - True')
plt.tight_layout(pad=-1.)
for ax in axes:
ax.set_aspect('equal')
Explanation: A comparison of our estimate with the True committor. While the estimate is good, we systematically underestimate the committor near (0, 0.5).
End of explanation |
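A quick way to put a number on that mismatch, using the arrays already computed above (a minimal sketch):
est = np.array(g).ravel()
rmse = np.sqrt(np.mean((est - ref_comm_on_data)**2))
max_err = np.max(np.abs(est - ref_comm_on_data))
print("RMSE: %.3f, max abs error: %.3f" % (rmse, max_err))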
15,564 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Combining arrays
Import the LArray library
Step1: The LArray library offers several methods and functions to combine arrays
Step2: See insert for more details and examples.
Append
Append one element to an axis of an array
Step3: The value being appended can have missing (or even extra) axes as long as common axes are compatible
Step4: See append for more details and examples.
Prepend
Prepend one element to an axis of an array
Step5: See prepend for more details and examples.
Extend
Extend an array along an axis with another array with that axis (but other labels)
Step6: See extend for more details and examples.
Stack
Stack several arrays together to create an entirely new dimension | Python Code:
from larray import *
# load the 'demography_eurostat' dataset
demography_eurostat = load_example_data('demography_eurostat')
# load 'gender' and 'time' axes
gender = demography_eurostat.gender
time = demography_eurostat.time
# load the 'population' array from the 'demography_eurostat' dataset
population = demography_eurostat.population
# show 'population' array
population
# load the 'population_benelux' array from the 'demography_eurostat' dataset
population_benelux = demography_eurostat.population_benelux
# show 'population_benelux' array
population_benelux
Explanation: Combining arrays
Import the LArray library:
End of explanation
other_countries = zeros((Axis('country=Luxembourg,Netherlands'), gender, time), dtype=int)
# insert new countries before 'France'
population_new_countries = population.insert(other_countries, before='France')
population_new_countries
# insert new countries after 'France'
population_new_countries = population.insert(other_countries, after='France')
population_new_countries
Explanation: The LArray library offers several methods and functions to combine arrays:
insert: inserts an array in another array along an axis
append: adds an array at the end of an axis.
prepend: adds an array at the beginning of an axis.
extend: extends an array along an axis.
stack: combines several arrays along a new axis.
Insert
End of explanation
# append data for 'Luxembourg'
population_new = population.append('country', population_benelux['Luxembourg'], 'Luxembourg')
population_new
Explanation: See insert for more details and examples.
Append
Append one element to an axis of an array:
End of explanation
population_lux = Array([-1, 1], gender)
population_lux
population_new = population.append('country', population_lux, 'Luxembourg')
population_new
Explanation: The value being appended can have missing (or even extra) axes as long as common axes are compatible:
End of explanation
# append data for 'Luxembourg'
population_new = population.prepend('country', population_benelux['Luxembourg'], 'Luxembourg')
population_new
Explanation: See append for more details and examples.
Prepend
Prepend one element to an axis of an array:
End of explanation
population_extended = population.extend('country', population_benelux[['Luxembourg', 'Netherlands']])
population_extended
Explanation: See prepend for more details and examples.
Extend
Extend an array along an axis with another array with that axis (but other labels)
End of explanation
# imagine you have loaded data for each country in different arrays
# (e.g. loaded from different Excel sheets)
population_be = population['Belgium']
population_fr = population['France']
population_de = population['Germany']
print(population_be)
print(population_fr)
print(population_de)
# create a new array with an extra axis 'country' by stacking the three arrays population_be/fr/de
population_stacked = stack({'Belgium': population_be, 'France': population_fr, 'Germany': population_de}, 'country')
population_stacked
Explanation: See extend for more details and examples.
Stack
Stack several arrays together to create an entirely new dimension
End of explanation |
15,565 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ML with TensorFlow Extended (TFX) -- Part 2
The purpose of this tutorial is to show how to do end-to-end ML with TFX libraries on Google Cloud Platform. This tutorial covers
Step1: <img valign="middle" src="images/tfx.jpeg">
UCI Adult Dataset
Step2: 2. Data Preprocessing
For data preprocessing and transformation, we use TensorFlow Transform to perform the following
Step3: 2.2 Implement the Beam pipeline
Step4: 1.4 Run data transformation pipeline
Step5: Check the outputs | Python Code:
import apache_beam as beam
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_transform as tft
print('TF version: {}'.format(tf.__version__))
print('TFT version: {}'.format(tft.__version__))
print('TFDV version: {}'.format(tfdv.__version__))
print('Apache Beam version: {}'.format(beam.__version__))
PROJECT = 'cloud-training-demos' # Replace with your PROJECT
BUCKET = 'cloud-training-demos-ml' # Replace with your BUCKET
REGION = 'us-central1' # Choose an available region for Cloud MLE
import os
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
## ensure we're using python2 env
os.environ['CLOUDSDK_PYTHON'] = 'python2'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
## ensure we predict locally with our current Python environment
gcloud config set ml_engine/local_python `which python`
Explanation: ML with TensorFlow Extended (TFX) -- Part 2
The purpose of this tutorial is to show how to do end-to-end ML with TFX libraries on Google Cloud Platform. This tutorial covers:
1. Data analysis and schema generation with TF Data Validation.
2. Data preprocessing with TF Transform.
3. Model training with TF Estimator.
4. Model evaluation with TF Model Analysis.
This notebook has been tested in Jupyter on the Deep Learning VM.
Cloud environment
End of explanation
DATA_DIR='gs://cloud-samples-data/ml-engine/census/data'
import os
TRAIN_DATA_FILE = os.path.join(DATA_DIR, 'adult.data.csv')
EVAL_DATA_FILE = os.path.join(DATA_DIR, 'adult.test.csv')
!gsutil ls -l $TRAIN_DATA_FILE
!gsutil ls -l $EVAL_DATA_FILE
HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',
'marital_status', 'occupation', 'relationship', 'race', 'gender',
'capital_gain', 'capital_loss', 'hours_per_week',
'native_country', 'income_bracket']
TARGET_FEATURE_NAME = 'income_bracket'
TARGET_LABELS = [' <=50K', ' >50K']
WEIGHT_COLUMN_NAME = 'fnlwgt'
Explanation: <img valign="middle" src="images/tfx.jpeg">
UCI Adult Dataset: https://archive.ics.uci.edu/ml/datasets/adult
Predict whether income exceeds $50K/yr based on census data. Also known as "Census Income" dataset.
End of explanation
def make_preprocessing_fn(raw_schema):
def preprocessing_fn(input_features):
processed_features = {}
for feature in raw_schema.feature:
feature_name = feature.name
if feature_name in ['income_bracket']:
processed_features[feature_name] = input_features[feature_name]
elif feature_name in ['fnlwgt']:
# Scale weights to be less than 1.0
processed_features[feature_name + "_scaled"] = (
tf.cast(input_features[feature_name], tf.float32) /
tf.cast(tft.max(input_features[feature_name]), tf.float32))
elif feature.type == 1:
# Extract vocabulary and integerize categorical features.
processed_features[feature_name+"_integerized"] = (
tft.compute_and_apply_vocabulary(input_features[feature_name], vocab_filename=feature_name))
else:
# normalize numeric features.
processed_features[feature_name+"_scaled"] = tft.scale_to_z_score(input_features[feature_name])
# Bucketize age using quantiles.
quantiles = tft.quantiles(input_features["age"], num_buckets=5, epsilon=0.01)
processed_features["age_bucketized"] = tft.apply_buckets(
input_features["age"], bucket_boundaries=quantiles)
return processed_features
return preprocessing_fn
Explanation: 2. Data Preprocessing
For data preprocessing and transformation, we use TensorFlow Transform to perform the following:
1. Implement transformation logic in preprocessing_fn
2. Analyze and transform training data.
3. Transform evaluation data.
4. Save transformed data, transform schema, and transform logic.
2.1 Implement preprocessing_fn
End of explanation
def run_pipeline(args):
import tensorflow_transform as tft
import tensorflow_transform.beam as tft_beam
import tensorflow_data_validation as tfdv
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.tf_metadata import schema_utils
pipeline_options = beam.pipeline.PipelineOptions(flags=[], **args)
raw_schema_location = args['raw_schema_location']
raw_train_data_location = args['raw_train_data_location']
raw_eval_data_location = args['raw_eval_data_location']
transformed_train_data_location = args['transformed_train_data_location']
transformed_eval_data_location = args['transformed_eval_data_location']
transform_artifact_location = args['transform_artifact_location']
temporary_dir = args['temporary_dir']
runner = args['runner']
print ("Raw schema location: {}".format(raw_schema_location))
print ("Raw train data location: {}".format(raw_train_data_location))
print ("Raw evaluation data location: {}".format(raw_eval_data_location))
print ("Transformed train data location: {}".format(transformed_train_data_location))
print ("Transformed evaluation data location: {}".format(transformed_eval_data_location))
print ("Transform artifact location: {}".format(transform_artifact_location))
print ("Temporary directory: {}".format(temporary_dir))
print ("Runner: {}".format(runner))
print ("")
# Load TFDV schema and create tft schema from it.
source_raw_schema = tfdv.load_schema_text(raw_schema_location)
raw_feature_spec = schema_utils.schema_as_feature_spec(source_raw_schema).feature_spec
raw_metadata = dataset_metadata.DatasetMetadata(
dataset_schema.from_feature_spec(raw_feature_spec))
with beam.Pipeline(runner, options=pipeline_options) as pipeline:
with tft_beam.Context(temporary_dir):
converter = tft.coders.CsvCoder(column_names=HEADER,
schema=raw_metadata.schema)
      ###### analyze & transform training data ###############################
# Read raw training csv data.
step = 'Train'
raw_train_data = (
pipeline
| '{} - Read Raw Data'.format(step) >> beam.io.textio.ReadFromText(raw_train_data_location)
| '{} - Remove Empty Rows'.format(step) >> beam.Filter(lambda line: line)
| '{} - Decode CSV Data'.format(step) >> beam.Map(converter.decode)
)
# Create a train dataset from the data and schema.
raw_train_dataset = (raw_train_data, raw_metadata)
# Analyze and transform raw_train_dataset to produced transformed_train_dataset and transform_fn.
transformed_train_dataset, transform_fn = (
raw_train_dataset
| '{} - Analyze & Transform'.format(step) >> tft_beam.AnalyzeAndTransformDataset(
make_preprocessing_fn(source_raw_schema))
)
# Get data and schema separately from the transformed_train_dataset.
transformed_train_data, transformed_metadata = transformed_train_dataset
# write transformed train data to sink.
print ("Writing transformed training data...")
_ = (
transformed_train_data
| '{} - Write Transformed Data'.format(step) >> beam.io.tfrecordio.WriteToTFRecord(
file_path_prefix=transformed_train_data_location,
file_name_suffix=".tfrecords",
coder=tft.coders.ExampleProtoCoder(transformed_metadata.schema))
)
###### transform evaluation data #########################################
      # Read raw evaluation csv data.
step = 'Eval'
raw_eval_data = (
pipeline
| '{} - Read Raw Data'.format(step) >> beam.io.textio.ReadFromText(raw_eval_data_location)
| '{} - Remove Empty Rows'.format(step) >> beam.Filter(lambda line: line)
| '{} - Decode CSV Data'.format(step) >> beam.Map(converter.decode)
)
# Create a eval dataset from the data and schema.
raw_eval_dataset = (raw_eval_data, raw_metadata)
# Transform eval data based on produced transform_fn.
transformed_eval_dataset = (
(raw_eval_dataset, transform_fn)
| '{} - Transform'.format(step) >> tft_beam.TransformDataset()
)
# Get data from the transformed_eval_dataset.
transformed_eval_data, _ = transformed_eval_dataset
# Write transformed eval data to sink.
_ = (
transformed_eval_data
| '{} - Write Transformed Data'.format(step) >> beam.io.tfrecordio.WriteToTFRecord(
file_path_prefix=transformed_eval_data_location,
file_name_suffix=".tfrecords",
coder=tft.coders.ExampleProtoCoder(transformed_metadata.schema))
)
###### write transformation metadata #######################################################
# Write transform_fn.
print ("Writing transform artifacts...")
_ = (
transform_fn
| 'Write Transform Artifacts' >> tft_beam.WriteTransformFn(
transform_artifact_location)
)
Explanation: 2.2 Implement the Beam pipeline
End of explanation
!python -m pip freeze | grep tensorflow
%%writefile setup.py
from setuptools import setup, find_packages
setup(name='tfxdemo',
version='1.0',
packages=find_packages(),
install_requires=['tensorflow-transform==0.13.0',
'tensorflow-data-validation==0.13.1'],
)
#runner = 'DirectRunner'; OUTPUT_DIR = 'output' # on-prem
#runner = 'DirectRunner'; OUTPUT_DIR = 'gs://{}/census/tfx'.format(BUCKET) # hybrid
runner = 'DataflowRunner'; OUTPUT_DIR = 'gs://{}/census/tfx'.format(BUCKET) # on GCP
import datetime
job_name = 'tft-census' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
RAW_SCHEMA_LOCATION = 'raw_schema.pbtxt'
TRANSFORM_ARTIFACTS_DIR = os.path.join(OUTPUT_DIR,'transform')
TRANSFORMED_DATA_DIR = os.path.join(OUTPUT_DIR,'transformed')
TEMP_DIR = os.path.join(OUTPUT_DIR, 'tmp')
args = {
'runner': runner,
'job_name': job_name,
'raw_schema_location': RAW_SCHEMA_LOCATION,
'raw_train_data_location': TRAIN_DATA_FILE,
'raw_eval_data_location': EVAL_DATA_FILE,
'transformed_train_data_location': os.path.join(TRANSFORMED_DATA_DIR, "train"),
'transformed_eval_data_location': os.path.join(TRANSFORMED_DATA_DIR, "eval"),
'transform_artifact_location': TRANSFORM_ARTIFACTS_DIR,
'temporary_dir': TEMP_DIR,
'project': PROJECT,
'temp_location': TEMP_DIR,
'staging_location': os.path.join(OUTPUT_DIR, 'staging'),
'max_num_workers': 8,
'save_main_session': False,
'setup_file': './setup.py'
}
if tf.gfile.Exists(OUTPUT_DIR):
print("Removing {} contents...".format(OUTPUT_DIR))
tf.gfile.DeleteRecursively(OUTPUT_DIR)
tf.logging.set_verbosity(tf.logging.ERROR)
print("Running TF Transform pipeline...")
print("Runner: {}".format(runner))
if runner == "DataflowRunner":
print("Launching Dataflow job {} ...".format(job_name))
print("Waitting until the Dataflow job finishes...")
print()
run_pipeline(args)
print()
print("Pipeline is done.")
Explanation: 1.4 Run data transformation pipeline
End of explanation
!gsutil ls $OUTPUT_DIR/*
!gsutil ls $OUTPUT_DIR/transform/transform_fn
!gsutil cat $OUTPUT_DIR/transform/transformed_metadata/schema.pbtxt
Explanation: Check the outputs
End of explanation |
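Downstream code (for example the trainer in the next part) can load these artifacts back. A minimal sketch, assuming the tft.TFTransformOutput helper available in this version of TF Transform:
import tensorflow_transform as tft

# Load the transform graph and transformed schema written by the pipeline above.
tf_transform_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
print(tf_transform_output.transformed_feature_spec())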
15,566 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: As always, let's do imports and create a new Bundle. See Building a System for more details.
Step2: Overriding Computation Times
If compute_times is not empty (by either providing compute_times or compute_phases), the provided value will be used to compute the model instead of those in the times parameter.
In the case of a mesh dataset or orbit dataset, observations cannot be attached to the dataset, so a times parameter does not exist. In this case compute_times or compute_phases will always be used.
Step3: compute_times (when not empty) overrides the value of times when computing the model. However, passing times as a keyword argument to run_compute will take precedence over either - and override the computed times across all enabled datasets.
Step4: Phase-Time Conversion
In addition to the ability to provide compute_times, we can alternatively provide compute_phases. These two parameters are linked via a constraint (see the constraints tutorial), with compute_phases constrained by default.
Step5: Essentially, this constraint does the same thing as b.to_phase or b.to_time, using the appropriate t0 according to compute_phases_t0 from the top-level orbit in the hierarchy.
Step6: In order to provide compute_phases instead of compute_times, we must call b.flip_constraint.
Step7: Note that under the hood, PHOEBE always works in time-space, meaning it is the constrained value of compute_times that is being passed under-the-hood.
Also note that if directly passing compute_phases to b.add_dataset, the constraint will be flipped on our behalf. We would then need to flip the constraint in order to provide compute_times instead.
Finally, it is important to make the distinction that this is not adding support for observations in phases. If we have an old light curve that is only available in phase, we still must convert these to times manually (or via b.to_time). This restriction is intentional
Step8: In the case of times, this will automatically interpolate in phase-space if the provided time is outside the range of the referenced times array. If you have a logger enabled with at least the 'warning' level, this will raise a warning and state the phases at which the interpolation will be completed.
Step9: Determining & Plotting Residuals
One particularly useful case for interpolating is to compare a model (perhaps computed in phase-space) to a dataset with a large number of datapoints. We can do this directly by calling compute_residuals, which will handle any necessary interpolation and compare the dependent variable between the dataset and models.
Note that if there are more than one dataset or model attached to the bundle, it may be necessary to pass dataset and/or model (or filter in advanced and call compute_residuals on the filtered ParameterSet.
To see this in action, we'll first create a "fake" observational dataset, add some noise, recompute the model using compute_phases, and then calculate the residuals.
Step10: If we plot the dataset and model, we see that the model was only computed for one cycle, whereas the dataset extends further in time.
Step11: But we can also plot the residuals. Here, calculate_residuals is called internally, interpolating in phase-space, and then plotted in time-space. See the options for y in the plot API docs for more details. | Python Code:
!pip install -I "phoebe>=2.2,<2.3"
Explanation: Advanced: compute_times & compute_phases
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u # units
b = phoebe.default_binary()
b.add_dataset('lc', times=phoebe.linspace(0,10,101), dataset='lc01')
Explanation: As always, let's do imports and create a new Bundle. See Building a System for more details.
End of explanation
print(b.filter(qualifier=['times', 'compute_times'], context='dataset'))
b.set_value('compute_times', phoebe.linspace(0,3,11))
b.run_compute()
print("dataset times: {}\ndataset compute_times: {}\nmodel times: {}".format(
b.get_value('times', context='dataset'),
b.get_value('compute_times', context='dataset'),
b.get_value('times', context='model')
))
Explanation: Overriding Computation Times
If compute_times is not empty (by either providing compute_times or compute_phases), the provided value will be used to compute the model instead of those in the times parameter.
In the case of a mesh dataset or orbit dataset, observations cannot be attached to the dataset, so a times parameter does not exist. In this case compute_times or compute_phases will always be used.
End of explanation
b.run_compute(times=[0,0.2])
print("dataset times: {}\ndataset compute_times: {}\nmodel times: {}".format(
b.get_value('times', context='dataset'),
b.get_value('compute_times', context='dataset'),
b.get_value('times', context='model')
))
b.run_compute()
print("dataset times: {}\ndataset compute_times: {}\nmodel times: {}".format(
b.get_value('times', context='dataset'),
b.get_value('compute_times', context='dataset'),
b.get_value('times', context='model')
))
Explanation: compute_times (when not empty) overrides the value of times when computing the model. However, passing times as a keyword argument to run_compute will take precedence over either - and override the computed times across all enabled datasets.
End of explanation
print(b.filter(qualifier=['times', 'compute_times', 'compute_phases', 'compute_phases_t0'], context='dataset'))
Explanation: Phase-Time Conversion
In addition to the ability to provide compute_times, we can alternatively provide compute_phases. These two parameters are linked via a constraint (see the constraints tutorial), with compute_phases constrained by default.
End of explanation
print(b.get_constraint('compute_phases'))
print(b.get_parameter('compute_phases_t0').choices)
Explanation: Essentially, this constraint does the same thing as b.to_phase or b.to_time, using the appropriate t0 according to compute_phases_t0 from the top-level orbit in the hierarchy.
End of explanation
b.flip_constraint('compute_phases', solve_for='compute_times')
b.set_value('compute_phases', phoebe.linspace(0,1,11))
print(b.filter(qualifier=['times', 'compute_times', 'compute_phases', 'compute_phases_t0'], context='dataset'))
Explanation: In order to provide compute_phases instead of compute_times, we must call b.flip_constraint.
End of explanation
b.get_parameter('fluxes', context='model').get_value()
b.get_parameter('fluxes', context='model').interp_value(times=1.0)
b.get_parameter('fluxes', context='model').interp_value(times=phoebe.linspace(0,3,101))
Explanation: Note that under the hood, PHOEBE always works in time-space, meaning it is the constrained value of compute_times that is being passed under-the-hood.
Also note that if directly passing compute_phases to b.add_dataset, the constraint will be flipped on our behalf. We would then need to flip the constraint in order to provide compute_times instead.
Finally, it is important to make the distinction that this is not adding support for observations in phases. If we have an old light curve that is only available in phase, we still must convert these to times manually (or via b.to_time). This restriction is intentional: we do not want the mapping between phase and time to change as the ephemeris is changed or fitted, rather we want to try to map from phase to time using the ephemeris that was originally used when the dataset was recorded (if possible, or the best possible guess).
Interpolating the Model
Whether or not we used compute_times/compute_phases, it is sometimes useful to be able to interpolate on the resulting model. In the case where we provided compute_times/compute_phases to "down-sample" from a large dataset, this can be particularly useful.
We can call interp_value on any FloatArrayParameter.
End of explanation
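For example, a phase-only light curve could be mapped onto times with the current ephemeris before attaching it. A hedged sketch (phases_obs and fluxes_obs are hypothetical arrays, not defined in this tutorial):
# hypothetical phase-folded observations
phases_obs = phoebe.linspace(-0.5, 0.5, 51)
times_obs = b.to_time(phases_obs)
# b.add_dataset('lc', times=times_obs, fluxes=fluxes_obs, dataset='lc_from_phases')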
b.get_parameter('fluxes', context='model').interp_value(times=5)
Explanation: In the case of times, this will automatically interpolate in phase-space if the provided time is outside the range of the referenced times array. If you have a logger enabled with at least the 'warning' level, this will raise a warning and state the phases at which the interpolation will be completed.
End of explanation
b.add_dataset('lc',
times=phoebe.linspace(0,10,1000),
dataset='lc01',
overwrite=True)
b.run_compute(irrad_method='none')
fluxes = b.get_value('fluxes', context='model')
b.set_value('fluxes', context='dataset', value=fluxes)
b.flip_constraint('compute_phases', solve_for='compute_times')
b.set_value('compute_phases', phoebe.linspace(0,1,101))
b.set_value('teff', component='primary', value=5950)
b.run_compute(irrad_method='none')
print(len(b.get_value('fluxes', context='dataset')), len(b.get_value('fluxes', context='model')))
b.calculate_residuals()
Explanation: Determining & Plotting Residuals
One particularly useful case for interpolating is to compare a model (perhaps computed in phase-space) to a dataset with a large number of datapoints. We can do this directly by calling calculate_residuals, which will handle any necessary interpolation and compare the dependent variable between the dataset and models.
Note that if there is more than one dataset or model attached to the bundle, it may be necessary to pass dataset and/or model (or filter in advance and call calculate_residuals on the filtered ParameterSet).
To see this in action, we'll first create a "fake" observational dataset, add some noise, recompute the model using compute_phases, and then calculate the residuals.
End of explanation
afig, mplfig = b.plot(show=True)
Explanation: If we plot the dataset and model, we see that the model was only computed for one cycle, whereas the dataset extends further in time.
End of explanation
afig, mplfig = b.plot(y='residuals', show=True)
Explanation: But we can also plot the residuals. Here, calculate_residuals is called internally, interpolating in phase-space, and then plotted in time-space. See the options for y in the plot API docs for more details.
End of explanation |
15,567 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Can you beat DeepVariant?
Step1: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Download file with consolidated locus information
This is a combination of the make_examples output (containing the pileup tensors), the call_variants output (containing the CNN's output likelihoods to give us DeepVariant's answer), and GIAB ground truth labels.
Download data
Step3: Data wrangling and inspection
Step4: Top-level parsing of each locus
We can often work with the examples from there, but TensorFlow has some nice methods for filtering if we first tell it about the structure of the example and how to parse the features
Step5: Filtering to useful subsets of the datasets
We have only included 80 easy examples (out of over 3 million) along with all difficult examples.
Step6: The parsing above is only at the top level, and we will parse those binary 'variant' and 'genotypes' fields next, but first we can start to do some filtering using the simple features, namely 'multiallelic' and 'difficulty'
Step7: Parsing genotypes within each locus
Similar to what we did at the top level, unwrap each genotype, parse the nested protos in 'variant' and each genotype's 'example', and then turn all the remaining types into native Python data structures
Step8: Let's look at one locus
Step9: Inspecting some loci
Step10: All multiallelic loci will have at least 3 examples (pileup tensors) because there is one for each set of one or two alternate alleles. Those with three alternate alleles produce 6 examples.
Step12: Set up the game-play code
Step13: Now let's play the game!
Play the game
Step14: Play the game
Step15: Play the game | Python Code:
# @markdown Copyright 2020 Google LLC. \
# @markdown SPDX-License-Identifier: Apache-2.0
# @markdown (license hidden in Colab)
# Copyright 2020 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Can you beat DeepVariant?
End of explanation
%tensorflow_version 2.x
import tensorflow
print(tensorflow.__version__)
%%capture
# Don't worry if you see a `Failed building wheel` error, nucleus will still be installed correctly.
! pip install "google-nucleus"
import numpy as np
import os
import time
import tensorflow as tf
from tensorflow.core.example import example_pb2
from nucleus.protos import variants_pb2
from nucleus.util import vis
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/google/deepvariant/blob/r1.4/docs/cybdv_notebook.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/google/deepvariant/blob/r1.4/docs/cybdv_notebook.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
DeepVariant turns genomic data into a pileup image, and then uses a convolutional neural network (CNN) to classify these images. This notebook lets you try your hand at classifying these pileups, just like DeepVariant does.
DeepVariant gets over 99.8% of variants correct, so we chose the variants that were hardest to make this more interesting. The "difficult" variants were chosen because either DeepVariant was not very confident in its choice (lower than 90% likelihood of its top choice) or it made the wrong choice. We also selected 80 of the easy variants to start off with.
DeepVariant has not been trained on these variants. They are from the HG002 test set, labeled with "truth" from GIAB.
To play, go to Runtime -> Run all, then wait a bit until the game starts running (under "Play the game: EASY").
The Preparation section below contains setup code that is hidden for clarity. Expand it if you are curious about the details, or just skip down to "Play the game".
Preparation
Installs and imports
End of explanation
EASY_INPUT='gs://deepvariant/cybdv/cybdv_0.9.0_easy.tfrecord.gz'
DIFFICULT_INPUT='gs://deepvariant/cybdv/cybdv_0.9.0_difficult.tfrecord.gz'
easy_input='easy.tfrecord.gz'
difficult_input='difficult.tfrecord.gz'
! time gsutil cp $EASY_INPUT $easy_input
! time gsutil cp $DIFFICULT_INPUT $difficult_input
! ls -ltrsh
Explanation: Download file with consolidated locus information
This is a combination of the make_examples output (containing the pileup tensors), the call_variants output (containing the CNN's output likelihoods to give us DeepVariant's answer), and GIAB ground truth labels.
Download data
End of explanation
# Create a Dataset object for each set of tfrecords
raw_difficult_dataset = tf.data.TFRecordDataset(difficult_input, compression_type="GZIP")
raw_easy_dataset = tf.data.TFRecordDataset(easy_input, compression_type="GZIP")
# We can inspect an example to see what is inside:
for e in raw_easy_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(e.numpy())
print(example)
Explanation: Data wrangling and inspection
End of explanation
# Describe the features. These were defined in the prep script and can also be inspected above.
consolidated_variant_description = {
'locus_id': tf.io.FixedLenFeature([], tf.string),
'multiallelic': tf.io.FixedLenFeature([], tf.int64),
'difficulty': tf.io.FixedLenFeature([], tf.int64),
'genotypes': tf.io.VarLenFeature(tf.string)
}
# Quick parsing at the top level
def quick_parse_locus(locus_proto):
return tf.io.parse_single_example(locus_proto, consolidated_variant_description)
difficult_dataset = raw_difficult_dataset.map(quick_parse_locus)
easy_dataset = raw_easy_dataset.map(quick_parse_locus)
for e in easy_dataset.take(1):
print(e)
Explanation: Top-level parsing of each locus
We can often work with the examples from there, but TensorFlow has some nice methods for filtering if we first tell it about the structure of the example and how to parse the features:
End of explanation
# Simple counter
def count_calls(calls):
count = 0
for c in calls:
count += 1
return count
print('Number of easy examples:', count_calls(easy_dataset))
print('Number of difficult examples:', count_calls(difficult_dataset))
Explanation: Filtering to useful subsets of the datasets
We have only included 80 easy examples (out of over 3 million) along with all difficult examples.
End of explanation
# A few examples of easy variants
easy_biallelics = easy_dataset.filter(lambda x: tf.equal(x['multiallelic'], False))
easy_multiallelics = easy_dataset.filter(lambda x: tf.equal(x['multiallelic'], True))
# Where DeepVariant had less than 90% likelihood for its choice OR it chose wrong
difficult_biallelics = difficult_dataset.filter(lambda x: tf.equal(x['multiallelic'], False))
difficult_multiallelics = difficult_dataset.filter(lambda x: tf.equal(x['multiallelic'], True))
# In the prep script, we set difficult=100 when DeepVariant's top choice did not match the truth (i.e. DeepVariant got it wrong)
dv_missed = difficult_dataset.filter(lambda x: tf.equal(x['difficulty'], 100))
# Optional counting of examples (commented out as these take several seconds to run)
# Uncomment these if you want to see the counts of different types of variants.
# print('easy_biallelics count:', count_calls(easy_biallelics))
# print('easy_multiallelics count:', count_calls(easy_multiallelics))
# print('difficult_biallelics count', count_calls(difficult_biallelics))
# print('difficult_multiallelics count:', count_calls(difficult_multiallelics))
# print('dv_missed count:', count_calls(dv_missed))
Explanation: The parsing above is only at the top level, and we will parse those binary 'variant' and 'genotypes' fields next, but first we can start to do some filtering using the simple features, namely 'multiallelic' and 'difficulty':
End of explanation
def bytes_to_str(b):
if isinstance(b, type('')):
return b
elif isinstance(b, type(b'')):
return b.decode()
else:
raise ValueError('Incompatible type: {}. Expected bytes or str.'.format(type(b)))
def fully_parse_locus(top_level_parsed):
# where top_level_parsed = tf.io.parse_single_example(locus_proto, consolidated_variant_description)
def _clean_locus(locus):
return {
'locus_id': bytes_to_str(locus['locus_id'].numpy()),
'multiallelic': bool(locus['multiallelic'].numpy()),
'genotypes': locus['genotypes'],
'difficulty': locus['difficulty']
}
clean_locus = _clean_locus(top_level_parsed)
genotype_description = {
'example': tf.io.FixedLenFeature([], tf.string),
'truth_label': tf.io.FixedLenFeature([], tf.int64),
'genotype_probabilities': tf.io.VarLenFeature(tf.float32),
'dv_correct': tf.io.FixedLenFeature([], tf.int64),
'dv_label': tf.io.FixedLenFeature([], tf.int64),
'alt': tf.io.FixedLenFeature([], tf.string)
}
def _parse_genotype(sub_example):
return tf.io.parse_single_example(sub_example, genotype_description)
def _clean_genotype(e):
genotype_example = tf.train.Example()
genotype_example.ParseFromString(e['example'].numpy())
return {
'genotype_probabilities': list(e['genotype_probabilities'].values.numpy()),
'dv_correct': bool(e['dv_correct'].numpy()),
'dv_label': e['dv_label'].numpy(),
'truth_label': e['truth_label'].numpy(),
'example': genotype_example,
'alt': bytes_to_str(e['alt'].numpy())
}
genotypes = clean_locus['genotypes'].values
parsed_genotypes = []
for s in genotypes:
genotype_dict = _parse_genotype(s)
clean_genotype_dict = _clean_genotype(genotype_dict)
parsed_genotypes.append(clean_genotype_dict)
clean_locus['genotypes'] = parsed_genotypes
return clean_locus
Explanation: Parsing genotypes within each locus
Similar to what we did at the top level, unwrap each genotype, parse the nested protos in 'variant' and each genotype's 'example', and then turn all the remaining types into native Python data structures:
End of explanation
for e in easy_biallelics.take(1):
example = fully_parse_locus(e)
print(example)
Explanation: Let's look at one locus:
End of explanation
def pretty_print_locus(locus):
# initial information:
allelic_string = "Multiallelic" if locus['multiallelic'] else "Biallelic"
print("%s -- %s: %d example%s" % (locus['locus_id'], allelic_string, len(locus['genotypes']), "s" if locus['multiallelic'] else ""))
def show_all_genotypes(genotypes):
# Show pileup images for all genotypes
for g in genotypes:
vis.draw_deepvariant_pileup(example=g['example'])
# And here's how DeepVariant did:
print("Truth: %d, DeepVariant said: %d, Correct: %s" % (g['truth_label'], g['dv_label'], g['dv_correct']))
print("DeepVariant genotype likelihoods:", g['genotype_probabilities'])
print("\n\n")
def show_loci(loci, show_pileups=True):
for raw_locus in loci:
if show_pileups:
print("_____________________________________________________________")
locus = fully_parse_locus(raw_locus)
pretty_print_locus(locus)
if show_pileups:
show_all_genotypes(locus['genotypes'])
show_loci(easy_biallelics.take(1))
show_loci(easy_multiallelics.take(1))
Explanation: Inspecting some loci
End of explanation
show_loci(difficult_multiallelics.take(10), show_pileups=False)
Explanation: All multiallelic loci will have at least 3 examples (pileup tensors) because there is one for each set of one or two alternate alleles. Those with three alternate alleles produce 6 examples.
End of explanation
WRITE_NORMAL = "\x1b[0m"
WRITE_GREEN_BACKGROUND = "\x1b[102m"
WRITE_RED_BACKGROUND = "\x1b[101m"
def play_game(calls, pro_mode=False, put_results_here=None):
  # Args:
  #   put_results_here: a list, allows saving results along the way even if the player stops the loop
# for example, calls = dataset.take(5)
print("Can you beat DeepVariant?: type 0, 1, or 2 just like DeepVariant's CNN would for each example.")
results = []
score = 0
total = 0
dv_score = 0
for c in calls:
locus = fully_parse_locus(c)
print("___________________________________________________________")
print(locus['locus_id'])
allelic_string = "Multiallelic" if locus['multiallelic'] else "Biallelic"
print("%s: %d example%s" % (allelic_string, len(locus['genotypes']), "s" if locus['multiallelic'] else ""))
quit_early = False
# For every genotype, show the user the images and ask for their guess
for e in locus['genotypes']:
# Draw pileups:
vis.draw_deepvariant_pileup(example=e['example'])
print('Genotype in question: ', e['alt'])
# Ask user until we get a 0, 1, or 2
while True:
try:
guess = input("Your answer (0, 1, or 2):")
if guess in ['exit', 'quit', 'q']:
quit_early = True
break
guess = int(guess)
if guess in [0,1,2]:
break
else:
print("Please enter only 0, 1, or 2.")
except:
print("Please enter only 0, 1, or 2. Or enter q to quit.")
continue
if quit_early:
break
truth = e['truth_label']
if truth == guess:
print(WRITE_GREEN_BACKGROUND + "You are correct!", WRITE_NORMAL)
score += 1
else:
print(WRITE_RED_BACKGROUND + "Nope! Sorry it's actually", truth, WRITE_NORMAL)
total += 1
# And here's how DeepVariant did:
if e['dv_correct']:
dv_score += 1
dv_result_string = WRITE_GREEN_BACKGROUND + "is correct" + WRITE_NORMAL if e['dv_correct'] else WRITE_RED_BACKGROUND + "failed" + WRITE_NORMAL
print("DeepVariant %s with likelihoods:" % (dv_result_string), e['genotype_probabilities'])
result = {'id': locus['locus_id'], 'truth': truth, 'guess': guess, 'dv_label': e['dv_label']}
if pro_mode:
result['user_supplied_classification'] = input("Classification?")
if put_results_here != None:
put_results_here.append(result)
results.append(result)
if quit_early:
break
print("=============================================================")
print("=============================================================")
print("=============================================================")
print("Your score is", score, "/", total)
print("DeepVariant's score is", dv_score)
return results
Explanation: Set up the game-play code
End of explanation
easy_biallelic_results = play_game(easy_biallelics.shuffle(80).take(6))
Explanation: Now let's play the game!
Play the game: EASY
These are examples that DeepVariant was at least 99% sure about and got correct. Think of this as a tutorial. Once you can easily get them all right, try the difficult examples below.
Your job is to pick 0, 1, or 2, depending on how many copies of the given alternate allele you see in the pileup.
End of explanation
difficult_biallelic_results = play_game(difficult_biallelics.shuffle(1000).take(10))
Explanation: Play the game: DIFFICULT
End of explanation
easy_multiallelic_results = play_game(easy_multiallelics.shuffle(9).take(3))
difficult_multiallelic_results = play_game(difficult_multiallelics.shuffle(50).take(3))
Explanation: Play the game: Multiallelics only
For multiallelics, it's possible to cheat because the pileups are related. Once you know the answer to one or two pileups, you can use that information to help infer the remaining calls. Note that DeepVariant doesn't get this information, so you have an unfair advantage here.
Just be careful, the base differs from ref channel will light up for ALL alt alleles. Pay more attention to the reads highlighted (white) in the read supports variant channel, and choose 0, 1, or 2 depending on how many copies of THIS alternate allele are shown.
End of explanation |
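If you want to tally a session afterwards, the dictionaries returned by play_game (each with 'truth', 'guess', and 'dv_label' keys) can be scored directly. A small sketch:
def summarize(results):
    # Compare your calls and DeepVariant's calls against the GIAB truth labels.
    total = len(results)
    if total == 0:
        return
    you = sum(r['guess'] == r['truth'] for r in results)
    dv = sum(r['dv_label'] == r['truth'] for r in results)
    print("You: %d/%d correct, DeepVariant: %d/%d correct" % (you, total, dv, total))

summarize(easy_multiallelic_results + difficult_multiallelic_results)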
15,568 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FloPy
Plotting SWR Process Results
This notebook demonstrates the use of the SwrObs, SwrStage, SwrBudget, SwrFlow, SwrExchange, and SwrStructure classes to read binary SWR Process observation, stage, budget, reach-to-reach flow, reach-aquifer exchange, and structure files. It demonstrates these capabilities by loading these binary file types and showing examples of plotting SWR Process data. An example showing how the simulated water surface profile at a selected time along a selection of reaches can be plotted is also presented.
Step1: Load SWR Process observations
Create an instance of the SwrObs class and load the observation data.
Step2: Plot the data from the binary SWR Process observation file
Step3: Load the same data from the individual binary SWR Process files
Load discharge data from the flow file. The flow file contains the simulated flow between connected reaches for each connection in the model.
Step4: Load discharge data from the structure file. The structure file contains the simulated structure flow for each reach with a structure.
Step5: Load stage data from the stage file. The flow file contains the simulated stage for each reach in the model.
Step6: Load budget data from the budget file. The budget file contains the simulated budget for each reach group in the model. The budget file also contains the stage data for each reach group. In this case the number of reach groups equals the number of reaches in the model.
Step7: Plot the data loaded from the individual binary SWR Process files.
Note that the plots are identical to the plots generated from the binary SWR observation data.
Step8: Plot simulated water surface profiles
Simulated water surface profiles can be created using the ModelCrossSection class.
Several things that we need in addition to the stage data include reach lengths and bottom elevations. We load these data from an existing file.
Step9: The contents of the file are shown in the cell below.
Step10: Create an instance of the SwrStage class for SWR Process stage data.
Step11: Create a selection condition (iprof) that can be used to extract data for the reaches of interest (reaches 0, 1, and 8 through 17). Use this selection condition to extract reach lengths (from sd['RLEN']) and the bottom elevation (from sd['BELEV']) for the reaches of interest. The selection condition will also be used to extract the stage data for reaches of interest.
Step12: Create a fake model instance so that the ModelCrossSection class can be used.
Step13: Create an array with the x position at the downstream end of each reach, which will be used to color the plots below each reach.
Step14: Plot simulated water surface profiles for 8 times. | Python Code:
%matplotlib inline
from IPython.display import Image
import os
import sys
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
#Set the paths
datapth = os.path.join('..', 'data', 'swr_test')
# SWR Process binary files
files = ('SWR004.obs', 'SWR004.vel', 'SWR004.str', 'SWR004.stg', 'SWR004.flow')
Explanation: FloPy
Plotting SWR Process Results
This notebook demonstrates the use of the SwrObs, SwrStage, SwrBudget, SwrFlow, SwrExchange, and SwrStructure classes to read binary SWR Process observation, stage, budget, reach-to-reach flow, reach-aquifer exchange, and structure files. It demonstrates these capabilities by loading these binary file types and showing examples of plotting SWR Process data. An example showing how the simulated water surface profile at a selected time along a selection of reaches can be plotted is also presented.
End of explanation
sobj = flopy.utils.SwrObs(os.path.join(datapth, files[0]))
ts = sobj.get_data()
Explanation: Load SWR Process observations
Create an instance of the SwrObs class and load the observation data.
End of explanation
fig = plt.figure(figsize=(6, 12))
ax1 = fig.add_subplot(3, 1, 1)
ax1.semilogx(ts['totim']/3600., -ts['OBS1'], label='OBS1')
ax1.semilogx(ts['totim']/3600., -ts['OBS2'], label='OBS2')
ax1.semilogx(ts['totim']/3600., -ts['OBS9'], label='OBS3')
ax1.set_ylabel('Flow, in cubic meters per second')
ax1.legend()
ax = fig.add_subplot(3, 1, 2, sharex=ax1)
ax.semilogx(ts['totim']/3600., -ts['OBS4'], label='OBS4')
ax.semilogx(ts['totim']/3600., -ts['OBS5'], label='OBS5')
ax.set_ylabel('Flow, in cubic meters per second')
ax.legend()
ax = fig.add_subplot(3, 1, 3, sharex=ax1)
ax.semilogx(ts['totim']/3600., ts['OBS6'], label='OBS6')
ax.semilogx(ts['totim']/3600., ts['OBS7'], label='OBS7')
ax.set_xlim(1, 100)
ax.set_ylabel('Stage, in meters')
ax.set_xlabel('Time, in hours')
ax.legend();
Explanation: Plot the data from the binary SWR Process observation file
End of explanation
sobj = flopy.utils.SwrFlow(os.path.join(datapth, files[1]))
times = np.array(sobj.get_times())/3600.
obs1 = sobj.get_ts(irec=1, iconn=0)
obs2 = sobj.get_ts(irec=14, iconn=13)
obs4 = sobj.get_ts(irec=4, iconn=3)
obs5 = sobj.get_ts(irec=5, iconn=4)
Explanation: Load the same data from the individual binary SWR Process files
Load discharge data from the flow file. The flow file contains the simulated flow between connected reaches for each connection in the model.
End of explanation
sobj = flopy.utils.SwrStructure(os.path.join(datapth, files[2]))
obs3 = sobj.get_ts(irec=17, istr=0)
Explanation: Load discharge data from the structure file. The structure file contains the simulated structure flow for each reach with a structure.
End of explanation
sobj = flopy.utils.SwrStage(os.path.join(datapth, files[3]))
obs6 = sobj.get_ts(irec=13)
Explanation: Load stage data from the stage file. The stage file contains the simulated stage for each reach in the model.
End of explanation
sobj = flopy.utils.SwrBudget(os.path.join(datapth, files[4]))
obs7 = sobj.get_ts(irec=17)
Explanation: Load budget data from the budget file. The budget file contains the simulated budget for each reach group in the model. The budget file also contains the stage data for each reach group. In this case the number of reach groups equals the number of reaches in the model.
End of explanation
fig = plt.figure(figsize=(6, 12))
ax1 = fig.add_subplot(3, 1, 1)
ax1.semilogx(times, obs1['flow'], label='OBS1')
ax1.semilogx(times, obs2['flow'], label='OBS2')
ax1.semilogx(times, -obs3['strflow'], label='OBS3')
ax1.set_ylabel('Flow, in cubic meters per second')
ax1.legend()
ax = fig.add_subplot(3, 1, 2, sharex=ax1)
ax.semilogx(times, obs4['flow'], label='OBS4')
ax.semilogx(times, obs5['flow'], label='OBS5')
ax.set_ylabel('Flow, in cubic meters per second')
ax.legend()
ax = fig.add_subplot(3, 1, 3, sharex=ax1)
ax.semilogx(times, obs6['stage'], label='OBS6')
ax.semilogx(times, obs7['stage'], label='OBS7')
ax.set_xlim(1, 100)
ax.set_ylabel('Stage, in meters')
ax.set_xlabel('Time, in hours')
ax.legend();
Explanation: Plot the data loaded from the individual binary SWR Process files.
Note that the plots are identical to the plots generated from the binary SWR observation data.
End of explanation
sd = np.genfromtxt(os.path.join(datapth, 'SWR004.dis.ref'), names=True)
Explanation: Plot simulated water surface profiles
Simulated water surface profiles can be created using the ModelCrossSection class.
Several things that we need in addition to the stage data include reach lengths and bottom elevations. We load these data from an existing file.
End of explanation
fc = open(os.path.join(datapth, 'SWR004.dis.ref')).readlines()
fc
Explanation: The contents of the file are shown in the cell below.
End of explanation
sobj = flopy.utils.SwrStage(os.path.join(datapth, files[3]))
Explanation: Create an instance of the SwrStage class for SWR Process stage data.
End of explanation
iprof = sd['IRCH'] > 0
iprof[2:8] = False
dx = np.extract(iprof, sd['RLEN'])
belev = np.extract(iprof, sd['BELEV'])
Explanation: Create a selection condition (iprof) that can be used to extract data for the reaches of interest (reaches 0, 1, and 8 through 17). Use this selection condition to extract reach lengths (from sd['RLEN']) and the bottom elevation (from sd['BELEV']) for the reaches of interest. The selection condition will also be used to extract the stage data for reaches of interest.
End of explanation
ml = flopy.modflow.Modflow()
dis = flopy.modflow.ModflowDis(ml, nrow=1, ncol=dx.shape[0], delr=dx, top=4.5, botm=belev.reshape(1,1,12))
Explanation: Create a fake model instance so that the ModelCrossSection class can be used.
End of explanation
x = np.cumsum(dx)
Explanation: Create an array with the x position at the downstream end of each reach, which will be used to color the plots below each reach.
End of explanation
fig = plt.figure(figsize=(12, 12))
for idx, v in enumerate([19, 29, 34, 39, 44, 49, 54, 59]):
ax = fig.add_subplot(4, 2, idx+1)
s = sobj.get_data(idx=v)
stage = np.extract(iprof, s['stage'])
xs = flopy.plot.ModelCrossSection(model=ml, line={'Row': 0})
xs.plot_fill_between(stage.reshape(1,1,12), colors=['none', 'blue'], ax=ax, edgecolors='none')
linecollection = xs.plot_grid(ax=ax, zorder=10)
ax.fill_between(np.append(0., x), y1=np.append(belev[0], belev), y2=-0.5,
facecolor='0.5', edgecolor='none', step='pre')
ax.set_title('{} hours'.format(times[v]))
ax.set_ylim(-0.5, 4.5)
Explanation: Plot simulated water surface profiles for 8 times.
End of explanation |
15,569 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quantile regression
This example page shows how to use statsmodels' QuantReg class to replicate parts of the analysis published in
Koenker, Roger and Kevin F. Hallock. "Quantile Regression". Journal of Economic Perspectives, Volume 15, Number 4, Fall 2001, Pages 143–156
We are interested in the relationship between income and expenditures on food for a sample of working class Belgian households in 1857 (the Engel data).
Setup
We first need to load some modules and to retrieve the data. Conveniently, the Engel dataset is shipped with statsmodels.
Step1: Least Absolute Deviation
The LAD model is a special case of quantile regression where q=0.5
Step2: Visualizing the results
We estimate the quantile regression model for many quantiles between .05 and .95, and compare best fit line from each of these models to Ordinary Least Squares results.
Prepare data for plotting
For convenience, we place the quantile regression results in a Pandas DataFrame, and the OLS results in a dictionary.
Step3: First plot
This plot compares best fit lines for 10 quantile regression models to the least squares fit. As Koenker and Hallock (2001) point out, we see that
Step4: Second plot
The dotted black lines form a 95% point-wise confidence band around 10 quantile regression estimates (solid black line). The red lines represent OLS regression results along with their 95% confidence interval.
In most cases, the quantile regression point estimates lie outside the OLS confidence interval, which suggests that the effect of income on food expenditure may not be constant across the distribution. | Python Code:
%matplotlib inline
from __future__ import print_function
import patsy
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt
from statsmodels.regression.quantile_regression import QuantReg
data = sm.datasets.engel.load_pandas().data
data.head()
Explanation: Quantile regression
This example page shows how to use statsmodels' QuantReg class to replicate parts of the analysis published in
Koenker, Roger and Kevin F. Hallock. "Quantile Regression". Journal of Economic Perspectives, Volume 15, Number 4, Fall 2001, Pages 143–156
We are interested in the relationship between income and expenditures on food for a sample of working class Belgian households in 1857 (the Engel data).
Setup
We first need to load some modules and to retrieve the data. Conveniently, the Engel dataset is shipped with statsmodels.
End of explanation
mod = smf.quantreg('foodexp ~ income', data)
res = mod.fit(q=.5)
print(res.summary())
Explanation: Least Absolute Deviation
The LAD model is a special case of quantile regression where q=0.5
End of explanation
quantiles = np.arange(.05, .96, .1)
def fit_model(q):
res = mod.fit(q=q)
return [q, res.params['Intercept'], res.params['income']] + \
res.conf_int().loc['income'].tolist()
models = [fit_model(x) for x in quantiles]
models = pd.DataFrame(models, columns=['q', 'a', 'b','lb','ub'])
ols = smf.ols('foodexp ~ income', data).fit()
ols_ci = ols.conf_int().loc['income'].tolist()
ols = dict(a = ols.params['Intercept'],
b = ols.params['income'],
lb = ols_ci[0],
ub = ols_ci[1])
print(models)
print(ols)
Explanation: Visualizing the results
We estimate the quantile regression model for many quantiles between .05 and .95, and compare best fit line from each of these models to Ordinary Least Squares results.
Prepare data for plotting
For convenience, we place the quantile regression results in a Pandas DataFrame, and the OLS results in a dictionary.
End of explanation
x = np.arange(data.income.min(), data.income.max(), 50)
get_y = lambda a, b: a + b * x
fig, ax = plt.subplots(figsize=(8, 6))
for i in range(models.shape[0]):
y = get_y(models.a[i], models.b[i])
ax.plot(x, y, linestyle='dotted', color='grey')
y = get_y(ols['a'], ols['b'])
ax.plot(x, y, color='red', label='OLS')
ax.scatter(data.income, data.foodexp, alpha=.2)
ax.set_xlim((240, 3000))
ax.set_ylim((240, 2000))
legend = ax.legend()
ax.set_xlabel('Income', fontsize=16)
ax.set_ylabel('Food expenditure', fontsize=16);
Explanation: First plot
This plot compares best fit lines for 10 quantile regression models to the least squares fit. As Koenker and Hallock (2001) point out, we see that:
Food expenditure increases with income
The dispersion of food expenditure increases with income
The least squares estimates fit low income observations quite poorly (i.e. the OLS line passes over most low income households)
End of explanation
n = models.shape[0]
p1 = plt.plot(models.q, models.b, color='black', label='Quantile Reg.')
p2 = plt.plot(models.q, models.ub, linestyle='dotted', color='black')
p3 = plt.plot(models.q, models.lb, linestyle='dotted', color='black')
p4 = plt.plot(models.q, [ols['b']] * n, color='red', label='OLS')
p5 = plt.plot(models.q, [ols['lb']] * n, linestyle='dotted', color='red')
p6 = plt.plot(models.q, [ols['ub']] * n, linestyle='dotted', color='red')
plt.ylabel(r'$\beta_{income}$')
plt.xlabel('Quantiles of the conditional food expenditure distribution')
plt.legend()
plt.show()
Explanation: Second plot
The dotted black lines form a 95% point-wise confidence band around 10 quantile regression estimates (solid black line). The red lines represent OLS regression results along with their 95% confidence interval.
In most cases, the quantile regression point estimates lie outside the OLS confidence interval, which suggests that the effect of income on food expenditure may not be constant across the distribution.
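As a quick numerical check of this statement (a small sketch that reuses the models DataFrame and the ols dictionary built above):
python
outside = ((models.b < ols['lb']) | (models.b > ols['ub'])).sum()
print('Quantile slope estimates outside the OLS 95% CI: %d of %d' % (outside, len(models)))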
End of explanation |
15,570 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Amongst the H3N8 isolates, here is the full range of patristic distances present in the PB2 gene tree.
Step1: Because the taxa names are not identical between the tree and the transmission graph, I will use a
dictionary mapping to map the taxa name in the graph to the taxa name in the tree.
Step2: Aaaaand here, let's show what patristic distances we're capturing! | Python Code:
patristic_distances = dp.treecalc.PatristicDistanceMatrix(tree=tree).distances()
plt.hist(patristic_distances)
plt.xlabel('Patristic Distance')
plt.ylabel('Counts')
plt.title('Histogram of Patristic \n Distances in the PB2 Tree')
transmission_graph = nx.read_gpickle('Minto Flats.pkl')
transmission_graph.nodes(data=True)
# Iterate over a static list of nodes so that nodes can be removed while looping
for node in list(transmission_graph.nodes(data=True)):
    if node[1]['subtype'] != 'H3N8':
        transmission_graph.remove_node(node[0])
# transmission_graph.nodes(data=True)
len(transmission_graph.nodes())
Explanation: Amongst the H3N8 isolates, here is the full range of patristic distances present in the PB2 gene tree.
End of explanation
taxadict = {str(t).split("'")[0].split("|")[0].replace("_", " "):t for t in tree.taxon_set}
# taxadict
graph_pds = []
for (taxa1, taxa2) in transmission_graph.edges():
# Note here that some of the taxa names present in the network are absent from the trees.
# As such, I will have to ask Kyle to re-run the H3N8 trees. Might take the chance to show
# him some Python.
if taxa1 in taxadict.keys() and taxa2 in taxadict.keys():
t1 = taxadict[taxa1]
t2 = taxadict[taxa2]
pd = dp.treecalc.patristic_distance(tree, t1, t2)
graph_pds.append(pd)
Explanation: Because the taxa names are not identical between the tree and the transmission graph, I will use a
dictionary mapping to map the taxa name in the graph to the taxa name in the tree.
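For illustration, this is roughly the string cleanup used to build the dictionary keys; the label below is made up, not taken from the data (the real code also strips the quote characters that dendropy adds around taxon labels):
python
# Hypothetical taxon label, purely to illustrate the key normalization used above
raw_label = "A/mallard/Minto_Flats/42/2010|PB2"
key = raw_label.split("|")[0].replace("_", " ")
print(key)  # -> 'A/mallard/Minto Flats/42/2010', intended to match the node names in the graph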
End of explanation
plt.hist(graph_pds)
plt.xlabel('Patristic Distance')
plt.ylabel('Counts')
plt.title('Histogram of Patristic \n Distances Captured by the Network')
plt.show()
Explanation: Aaaaand here, let's show what patristic distances we're capturing!
End of explanation |
15,571 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ch 02
Step1: Below is a series of numbers. Don't worry what they mean. Just for fun, let's think of them as neural activations.
Step2: Create a boolean variable called spike to detect a sudden increase in the values.
All variables must be initialized. Go ahead and initialize the variable by calling run() on its initializer
Step3: Loop through the data and update the spike variable when there is a significant increase
Step4: You forgot to close the session! Here, let me do it | Python Code:
import tensorflow as tf
sess = tf.InteractiveSession()
Explanation: Ch 02: Concept 05
Using variables
Here we go, here we go, here we go! Moving on from those simple examples, let's get a better understanding of variables. Start with a session:
End of explanation
raw_data = [1., 2., 8., -1., 0., 5.5, 6., 13]
Explanation: Below is a series of numbers. Don't worry what they mean. Just for fun, let's think of them as neural activations.
End of explanation
spike = tf.Variable(False)
spike.initializer.run()
Explanation: Create a boolean variable called spike to detect a sudden increase in the values.
All variables must be initialized. Go ahead and initialize the variable by calling run() on its initializer:
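As a side note, when a graph contains many variables it is common to initialize them all at once; this one-liner is equivalent here:
python
# Equivalent alternative: initialize every variable defined so far in one go
sess.run(tf.global_variables_initializer())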
End of explanation
for i in range(1, len(raw_data)):
if raw_data[i] - raw_data[i-1] > 5:
updater = tf.assign(spike, tf.constant(True))
updater.eval()
else:
tf.assign(spike, False).eval()
print("Spike", spike.eval())
Explanation: Loop through the data and update the spike variable when there is a significant increase:
End of explanation
sess.close()
Explanation: You forgot to close the session! Here, let me do it:
End of explanation |
15,572 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<b>This notebook tries to detect "special words" in a corpus of mailing lists.</b>
(for now it works with two mailing lists only)
-it computes and exports in .csv files the word counts (words and their occurrences)
-it computes and exports in .csv files the list of common words that are introduced by different people in different lists
-it computes and prints the 'influential words' (see definition in the box)
Further extensions
Step1: First, we shall compute the word counts on the lists.
Data will also be exported to .csv files
Step2: Let's print some useful descriptive data
Step3: We want to compute the list of common words that are introduced by different people in different lists.
The results are exported in a .csv file
Step4: Let's identify "influential words" (see definition below) and print them | Python Code:
from bigbang.archive import Archive
from bigbang.archive import load as load_archive
import bigbang.parse as parse
import bigbang.graph as graph
import bigbang.mailman as mailman
import bigbang.process as process
import networkx as nx
import matplotlib.pyplot as plt
import pandas as pd
from pprint import pprint as pp
import pytz
import numpy as np
import math
import nltk
from itertools import repeat
from nltk.stem.lancaster import LancasterStemmer
st = LancasterStemmer()
from nltk.corpus import stopwords
import re
#insert TWO urls of mailing lists
urls = [ "http://mm.icann.org/pipermail/wp4/",
"http://mm.icann.org/pipermail/ge/"]
try:
arch_paths =[]
for url in urls:
arch_paths.append('../archives/'+url[:-1].replace('://','_/')+'.csv')
archives = [load_archive(arch_path) for arch_path in arch_paths]
except:
arch_paths =[]
for url in urls:
arch_paths.append('../archives/'+url[:-1].replace('//','/')+'.csv')
archives = [load_archive(arch_path) for arch_path in arch_paths]
#to stem or not to stem?
#if stem is set to True, then words are converted into their stem(no plurals, no suffixes, etc.)
#if stem is set to False, then words are processed for their literal spelling
stem = False
Explanation: <b>This notebook tries to detect "special words" in a corpus of mailing lists.</b>
(for now it works with two mailing lists only)
-it computes and exports in .csv files the word counts (words and their occurrences)
-it computes and exports in .csv files the list of common words that are introduced by different people in different lists
-it computes and prints the 'influential words' (see definition in the box)
Further extensions:
-from two lists to n lists !
End of explanation
#Compute word count on the first list
wordcount={}
for row in archives[0].data.iterrows():
if row[1]["Body"] is not None:
w = row[1]["Body"].replace("'", "")
k = re.sub(r'[^\w]', ' ', w)
t = nltk.tokenize.word_tokenize(k)
for g in t:
try:
if stem: word = st.stem(g)
else: word = g
except:
print g
pass
if word in stopwords.words('english'):
continue
if word not in wordcount:
wordcount[word] = [1]
wordcount[word].append(row[0])
wordcount[word].append(row[1]["Date"])
wordcount[word].append(row[1]["From"])
wordcount[word].append(row[1]["In-Reply-To"])
else:
wordcount[word][0] += 1
wd = wordcount #In case
#Compute word count on the second list
wordcount1={}
i = 0
for row in list(archives[1].data.iterrows())[:100]:  # iterrows() is a generator, so materialize it before slicing
i+=1
print i
if row[1]["Body"] is not None:
w = row[1]["Body"].replace("'", "")
k = re.sub(r'[^\w]', ' ', w)
t = nltk.tokenize.word_tokenize(k)
for g in t:
try:
if stem: word = st.stem(g)
else: word = g
except:
print g
pass
if word in stopwords.words('english'):
continue
if word not in wordcount1:
wordcount1[word] = [1]
wordcount1[word].append(row[0])
wordcount1[word].append(row[1]["Date"])
wordcount1[word].append(row[1]["From"])
wordcount1[word].append(row[1]["In-Reply-To"])
else:
wordcount1[word][0] += 1
#Create and export a wordcount information dataframe per mailing list
#set the variable 'path' as a valid directory path where to store the files
path = 'c:/users/davide/bigbang/'
asd = pd.DataFrame(wordcount)
new_dataframe = asd.transpose()
new_dataframe.columns = ["Wordcount", "Message-ID", "Date","From","In-Reply-To"]
new_dataframe.to_csv(path+'wordcount_info_'+urls[0].split('/')[-2]+'.csv')
asd1 = pd.DataFrame(wordcount1)
new_dataframe1 = asd1.transpose()
new_dataframe1.columns = ["Wordcount", "Message-ID", "Date","From","In-Reply-To"]
new_dataframe1.to_csv(path+'wordcount_info_'+urls[1].split('/')[-2]+'.csv')
print 'File exported!'
print 'Check '+path+'wordcount_info_'+urls[0].split('/')[-2]+'.csv and '+path+'wordcount_info_'+urls[1].split('/')[-2]+'.csv'
Explanation: First, we shall compute the word counts on the lists.
Data will also be exported to .csv files
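As a possible refactor (not used in this notebook), the two nearly identical counting loops above could share a single helper. This is only a sketch and assumes the same cleaning steps as above:
python
def count_words(archive, stem=False):
    # Same logic as the per-list loops above, shared between archives
    counts = {}
    for msg_id, row in archive.data.iterrows():
        if row["Body"] is None:
            continue
        body = re.sub(r'[^\w]', ' ', row["Body"].replace("'", ""))
        for token in nltk.tokenize.word_tokenize(body):
            word = st.stem(token) if stem else token
            if word in stopwords.words('english'):
                continue
        if word not in counts:
                counts[word] = [1, msg_id, row["Date"], row["From"], row["In-Reply-To"]]
            else:
                counts[word][0] += 1
    return counts

# e.g. wordcount = count_words(archives[0]); wordcount1 = count_words(archives[1])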
End of explanation
print 'Number of unique words in mailing list '+urls[0]
print len(wordcount)
print 'Number of unique words in mailing list '+urls[1]
print len(wordcount1)
samewordcount=0
for word in wordcount:
if word in wordcount1:
samewordcount += 1
print 'Number of same unique words in two mailing lists'
print samewordcount
samewords = {}
for word in wordcount:
if word in wordcount1:
if wordcount[word][3] == wordcount1[word][3]:
samewords[word] = [wordcount[word][0],wordcount[word][3],wordcount[word][2],
wordcount1[word][0],wordcount1[word][3],wordcount1[word][2]]
print 'Total number of same words that are introduced by same people'
print len(samewords.keys())
#build dataframe of information of those words introduced by same people
#and export to file
df1 = pd.DataFrame(samewords)
samewords_sameauthor_dataframe = df1.transpose()
samewords_sameauthor_dataframe.columns = ["Wordcount1", "From1", "Date1","Wordcount2", "From2", "Date2"]
samewords_sameauthor_dataframe.to_csv(path+'samewords_sameauthor.csv')
print 'File exported!'
print 'Check '+path+'samewords_sameauthor.csv'
samewordcount = 0
for word in wordcount:
if wordcount[word][0] >= 100 and wordcount[word][0] <= 500:
if word in wordcount1:
if wordcount1[word][0] >= 100 and wordcount1[word][0] <= 500:
samewordcount += 1
print 'Among 100-500 appearance words, the number of common words between the two mailing lists'
print samewordcount
same_person_count = 0
for word in wordcount:
if wordcount[word][0] >= 100 and wordcount[word][0] <= 500:
if word in wordcount1:
if wordcount1[word][0] >= 100 and wordcount1[word][0] <= 500:
if wordcount[word][3] == wordcount1[word][3]:
#print word
same_person_count += 1
print 'Among 100-500 appearance words, the number of common words between the two mailing lists that are first introduced by the same people'
print same_person_count
Explanation: Let's print some useful descriptive data:
End of explanation
#compute common word list(introduced by different people in different lists)
#and print the number
commonwords = {}
for word in wordcount:
if wordcount[word][0] >= 100 and wordcount[word][0] <= 500:
if word in wordcount1:
if wordcount1[word][0] >= 100 and wordcount1[word][0] <= 500:
if wordcount[word][3] != wordcount1[word][3]:
commonwords[word] = [wordcount[word][0],wordcount[word][3],wordcount[word][2],
wordcount1[word][0],wordcount1[word][3],wordcount1[word][2]]
print 'Number of common words introduced by different people in different lists'
print len(commonwords)
#build dataframe of information of those words introduced by different people
#and export to file
df1 = pd.DataFrame(commonwords)
commonword_differentauthor_dataframe = df1.transpose()
commonword_differentauthor_dataframe.columns = ["Wordcount1", "From1", "Date1","Wordcount2", "From2", "Date2"]
commonword_differentauthor_dataframe.to_csv(path+'commonwords_differentauthor.csv')
print 'File exported!'
print 'Check '+path+'commonwords_differentauthor.csv'
Explanation: We want to compute the list of common words that are introduced by different people in different lists.
The results are exported in a .csv file
End of explanation
#Compute 'influential words': words with the potential of idea flow between the two lists.
#Definition: word A is introduced by p in list1 first, and later q introduces
#the word A to list2 (or vice versa). We define "q saw it" as: q had already posted in list1 before p first used the word.
#We collect the full list of such words A.
#Build a dictionary with senders and date of first participation for each mailing list
first_participation = {}
for row in archives[0].data.iterrows():
if row[1]["From"] not in first_participation:
first_participation[row[1]["From"]] = row[1]["Date"]
first_participation1 = {}
for row in archives[1].data.iterrows():
if row[1]["From"] not in first_participation1:
first_participation1[row[1]["From"]] = row[1]["Date"]
time_influence = 0
influence_list = {}
for word in commonwords:
if commonwords[word][2] > commonwords[word][5]: #Author2 comes first
if commonwords[word][1] in first_participation1: #Check if author1 in list2
if first_participation1[commonwords[word][1]] < commonwords[word][5]: #Check if author1\
#in list2 and exists before the word first introduced in list2
influence_list[word] = commonwords[word]
time_influence += 1
else: #Author1 comes first
if commonwords[word][4] in first_participation:
if first_participation[commonwords[word][4]] < commonwords[word][2]:
influence_list[word] = commonwords[word]
time_influence += 1
#print the list of influential words (exclude numbers)
if len(influence_list.keys()) == 0: print 'No influential words detected'
for word, info in influence_list.iteritems():
if not word.isdigit():
print '"'+word+'"'
print info
print ' '
Explanation: Let's identify "influential words" (see definition below) and print them
End of explanation |
15,573 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#PMT/ADC" data-toc-modified-id="PMT/ADC-1"><span class="toc-item-num">1 </span>PMT/ADC</a></div><div class="lev2 toc-item"><a href="#Example-of-how-to-compress-bytes-(e.g.,-JSON)-to-bzip2" data-toc-modified-id="Example-of-how-to-compress-bytes-(e.g.,-JSON)-to-bzip2-11"><span class="toc-item-num">1.1 </span>Example of how to compress bytes (e.g., JSON) to bzip2</a></div><div class="lev1 toc-item"><a href="#Pump" data-toc-modified-id="Pump-2"><span class="toc-item-num">2 </span>Pump</a></div>
# PMT/ADC
Step1: Example of how to compress bytes (e.g., JSON) to bzip2
Step2: Pump | Python Code:
import gtk
import gobject
import threading
import datetime as dt
import matplotlib as mpl
import matplotlib.style
import numpy as np
import pandas as pd
from streaming_plot import StreamingPlot
def _generate_data(stop_event, data_ready, data):
'''
Generate random data to emulate, e.g., reading data from ADC.
The function is run in its own thread.
Parameters
----------
stop_event : threading.Event
Function returns when :data:`stop_event` is set.
data_ready : threading.Event
Function sets :data:`data_ready` whenever new data is available.
data : list
The function **MUST**:
- Return when the :data:`stop_event` is set.
- Set :data:`data_ready` event whenever new data is available.
'''
delta_t = dt.timedelta(seconds=.1)
samples_per_plot = 5
while True:
time_0 = dt.datetime.now()
values_i = np.random.rand(samples_per_plot)
absolute_times_i = pd.Series([time_0 + i * delta_t
for i in xrange(len(values_i))])
data_i = pd.Series(values_i, index=absolute_times_i)
data.append(data_i)
data_ready.set()
if stop_event.wait(samples_per_plot *
delta_t.total_seconds()):
break
def measure_dialog(f_data, duration_s=None, auto_start=True,
auto_close=True):
'''
Launch a GTK dialog and plot data
Parameters
----------
f_data : function(stop_event, data_ready, data)
Function to run to generate data, e.g., read data from ADC.
The function is run in its own thread and is provided the following
parameters:
- :data:`stop_event` : threading.Event
- :data:`data_ready` : threading.Event
- :data:`data` : list
The function **MUST**:
- Return when the :data:`stop_event` is set.
- Set :data:`data_ready` event whenever new data is available.
    duration_s : float, optional
        Length of time to measure for. Note: this value is passed directly to
        ``gobject.timeout_add``, which interprets it as milliseconds.
If duration is not specified, measure until window is closed or
``Pause`` button is pressed.
auto_start : bool, optional
Automatically start measuring when the dialog is launched.
Default is ``True``.
auto_close : bool, optional
If ``duration_s`` is specified, automatically close window once the
measurement duration has passed (unless the ``Pause`` button has been
        pressed).
Default is ``True``.
'''
# `StreamingPlot` class uses threads. Need to initialize GTK to use
# threads. See [here][1] for more information.
#
# [1]: http://faq.pygtk.org/index.py?req=show&file=faq20.001.htp
gtk.gdk.threads_init()
with mpl.style.context('seaborn',
{'image.cmap': 'gray',
'image.interpolation' : 'none'}):
# Create dialog window to wrap PMT measurement view widget.
dialog = gtk.Dialog()
dialog.set_default_size(800, 600)
view = StreamingPlot(data_func=f_data)
dialog.get_content_area().pack_start(view.widget, True, True)
dialog.connect('check-resize', lambda *args: view.on_resize())
dialog.set_position(gtk.WIN_POS_MOUSE)
dialog.show_all()
view.fig.tight_layout()
if auto_start:
gobject.idle_add(view.start)
def _auto_close(*args):
if not view.stop_event.is_set():
# User did not explicitly pause the measurement. Automatically
# close the measurement and continue.
dialog.destroy()
if duration_s is not None:
stop_func = _auto_close if auto_close else view.pause
gobject.timeout_add(duration_s, stop_func)
measurement_complete = threading.Event()
view.widget.connect('destroy', lambda *args: measurement_complete.set())
dialog.run()
dialog.destroy()
measurement_complete.wait()
if view.data:
return pd.concat(view.data)
else:
return None
data = measure_dialog(_generate_data, duration_s=5000, auto_close=True)
view = StreamingPlot(data_func=_generate_data)
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#PMT/ADC" data-toc-modified-id="PMT/ADC-1"><span class="toc-item-num">1 </span>PMT/ADC</a></div><div class="lev2 toc-item"><a href="#Example-of-how-to-compress-bytes-(e.g.,-JSON)-to-bzip2" data-toc-modified-id="Example-of-how-to-compress-bytes-(e.g.,-JSON)-to-bzip2-11"><span class="toc-item-num">1.1 </span>Example of how to compress bytes (e.g., JSON) to bzip2</a></div><div class="lev1 toc-item"><a href="#Pump" data-toc-modified-id="Pump-2"><span class="toc-item-num">2 </span>Pump</a></div>
# PMT/ADC
End of explanation
from IPython.display import display
import bz2
data = pd.concat(view.data)
data_json = data.to_json()
data_json_bz2 = bz2.compress(data_json)
data_from_json = pd.read_json(bz2.decompress(data_json_bz2), typ='series')
len(data_json), len(data_json_bz2)
Explanation: Example of how to compress bytes (e.g., JSON) to bzip2
End of explanation
import gobject
import gtk
import mr_box_peripheral_board
import mr_box_peripheral_board.ui.gtk.pump_ui as pump_ui
reload(pump_ui)
# `PumpControl` class uses threads. Need to initialize GTK to use threads.
# See [here][1] for more information.
#
# [1]: http://faq.pygtk.org/index.py?req=show&file=faq20.001.htp
gtk.gdk.threads_init()
# Create pump control view widget.
pump_control_view = pump_ui.PumpControl(None, frequency_hz=100, duration_s=10)
# Start pump automatically.
gobject.idle_add(pump_control_view.start)
# Create dialog window to wrap pump control view widget.
dialog = gtk.Dialog()
dialog.get_content_area().pack_start(pump_control_view.widget, True, True)
dialog.set_position(gtk.WIN_POS_MOUSE)
dialog.show_all()
dialog.run()
dialog.destroy()
Explanation: Pump
End of explanation |
15,574 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of how to use classification code for ship logbooks
Imports
Step1: Initialize classifier
Classification algorithm can be set to "Naive Bayes" or "Decision Tree"
Step2: Load Data, Clean data, and Perform Classification
This function loads, cleans, and classifies data
Options include
Step3: How to access data from outside of the classifier
Step4: Sample plots of data | Python Code:
from exploringShipLogbooks.config import non_slave_ships
from exploringShipLogbooks.classification import LogbookClassifier
Explanation: Example of how to use classification code for ship logbooks
Imports
End of explanation
cl = LogbookClassifier(classification_algorithm="Naive Bayes")
Explanation: Initialize classifier
Classification algorithm can be set to "Naive Bayes" or "Decision Tree"
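For reference, the other supported option mentioned above is configured the same way:
python
# Alternative: decision tree instead of Naive Bayes
cl_dt = LogbookClassifier(classification_algorithm="Decision Tree")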
End of explanation
cl.load_clean_and_classify(fuzz=False, export_csv=True)
Explanation: Load Data, Clean data, and Perform Classification
This function loads, cleans, and classifies data
Options include:
fuzz - boolean value for whether to perform fuzzy string matching on values or not
export_csv - boolean value to determine whether classification output is saved. The csv contains information for every log used in the code, with the following key:
0 = unclassified data
1 = data used as negative training data
2 = data used as positive validation data from cliwoc database
3 = slave voyages database data
4 = data classified as a non-slave ship
5 = data classified as a slave ship
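To summarize the exported codes you could read the CSV back with pandas; note that the file name and column name below are placeholders, not taken from this notebook:
python
import pandas as pd
# Placeholder file/column names -- replace with the actual export location
results = pd.read_csv('classification_output.csv')
print(results['classification'].value_counts().sort_index())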
End of explanation
# data that was classified (unknown class before classification)
cl.unclassified_logs.head()
# data used for validation: 20% of slave voyage logs
cl.validation_set_2.head()
# data used for validation: logs that mention slaves in cliwoc data set
cl.validation_set_1.head()
# data used for training classifier
cl.training_data.head()
Explanation: How to access data from outside of the classifier
End of explanation
import exploringShipLogbooks
import os.path as op
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
import pandas as pd
# load un-cleaned slave_voyage_logs data
data_path = op.join(exploringShipLogbooks.__path__[0], 'data')
file_name = data_path + '/tastdb-exp-2010'
slave_voyage_logs = pd.read_pickle(file_name)
fig1, ax1 = plt.subplots()
ax1.hist(pd.concat([cl.validation_set_2, cl.training_data], ignore_index = True)['Year'])
ax1.set_xlabel('Year', fontsize = 30)
ax1.set_ylabel('Counts', fontsize = 30)
plt.xlim([1750, 1850])
for tick in ax1.xaxis.get_major_ticks():
tick.label.set_fontsize(26)
for tick in ax1.yaxis.get_major_ticks():
tick.label.set_fontsize(26)
fig1.set_size_inches(10, 8)
plt.savefig('slave_voyage_years.png')
fig2, ax2 = plt.subplots()
ax2.hist(pd.concat([cl.validation_set_1, cl.unclassified_logs], ignore_index = True)['Year'])
ax2.set_xlabel('Year', fontsize = 30)
ax2.set_ylabel('Counts', fontsize = 30)
plt.xlim([1750, 1850])
for tick in ax2.xaxis.get_major_ticks():
tick.label.set_fontsize(26)
for tick in ax2.yaxis.get_major_ticks():
tick.label.set_fontsize(26)
fig2.set_size_inches(11, 8)
plt.savefig('cliwoc_years.jpeg')
fractions = []
fract_dict = dict(slave_voyage_logs['national'].value_counts(normalize=True))
fractions = []
nats = []
for key in fract_dict:
if fract_dict[key] > 0.01:
nats.append(key)
fractions.append(fract_dict[key])
explode=[0.05] * len(fractions)
fig2, ax2 = plt.subplots()
fig2.set_size_inches(10,10)
matplotlib.rcParams['font.size'] = 30
matplotlib.pylab.pie(fractions, labels = nats, explode = explode)
plt.savefig('slave_voyages_nats.png')
fractions = []
fract_dict = dict(cl.cliwoc_data_all['Nationality'].value_counts(normalize=True))
fractions = []
nats = []
for key in fract_dict:
if fract_dict[key] > 0.01:
nats.append(key)
fractions.append(fract_dict[key])
explode=[0.05] * len(fractions)
fig2, ax2 = plt.subplots()
fig2.set_size_inches(10,10)
matplotlib.rcParams['font.size'] = 30
matplotlib.pylab.pie(fractions, labels = nats, explode = explode)
plt.savefig('cliwoc_nats.png')
Explanation: Sample plots of data
End of explanation |
15,575 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, shape=(None,real_dim), name='inputs_real')
inputs_z = tf.placeholder(tf.float32, shape=(None,z_dim), name='inputs_z')
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
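For example, a batch of MNIST images in [0, 1] can be mapped to [-1, 1] with the same one-liner that the training loop uses later:
python
batch_images = batch_images*2 - 1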
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha*h1,h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, reuse=False, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, reuse=True, alpha=alpha)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=(tf.ones_like(d_logits_real) * (1 - smooth))))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=(tf.zeros_like(d_logits_fake))))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=(tf.ones_like(d_logits_fake))))
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.
End of explanation
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
15,576 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First order hold sampling of a lead compensator
\begin{equation}
F(s) = K\frac{s+b}{s+a}
\end{equation}
Step1: Sampling and taking the z-transform of the step-response
\begin{equation}
Y(z) = \frac{1}{\lambda} \left( \frac{z}{z-\mathrm{e}^{\lambda h}} - \frac{z}{z-1} \right).
\end{equation}
Dividing by the z-transform of the input signal
\begin{equation}
H(z) = \frac{z-1}{z}Y(z) = \frac{1}{\lambda} \left( \frac{ \mathrm{e}^{\lambda h} - 1 }{ z - \mathrm{e}^{\lambda h} } \right)
\end{equation}
Verifying for specific value of lambda | Python Code:
import sympy as sy
h, b, a, K = sy.symbols('h, b, a, K', real=True, positive=True)
s, z = sy.symbols('s, z', real=False)
F = K*(s+b)/(s+a)
U = F/s/s
Up = sy.apart(U, s)
Up
from sympy.integrals.transforms import inverse_laplace_transform
from sympy.abc import t
u = sy.simplify(inverse_laplace_transform(Up, s, t))
u
Explanation: First order hold sampling of a lead compensator
\begin{equation}
F(s) = K\frac{s+b}{s+a}
\end{equation}
End of explanation
import numpy as np
import control.matlab as cm
lam = -0.5
h = 0.1
G = cm.tf([1], [1, -lam])
Gd = cm.c2d(G, h)
Hd = 1/lam * cm.tf([np.exp(lam*h)-1], [1, -np.exp(lam*h)], h)  # denominator is z - e^{lambda h}, sampled with period h
print(Gd)
print(Hd)
Explanation: Sampling and taking the z-transform of the step-response
\begin{equation}
Y(z) = \frac{1}{\lambda} \left( \frac{z}{z-\mathrm{e}^{\lambda h}} - \frac{z}{z-1} \right).
\end{equation}
Dividing by the z-transform of the input signal
\begin{equation}
H(z) = \frac{z-1}{z}Y(z) = \frac{1}{\lambda} \left( \frac{ \mathrm{e}^{\lambda h} - 1 }{ z - \mathrm{e}^{\lambda h} } \right)
\end{equation}
Verifying for specific value of lambda
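As an extra numerical sanity check (a small sketch; dcgain comes with python-control), both discretizations should agree, for instance in their DC gain:
python
print(cm.dcgain(Gd), cm.dcgain(Hd))  # both should equal 1/(-lam) = 2.0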
End of explanation |
15,577 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Double inverted pendulum
In this Jupyter Notebook we illustrate the example DIP. This example illustrates how to use DAE models in do-mpc.
Open an interactive online Jupyter Notebook with this content on Binder
Step1: Model
In the following we will present the configuration, setup and connection between these blocks, starting with the model.
In this example we consider the double pendulum on a cart as depicted below
Step2: Parameters
The model is configured with the following (certain) parameters
Step3: We furthermore introduce the following derived parameters to conveniently formulate the model equations below.
Step4: Euler-Lagrangian equations
The dynamics of the double pendulum can be derived from the Euler-Lagrangian equations, which yield
Step5: At this point we have two options. The typical approach would be to rewrite the system as
Step6: which makes it very convenient to formulate the ODE in terms of $x,u,z$
Step7: The only remaining challenge is to implement the relationship between $x,u$ and $z$, in the form of
Step8: with just a few lines of code we have defined the dynamics of the double pendulum!
Energy equations
The next step is to introduce new "auxiliary" expressions to do-mpc for the kinetic and potential energy of the system. This is required in this example, because we will need these expressions for the formulation of the MPC controller.
Introducing these expressions has the additional advantage that do-mpc saves and exports the calculated values, which is great for monitoring and debugging.
For the kinetic energy, we have
Step9: Finally, the model setup is completed
Step10: Controller
Next, the controller is configured.
First, an instance of the MPC class is generated with the prediction model defined above
Step11: We choose the prediction horizon n_horizon=100, set the robust horizon n_robust = 0. The time step t_step is set to $0.04s$ and parameters of the applied discretization scheme orthogonal collocation are as seen below
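As a sketch of the typical do-mpc call for the settings named here (the full configuration, including the collocation options, follows below):
python
mpc = do_mpc.controller.MPC(model)
mpc.set_param(n_horizon=100, n_robust=0, t_step=0.04)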
Step12: Objective
The control objective is to erect the double pendulum and to stabilize it in the up-up position. It is not straightforward to formulate an objective which yields this result. Classical set-point tracking, e.g. with the set-point
Step13: Constraints
In the next step, the constraints of the control problem are set.
In this case, there are only upper and lower bounds for the input.
Step14: We can now setup the controller.
Step15: Estimator
We assume that all states can be directly measured (state-feedback)
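In do-mpc this assumption is expressed with a one-line estimator (sketch):
python
estimator = do_mpc.estimator.StateFeedback(model)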
Step16: Simulator
To create a simulator in order to run the MPC in a closed-loop, we create an instance of the do-mpc simulator which is based on the same model
Step17: For the simulation, we use the time step t_step as for the optimizer
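A sketch of the corresponding do-mpc calls (the full setup follows below):
python
simulator = do_mpc.simulator.Simulator(model)
simulator.set_param(t_step=0.04)
simulator.setup()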
Step18: Closed-loop simulation
For the simulation of the MPC configured for the DIP, we inspect the file main.py.
We define the initial state of the system and set for all parts of the closed-loop configuration
Step19: Note that mpc.set_initial_guess() is very important in this example. Also note that we didn't set the initial state to exactly $\pi$, which would result in unfavorable numerical issues (it would still work, however).
Prepare visualization
For the visualization of the control performance, we first import matplotlib and change some basic settings
Step20: We use the plotting capabilities, which are included in do-mpc.
The mpc_graphics contain information like the current estimated state and the predicted trajectory of the states and inputs based on the solution of the control problem.
The sim_graphics contain the information about the simulated evaluation of the system.
Step21: For the example of the DIP we create a new function which takes as input the states (at a given time $k$) and returns the x and y positions of the two bars (the arms of the pendulum).
Step22: We then setup a graphic
Step23: Run open-loop
Before we test the closed-loop case, let's plot one open-loop prediction to check how the resulting graphic looks.
Step24: The first optimization will take rather long (4 seconds) but in the end we get
Step25: The open-loop prediction looks perfectly fine! We see that within the horizon the potential energy settles on a plateau greater than zero, while the kinetic energy becomes zero. This indicates our desired up-up position. Both angles seem to reach $2\pi$.
Run closed-loop
The closed-loop system is now simulated for 100 steps (and the output of the optimizer is suppressed)
Step26: Results
The next cell converts the results of the closed-loop MPC simulation into a gif (might take a few minutes) | Python Code:
import numpy as np
import sys
from casadi import *
# Add do_mpc to path. This is not necessary if it was installed via pip
sys.path.append('../../../')
# Import do_mpc package:
import do_mpc
Explanation: Double inverted pendulum
In this Jupyter Notebook we illustrate the example DIP. This example illustrates how to use DAE models in do-mpc.
Open an interactive online Jupyter Notebook with this content on Binder:
The example consists of the three modules template_model.py, which describes the system model, template_mpc.py, which defines the settings for the control and template_simulator.py, which sets the parameters for the simulator.
The modules are used in main.py for the closed-loop execution of the controller.
We start by importing basic modules and do-mpc.
End of explanation
model_type = 'continuous' # either 'discrete' or 'continuous'
model = do_mpc.model.Model(model_type)
Explanation: Model
In the following we will present the configuration, setup and connection between these blocks, starting with the model.
In this example we consider the double pendulum on a cart as depicted below:
<img src="dip_sketch.svg" width="60%">
The system is described in terms of its horizontal position $x$ and the two angles $\theta$, where $\theta_1 = \theta_2 = 0$ denotes the upright position.
We will formulate a continuous dynamic model for this system and start by initiating a do-mpc Model instance:
End of explanation
m0 = 0.6 # kg, mass of the cart
m1 = 0.2 # kg, mass of the first rod
m2 = 0.2 # kg, mass of the second rod
L1 = 0.5 # m, length of the first rod
L2 = 0.5 # m, length of the second rod
g = 9.80665 # m/s^2, Gravity
Explanation: Parameters
The model is configured with the following (certain) parameters:
End of explanation
l1 = L1/2 # m,
l2 = L2/2 # m,
J1 = (m1 * l1**2) / 3 # Inertia
J2 = (m2 * l2**2) / 3 # Inertia
h1 = m0 + m1 + m2
h2 = m1*l1 + m2*L1
h3 = m2*l2
h4 = m1*l1**2 + m2*L1**2 + J1
h5 = m2*l2*L1
h6 = m2*l2**2 + J2
h7 = (m1*l1 + m2*L1) * g
h8 = m2*l2*g
Explanation: We furthermore introduce the following derived parameters to conveniently formulate the model equations below.
End of explanation
pos = model.set_variable('_x', 'pos')
theta = model.set_variable('_x', 'theta', (2,1))
dpos = model.set_variable('_x', 'dpos')
dtheta = model.set_variable('_x', 'dtheta', (2,1))
u = model.set_variable('_u', 'force')
Explanation: Euler-Lagrangian equations
The dynamics of the double pendulum can be derived from the Euler-Lagrangian equations, which yield:
\begin{align}
h_1\ddot{x}+h_2\ddot{\theta}_1\cos(\theta_1)+h_3\ddot{\theta}_2\cos(\theta_2)
&= (h_2\dot{\theta}_1^{2}\sin(\theta_1) + h_3\dot{\theta}_2^{2}\sin(\theta_2) + u)\\
h_2\cos(\theta_1)\ddot{x} + h_4\ddot{\theta}_1 + h_5\cos(\theta_1-\theta_2)\ddot{\theta}_2
&= (h_7\sin(\theta_1) - h_5\dot{\theta}_2^{2}\sin(\theta_1-\theta_2))\\
h_3\cos(\theta_2)\ddot{x} + h_5\cos(\theta_1-\theta_2)\ddot{\theta}_1 + h_6\ddot{\theta}_2
&= (h_5\dot{\theta}_1^{2}\sin(\theta_1-\theta_2) + h_8\sin(\theta_2))
\end{align}
we introduce the states
$$x=[x,\theta_1, \theta_2, \dot{x}, \dot{\theta}_1, \dot{\theta}_2]^T$$
and input:
$$ u = f$$
which is the horizontal force applied to the cart.
End of explanation
ddpos = model.set_variable('_z', 'ddpos')
ddtheta = model.set_variable('_z', 'ddtheta', (2,1))
Explanation: At this point we have two options. The typical approach would be to rewrite the system as:
$$M(x) \dot x = A(x) x + B u$$
where it can be shown that
$$ \det(M) > 0, \forall x$$
such that we can obtain the ODE:
$$\dot x = M(x)^{-1}(A(x)x + B u)$$
do-mpc fully supports this option but it requires some nasty reformulations of the above equations and yields a very complex expression for the ODE.
Instead we suggest ...
Differential algebraic equation (DAE)
We introduce new algebraic states
$$ z=[\ddot{x}, \ddot{\theta}_1, \ddot{\theta}_2]^T$$
End of explanation
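Before committing to the DAE formulation, a quick numerical check (purely illustrative, reusing the parameters h1–h6 defined above) confirms the claim that the mass matrix collecting the acceleration terms of the three Euler-Lagrange equations stays invertible for all angle combinations:
import numpy as np
# Mass matrix M(theta) of the acceleration terms in the Euler-Lagrange equations above.
def mass_matrix(t1, t2):
    c1, c2, c12 = np.cos(t1), np.cos(t2), np.cos(t1 - t2)
    return np.array([[h1,    h2*c1,  h3*c2 ],
                     [h2*c1, h4,     h5*c12],
                     [h3*c2, h5*c12, h6    ]])
angles = np.linspace(-np.pi, np.pi, 73)
min_det = min(np.linalg.det(mass_matrix(t1, t2)) for t1 in angles for t2 in angles)
print(min_det)  # strictly positive for all sampled angles, so M(x) is invertible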
model.set_rhs('pos', dpos)
model.set_rhs('theta', dtheta)
model.set_rhs('dpos', ddpos)
model.set_rhs('dtheta', ddtheta)
Explanation: which makes it very convenient to formulate the ODE in terms of $x,u,z$:
$$ \dot{x} = [\dot{x}, \dot{\theta}_1, \dot{\theta}_2, \ddot{x}, \ddot{\theta}_1, \ddot{\theta}_2]^T$$
End of explanation
euler_lagrange = vertcat(
# 1
h1*ddpos+h2*ddtheta[0]*cos(theta[0])+h3*ddtheta[1]*cos(theta[1])
- (h2*dtheta[0]**2*sin(theta[0]) + h3*dtheta[1]**2*sin(theta[1]) + u),
# 2
h2*cos(theta[0])*ddpos + h4*ddtheta[0] + h5*cos(theta[0]-theta[1])*ddtheta[1]
- (h7*sin(theta[0]) - h5*dtheta[1]**2*sin(theta[0]-theta[1])),
# 3
h3*cos(theta[1])*ddpos + h5*cos(theta[0]-theta[1])*ddtheta[0] + h6*ddtheta[1]
- (h5*dtheta[0]**2*sin(theta[0]-theta[1]) + h8*sin(theta[1]))
)
model.set_alg('euler_lagrange', euler_lagrange)
Explanation: The only remaining challenge is to implement the relationship between $x,u$ and $z$, in the form of:
$$g(x,u,z)=0$$
with do-mpc this is easily achieved:
End of explanation
E_kin_cart = 1 / 2 * m0 * dpos**2
E_kin_p1 = 1 / 2 * m1 * (
(dpos + l1 * dtheta[0] * cos(theta[0]))**2 +
(l1 * dtheta[0] * sin(theta[0]))**2) + 1 / 2 * J1 * dtheta[0]**2
E_kin_p2 = 1 / 2 * m2 * (
(dpos + L1 * dtheta[0] * cos(theta[0]) + l2 * dtheta[1] * cos(theta[1]))**2 +
(L1 * dtheta[0] * sin(theta[0]) + l2 * dtheta[1] * sin(theta[1]))**
2) + 1 / 2 * J2 * dtheta[0]**2
E_kin = E_kin_cart + E_kin_p1 + E_kin_p2
E_pot = m1 * g * l1 * cos(
theta[0]) + m2 * g * (L1 * cos(theta[0]) +
l2 * cos(theta[1]))
model.set_expression('E_kin', E_kin)
model.set_expression('E_pot', E_pot)
Explanation: with just a few lines of code we have defined the dynamics of the double pendulum!
Energy equations
The next step is to introduce new "auxiliary" expressions to do-mpc for the kinetic and potential energy of the system. This is required in this example, because we will need these expressions for the formulation of the MPC controller.
Introducing these expressions has the additional advantage that do-mpc saves and exports the calculated values, which is great for monitoring and debugging.
For the kinetic energy, we have:
\begin{align}
E_{\text{kin.,cart}} &= \frac{1}{2} m_0 \dot{x}^{2}\\
E_{\text{kin.,}p_1} &= \frac{1}{2} m_1 (
(\dot{x} + l_1 \dot{\theta}_1 \cos(\theta_1))^{2} +
(l_1 \dot{\theta}_1 \sin(\theta_1))^{2}) + \frac{1}{2} J_1 \dot{\theta}_1^{2}\\
E_{\text{kin.,}p_2} &= \frac{1}{2} m_2 (
(\dot{x} + L_1 \dot{\theta}_1 \cos(\theta_1) + l_2 \dot{\theta}_2 \cos(\theta_2))^{2} \\
&+ (L_1 \dot{\theta}_1 \sin(\theta_1) + l_2 \dot{\theta}_2 \sin(\theta_2))^2) + \frac{1}{2} J_2 \dot{\theta}_1^{2}
\end{align}
and for the potential energy:
$$
E_{\text{pot.}} = m_1 g l_1 \cos(
\theta_1) + m_2 g (L_1 \cos(\theta_1) +
l_2 \cos(\theta_2))
$$
It only remains to formulate the expressions and set them to the model:
End of explanation
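As a quick, purely illustrative sanity check of these expressions (before the model is built): in the upright rest position the kinetic energy should vanish and the potential energy should equal $m_1 g l_1 + m_2 g (L_1 + l_2)$. Building a small CasADi function from the symbolic variables defined above makes this easy to verify:
# Illustrative check of the energy expressions at the upright rest position.
E_check = Function('E_check', [theta, dpos, dtheta], [E_kin, E_pot])
E_kin_up, E_pot_up = E_check([0, 0], 0, [0, 0])
print(E_kin_up, E_pot_up, m1*g*l1 + m2*g*(L1 + l2))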
# Build the model
model.setup()
Explanation: Finally, the model setup is completed:
End of explanation
mpc = do_mpc.controller.MPC(model)
Explanation: Controller
Next, the controller is configured.
First, an instance of the MPC class is generated with the prediction model defined above:
End of explanation
setup_mpc = {
'n_horizon': 100,
'n_robust': 0,
'open_loop': 0,
't_step': 0.04,
'state_discretization': 'collocation',
'collocation_type': 'radau',
'collocation_deg': 3,
'collocation_ni': 1,
'store_full_solution': True,
# Select the linear solver for IPOPT (MA27 would be faster, but requires a separate HSL install; MUMPS is used here):
'nlpsol_opts': {'ipopt.linear_solver': 'mumps'}
}
mpc.set_param(**setup_mpc)
Explanation: We choose the prediction horizon n_horizon=100, set the robust horizon n_robust = 0. The time step t_step is set to $0.04s$ and parameters of the applied discretization scheme orthogonal collocation are as seen below:
End of explanation
mterm = model.aux['E_kin'] - model.aux['E_pot'] # terminal cost
lterm = model.aux['E_kin'] - model.aux['E_pot'] # stage cost
mpc.set_objective(mterm=mterm, lterm=lterm)
# Input force is implicitly restricted through the objective.
mpc.set_rterm(force=0.1)
Explanation: Objective
The control objective is to erect the double pendulum and to stabilize it in the up-up position. It is not straightforward to formulate an objective which yields this result. Classical set-point tracking, e.g. with the set-point:
$$
\theta_s = [0,0,0]
$$
and a quadratic cost:
$$
J = \sum_{k=0}^N (\theta-\theta_s)^2
$$
is known to work very poorly. Clearly, the problem results from the fact that $\theta_s = 2\pi n,\ n\in\mathbb{Z}$ is also a valid solution.
Instead we will use an energy-based formulation for the objective. If we think of energy in terms of potential and kinetic energy it is clear that we want to maximize the potential energy (up-up position) and minimize the kinetic energy (stabilization).
Since we have already introduced the expressions for the potential and kinetic energy in the model, we can now simply reuse these expressions for the formulation of the objective function, as shown below:
End of explanation
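For comparison only, the quadratic set-point tracking cost discussed above (with the set-point $\theta_s = 0$) could be written as shown below. It is deliberately not passed to mpc.set_objective here, since, as explained, it is known to work poorly:
# Naive tracking cost (illustrative only, not used): penalize deviation from theta_s = 0.
J_naive = sum1(model.x['theta']**2)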
mpc.bounds['lower','_u','force'] = -4
mpc.bounds['upper','_u','force'] = 4
Explanation: Constraints
In the next step, the constraints of the control problem are set.
In this case, there are only upper and lower bounds for the input.
End of explanation
mpc.setup()
Explanation: We can now setup the controller.
End of explanation
estimator = do_mpc.estimator.StateFeedback(model)
Explanation: Estimator
We assume that all states can be directly measured (state-feedback):
End of explanation
simulator = do_mpc.simulator.Simulator(model)
Explanation: Simulator
To create a simulator in order to run the MPC in a closed-loop, we create an instance of the do-mpc simulator which is based on the same model:
End of explanation
params_simulator = {
# Note: cvode doesn't support DAE systems.
'integration_tool': 'idas',
'abstol': 1e-10,
'reltol': 1e-10,
't_step': 0.04
}
simulator.set_param(**params_simulator)
simulator.setup()
Explanation: For the simulation, we use the time step t_step as for the optimizer:
End of explanation
simulator.x0['theta'] = 0.99*np.pi
x0 = simulator.x0.cat.full()
mpc.x0 = x0
estimator.x0 = x0
mpc.set_initial_guess()
Explanation: Closed-loop simulation
For the simulation of the MPC configured for the DIP, we inspect the file main.py.
We define the initial state of the system and set it for all parts of the closed-loop configuration:
End of explanation
import matplotlib.pyplot as plt
plt.ion()
from matplotlib import rcParams
rcParams['text.usetex'] = False
rcParams['axes.grid'] = True
rcParams['lines.linewidth'] = 2.0
rcParams['axes.labelsize'] = 'xx-large'
rcParams['xtick.labelsize'] = 'xx-large'
rcParams['ytick.labelsize'] = 'xx-large'
Explanation: Note that mpc.set_initial_guess() is very important in this example. Also note that we didn't set the initial state at exactly $\pi$, which would lead to unfavorable numerical issues (it would still work, however).
Prepare visualization
For the visualization of the control performance, we first import matplotlib and change some basic settings:
End of explanation
mpc_graphics = do_mpc.graphics.Graphics(mpc.data)
Explanation: We use the plotting capabilities, which are included in do-mpc.
The mpc_graphics object contains information such as the current estimated state and the predicted trajectory of the states and inputs based on the solution of the control problem.
The sim_graphics object contains the information about the simulated evaluation of the system.
End of explanation
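The explanation above also mentions sim_graphics for the simulator data. It can be created in exactly the same way; this is shown here as an optional sketch and is not strictly required for the plots below:
# Graphics object for the simulator data (optional here):
sim_graphics = do_mpc.graphics.Graphics(simulator.data)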
def pendulum_bars(x):
x = x.flatten()
# Get the x,y coordinates of the two bars for the given state x.
line_1_x = np.array([
x[0],
x[0]+L1*np.sin(x[1])
])
line_1_y = np.array([
0,
L1*np.cos(x[1])
])
line_2_x = np.array([
line_1_x[1],
line_1_x[1] + L2*np.sin(x[2])
])
line_2_y = np.array([
line_1_y[1],
line_1_y[1] + L2*np.cos(x[2])
])
line_1 = np.stack((line_1_x, line_1_y))
line_2 = np.stack((line_2_x, line_2_y))
return line_1, line_2
Explanation: For the example of the DIP we create a new function which takes as input the states (at a given time $k$) and returns the x and y positions of the two bars (the arms of the pendulum).
End of explanation
%%capture
fig = plt.figure(figsize=(16,9))
ax1 = plt.subplot2grid((4, 2), (0, 0), rowspan=4)
ax2 = plt.subplot2grid((4, 2), (0, 1))
ax3 = plt.subplot2grid((4, 2), (1, 1))
ax4 = plt.subplot2grid((4, 2), (2, 1))
ax5 = plt.subplot2grid((4, 2), (3, 1))
ax2.set_ylabel('$E_{kin}$ [J]')
ax3.set_ylabel('$E_{pot}$ [J]')
ax4.set_ylabel('Angle [rad]')
ax5.set_ylabel('Input force [N]')
# Axis on the right.
for ax in [ax2, ax3, ax4, ax5]:
ax.yaxis.set_label_position("right")
ax.yaxis.tick_right()
if ax != ax5:
ax.xaxis.set_ticklabels([])
ax5.set_xlabel('time [s]')
mpc_graphics.add_line(var_type='_aux', var_name='E_kin', axis=ax2)
mpc_graphics.add_line(var_type='_aux', var_name='E_pot', axis=ax3)
mpc_graphics.add_line(var_type='_x', var_name='theta', axis=ax4)
mpc_graphics.add_line(var_type='_u', var_name='force', axis=ax5)
ax1.axhline(0,color='black')
bar1 = ax1.plot([],[], '-o', linewidth=5, markersize=10)
bar2 = ax1.plot([],[], '-o', linewidth=5, markersize=10)
ax1.set_xlim(-1.8,1.8)
ax1.set_ylim(-1.2,1.2)
ax1.set_axis_off()
fig.align_ylabels()
fig.tight_layout()
Explanation: We then setup a graphic:
End of explanation
u0 = mpc.make_step(x0)
Explanation: Run open-loop
Before we test the closed-loop case, let's plot one open-loop prediction to check what the resulting graphic looks like.
End of explanation
line1, line2 = pendulum_bars(x0)
bar1[0].set_data(line1[0],line1[1])
bar2[0].set_data(line2[0],line2[1])
mpc_graphics.plot_predictions()
mpc_graphics.reset_axes()
fig
Explanation: The first optimization will take rather long (4 seconds) but in the end we get:
EXIT: Optimal Solution Found.
which tells us that we found an optimal solution. Note that follow-up optimizations take around 100 ms due to warmstarting.
We can visualize the open-loop prediction with:
End of explanation
%%capture
# Quickly reset the history of the MPC data object.
mpc.reset_history()
n_steps = 100
for k in range(n_steps):
u0 = mpc.make_step(x0)
y_next = simulator.make_step(u0)
x0 = estimator.make_step(y_next)
Explanation: The open-loop prediction looks perfectly fine! We see that within the horizon the potential energy settles on a plateau greater than zero, while the kinetic energy becomes zero. This indicates our desired up-up position. Both angles seem to reach $2\pi$.
Run closed-loop
The closed-loop system is now simulated for 100 steps (and the output of the optimizer is suppressed):
End of explanation
from matplotlib.animation import FuncAnimation, FFMpegWriter, ImageMagickWriter
# The function describing the gif:
x_arr = mpc.data['_x']
def update(t_ind):
line1, line2 = pendulum_bars(x_arr[t_ind])
bar1[0].set_data(line1[0],line1[1])
bar2[0].set_data(line2[0],line2[1])
mpc_graphics.plot_results(t_ind)
mpc_graphics.plot_predictions(t_ind)
mpc_graphics.reset_axes()
anim = FuncAnimation(fig, update, frames=n_steps, repeat=False)
gif_writer = ImageMagickWriter(fps=20)
anim.save('anim_dip.gif', writer=gif_writer)
Explanation: Results
The next cell converts the results of the closed-loop MPC simulation into a gif (might take a few minutes):
End of explanation |
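If ImageMagick is not available, the FFMpegWriter already imported above can be used instead (assuming ffmpeg is installed on the system) to export an mp4 rather than a gif:
# Optional alternative to the gif export above:
mp4_writer = FFMpegWriter(fps=20)
anim.save('anim_dip.mp4', writer=mp4_writer)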
15,578 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table width="100%" border="0">
<tr>
<td><img src="./images/ing.png" alt="" align="left" /></td>
<td><img src="./images/ucv.png" alt="" align="center" height="100" width="100" /></td>
<td><img src="./images/mec.png" alt="" align="right"/></td>
</tr>
</table>
<br>
<h1 style="text-align
Step3: Aunque no se especifique explícitamente, cada variable sí tiene un tipo asociada a ella. El tipo es extraido del valor que le fue asignado.
Step4: Si asignamos un nuevo valor a una variable, su tipo puede cambiar.
Step5: Si tratamos de unsar una variable que no ha sido definida obtenemo un mensaje de error (NameError)
Step6: Tipos Fundamentales
Step7: Funciones de relacionadas con Tipos
El módulo types contiene definiciones de nombres de tipo que pueden ser usadas para testear si las variables son de un cierto tipo
Step8: Podemos también usar el método isinstance para testear tipos de variables
Step9: Conversión de Tipo
Step10: Un número complejo no puede ser convertido a un número flotante o a un entero. Necesitamos usar z.real, o bien z.imag, para extraer la parte que deseamos del número complejo z
Step11: Operadores y comparaciones
La mayoría de los operadores y las comparaciones en Python funcionan como one esperaría
Step12: Los operadores booleanos se escriben como palabras
Step13: Operadores de comparación >, <, >= (mayor o igual), <= (menor o igual), == igualdad, es identico.
Step14: Tipos compuestos
Step15: Podemos aislar un carácter en una cadena usando []
Step16: Atención usuarios de MATLAB
Step17: Si omitimos desde o bien hasta de [desde
Step18: Podemos también definir el tamaño del paso usando la sintaxis [desde
Step19: Esta técnica es llamada slicing ("rebanado"). Puede leer más sobre la sintaxis aquí http
Step20: Listas
Listas son muy similares a las cadenas, excepto que cada elemento puede ser de un tipo diferente.
La sintaxis para crear listas en Python es [..., ..., ...]
Step21: Podemos usar las mismas técnicas de "rebanado" que usamos en el caso de cadenas para manipular listas
Step22: Atención usuarios de MATLAB
Step23: Los elementos en una lista no requieren ser del mismo tipo
Step24: Las listas en Python pueden ser inhomogéneas y arbitrariamente anidadas
Step25: Las listas juegan un rol muy importante en Python y son, por ejemplo, usadas en bucles y otras estructuras de control de flujo (discutidas más abajo). Existan muchas funciones convenientes para generar listas de varios tipos, por ejemplo la función range
Step26: Agregando, insertando, modificando, y removiendo elementos de listas
Step27: Podemos modificar listas asignando nuevos valores a los elementos de la lista. En lenguaje técnico se dice que la lista es mutable.
Step28: Insertar un elemento en una posición específica insert
Step29: Eliminar el primer elemento con un valor específico usando 'remove'
Step30: Eliminar un elemento en una posición específica usando del
Step31: Podemos separar una tupla asignandola a una lista de variables separadas por coma
Step32: Si intentamos asignar un nuevo valor a un elemento de una tupla obtenemos un error
Step33: Dictionarios
Dictionarios son también como listas, excepto que cada elemento es un par clave-valor. La sintaxsis de los diccionarios es {clave1
Step34: Control de flujo
Sentencias condicionales
Step35: Aquí encontramos por primera un aspecto pecular e inusual del lenguaje Python
Step36: Ciclos
En Python, los ciclos (loops) puede ser programados de varias maneras diferentes. La forma más común es usando un cicle for, que se usa junto con objetos iterables, como por ejemplos las listas. La sintaxis básica es
Step37: El ciclo for itera sobre los elementos de la lista suministrada y ejecuta el bloque suministrado una vez para cada elemento. Cualquier tipo de lista puede ser usada para un ciclo for. Por ejemplo
Step38: Nota
Step39: Para iterar sobre pares clave-valor en un diccionario
Step40: Algunas veces es útil tener acceso a los índices de los valores mientras se itera sobre una lista. Podemos usar la función enumerate para esto
Step41: Listas
Step42: Ciclos while
Step43: Note que el comando print("listo") no es parte del cuerpo del ciclo while, debido a su indentación.
Funciones
En Python una función es definida usando la palabra clave def, seguida de un nombre para la función, una variable entre paréntesis (), y el símbolo de dos puntos
Step45: En forma opcional, pero muy recomendada, podemos definir un "docstring", que es una descripción del propósito y comportamiento de la función. El docstring debería ser incluido directamente después de la definición de la función, antes del código en el cuerpo de la función.
Step47: Funciones que retornan un valor usan la palabra clave return
Step49: Podemos retornar múltiples valores desde una función usando las tuplas (ver más arriba)
Step50: Argumentos por defecto y argumentos de palabra clave
En la definición de una función, podemos asignar valores por defecto a los argumentos de la función
Step51: Si no suministramos un valor para el argumento debug al llamar a la función mifunc se considera el valor definido por defecto
Step52: Si listamos explícitamente el nombre de los argumentos al llamar una función, ellos no necesitan estar en el mismo orden usando en la definición de la función. Esto es llamado argumentos de palabra clave (keyword), y son a menudo muy útiles en funciones que requieren muchos argumentos opcionales.
Step53: Funciones sin nombre (funciones lambda)
En Python podemos también crear funciones sin nombre, usando la palabra clave lambda
Step54: Esta técnica es útil, por ejemplo, cuando queremos pasar una función simple como argumento de otra función, como en este caso
Step58: Clases
Las clases son una característica clave de la programación orientada al objeto. Una clase es una estructura para representar un objeto y las operaciones que pueden ser realizadas sobre el objeto.
En Python una clase puede contener atributos (variables) y métodos (funciones).
En Python una clase es definida casi como una función, pero usando la palabra clave class, y la definición de la clase usualmente contiene algunas definiciones de métodos (una función en una clase).
Cada método de una clase debería tener un argumento self como su primer argumento. Este objeto es una autoreferencia.
Algunos nombres de métodos de clases tienen un significado especial, por ejemplo
Step59: Para crear una nuva instancia de una clase
Step60: Para invocar un método en la instancia de clase p
Step61: Note que llamar a métodos de clases puede modificar el estado de esa instancia de clase particular, pero no afecta otras instancias de la clase o alguna otra variable global.
Esto es una de las cosas buenas de un diseño orientado al objeto
Step62: Esto incluye el módulo completo y lo deja disponible para su uso en el programa. Por ejemplo, podemos escribir
Step63: Alternativamente, podemos elegir importar todos los símbolos (funciones y variables) en un módulo al espacio de nombres (namespace) actual (de modo que no necesitemos usar el prefijo "math." cada vez que usemos algo del módulo math
Step64: Esta forma de proceder puede ser muy conveniente, pero en programas largos que incluyen muchos módulos es a menudo una buena idea mantener los símbolos de cada módulo en sus propios espacios de nombres, usando import math. Esto elimina potenciales confusiones con eventuales colisiones de nombres.
Como una tercera alternativa, podemos importar sólo algunos símbolos seleccionados desde un módulo listando explícitamente aquellos símbolos que deseamos importar, en lugar de usar el carácter comodín *
Step65: Mirando qué contiene un módulo, y su documentación
Luego que se ha cargado un módulo, podemos listar los símbolos que éste provee usando la función dir
Step66: Usando la función help podemos obtener una descripción de cada función (casi... no todas las funciones tienen docstrings, como se les llama técnicamente. Sin embargo, la mayoría de las funciones están documentadas de esta forma).
Step67: También podemos usar la función help directamente sobre los módulos
Step68: Podemos importar el módulo mimodulo a un programa Python usando import
Step69: Use help(module) para obtener un resumen de lo que suministra el módulo
Step70: Excepciones
En Python los errores son manejados con una construcción especial de lenguaje llamada "Exceptions" (excepciones). Cuando ocurre un error, una excepción puede ser hecha, que interrumpe el flujo normal del programa y retorna a algún otro lugar del código donde se definan los comandos try-except más cercanos.
Para generar una excepción podemos usar el comando raise, que toma un argumento que debe ser una instancia de la clase BaseExpection o una clase derivada de ella.
Step71: Un úso típico de las excepciones es para abortar funciones cuando ocurre algún error, por ejemplo
Step72: Para obtener información sobre un error, podemos accesar la instancia de clase Exception que describe la excepción usando por ejemplo
Step73: Lectura adicional
http | Python Code:
# asignaciones de variables
x = 1.0
mi_variable = 12.2
Explanation: <table width="100%" border="0">
<tr>
<td><img src="./images/ing.png" alt="" align="left" /></td>
<td><img src="./images/ucv.png" alt="" align="center" height="100" width="100" /></td>
<td><img src="./images/mec.png" alt="" align="right"/></td>
</tr>
</table>
<br>
<h1 style="text-align: center;"> Curso de Python para Ingenieros Mecánicos </h1>
<h3 style="text-align: center;"> Por: Eduardo Vieira</h3>
<br>
<br>
<h1 style="text-align: center;"> Programación básica en Python y el Jupyter Notebook </h1>
<br>
Markdown
Texto en cursiva, negrita o cursiva negrita.
Puedes crear listas numeradas o sin numerar:
Un elemento sun numerar
Sub elemento
Sub sub elemento
Sub elemento
- Una cosa
- Otra cosa
Segundo elemento sin numerar
Sub elemento
Tercero
Sub elemento
Numerada:
Punto uno
Sub punto uno
Sub punto dos
Punto dos
Punto tres
Super linea horizontal
Una cita (blockquote)
Hermoso es mejor que feo.
Explícito es mejor que implícito.
Simple es mejor que complejo.
Complejo es mejor que complicado.
Plano es mejor que anidado.
Escaso es mejor que denso.
Cuenta la legibilidad.
Los casos especiales no son lo suficientemente especial como para romper las reglas.
Aunque sentido práctico supera pureza.
Los errores nunca debe pasar en silencio.
A menos que explícitamente silenciados.
Ante la ambigüedad, rechaza la tentación de adivinar.
Debería haber una - y preferiblemente sólo una - manera obvia de hacerlo.
Aunque esa manera puede no ser obvia al principio a menos que seas holandés.
Ahora es mejor que nunca.
Aunque nunca es a menudo mejor que la * justo * ahora.
Si la implementación es difícil de explicar, es una mala idea.
Si la implementación es fácil de explicar, puede ser una buena idea.
Namespaces son una gran idea de fanfarria - Vamos a hacer más de esos!
Zen de Python
Enlaces:
Google
Títulos:
Título 1
Título 2
Titulo 3
Código
Puedes escribir código que no se ejecutará:
def f(x):
Calcula el cuadrado de un número
return x**2
O
python
def f(x):
Calcula el cuadrado de un número
return x**2
Otro ejemplo en C
for (i=0; i<n; i++) {
printf("hello %d\n", i);
x += 4;
}
O
C
for (i=0; i<n; i++) {
printf("hello %d\n", i);
x += 4;
}
Bonitas ecuaciones en Latex
Gracias a MathJax, podemos icluir fómulas escritas en Latex como
$e^{i\pi} + 1 = 0$ o
$$e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i$$
Las ecuaciones en linea ven entre $:
$e^{i\pi} + 1 = 0$
Expresiones en su propia linea van entre $$:
$$e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i$$
Tablas
| Esta | es |
|------|------|
| una | tabla|
|$\theta$ |$x^2$ |
HTML
tambien podemos escribir en HTML
<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>
Imagenes y videos
Imagen:
o con HTML
<img src="./images/ucv.png" />
Video
<video controls src="./images/big_buck_bunny.mp4" />
Introducción a la programación en Python
Variables y tipos
Nombres de símbolos
Los nombres de las variables en Python pueden contener los caracteres a-z, A-Z, 0-9 y algunos caracteres especiales como _. Los nombres de variables normales deben comenzar con una letra.
Por convención, los nombres de las variables comienzan con letra minúscula, mientras que los nombres de las clases comienzan con una letra mayúscula.
Además, existen algunos palabras claves Python que no pueden ser usados como nombres de variables. Éstas son:
and, as, assert, break, class, continue, def, del, elif, else, except,
exec, finally, for, from, global, if, import, in, is, lambda, not, or,
pass, print, raise, return, try, while, with, yield
Nota: Atención con la palabra lambda, que podría fácilmente ser un nombre de variable natural en un programa científico. Sin embargo, como es una palabra clave, no puede ser usado como nombre de una variable.
Asignaciones
El operador para asignar valores en Python es el signo igual (=). Python es un lenguage de escritura dinámica, de modo que no necesitamos especificar el tipo de una variable cuando la creamos.
Al asignar un valor a una variable nueva se crea esa variable:
End of explanation
type(x)
Explanation: Aunque no se especifique explícitamente, cada variable sí tiene un tipo asociada a ella. El tipo es extraido del valor que le fue asignado.
End of explanation
x = 1
type(x)
Explanation: Si asignamos un nuevo valor a una variable, su tipo puede cambiar.
End of explanation
print(y)
Explanation: Si tratamos de usar una variable que no ha sido definida obtenemos un mensaje de error (NameError):
End of explanation
# enteros
x = 1
type(x)
# flotantes
x = 1.0
type(x)
# booleanos
b1 = True
b2 = False
type(b1)
# números complejos: note que se usa `j` para especificar la parte imaginaria
x = 1.0 - 1.0j
type(x)
print(x)
print(x.real, x.imag)
Explanation: Tipos Fundamentales
End of explanation
import types
# imprime todos los tipos definidos en el módulo `types`
print(dir(types))
x = 1.0
# verifica si la variable x es flotante
type(x) is float
# verifica si la variable x es un entero
type(x) is int
Explanation: Funciones de relacionadas con Tipos
El módulo types contiene definiciones de nombres de tipo que pueden ser usadas para testear si las variables son de un cierto tipo:
End of explanation
isinstance(x, float)
Explanation: Podemos también usar el método isinstance para testear tipos de variables:
End of explanation
x = 1.5
print(x, type(x))
x = int(x)
print(x, type(x))
z = complex(x)
print(z, type(z))
x = float(z)
Explanation: Conversión de Tipo
End of explanation
y = bool(z.real)
print(z.real, " -> ", y, type(y))
y = bool(z.imag)
print(z.imag, " -> ", y, type(y))
Explanation: Un número complejo no puede ser convertido a un número flotante o a un entero. Necesitamos usar z.real, o bien z.imag, para extraer la parte que deseamos del número complejo z:
End of explanation
1 + 2, 1 - 2, 1 * 2, 1 / 2
1.0 + 2.0, 1.0 - 2.0, 1.0 * 2.0, 1.0 / 2.0
# División entera de dos números flotantes
3.0 // 2.0
# Atención! El operador de potencia en Python no es ^, sino **
2 ** 2
Explanation: Operadores y comparaciones
La mayoría de los operadores y las comparaciones en Python funcionan como uno esperaría:
Operadores aritméticos +, -, *, /, // (división entera), '**' potencia
End of explanation
True and False
not False
True or False
Explanation: Los operadores booleanos se escriben como palabras: and, not, or.
End of explanation
2 > 1, 2 < 1
2 > 2, 2 < 2
2 >= 2, 2 <= 2
# igualdad
[1,2] == [1,2]
# ¿objetos identicos?
l1 = l2 = [1,2]
l1 is l2
Explanation: Operadores de comparación >, <, >= (mayor o igual), <= (menor o igual), == igualdad, es identico.
End of explanation
s = "Hola mundo"
type(s)
# longitud de la cadena: el número de caracteres que contiene
len(s)
# reemplaza una subcadena de una cadena por cadena
s2 = s.replace("mundo", "universo")
print(s2)
Explanation: Tipos compuestos: Cadenas, listas y diccionarios
Cadenas
Las cadenas son el tipo de variables que es usado para almacenar mensajes de texto.
End of explanation
s[0]
Explanation: Podemos aislar un carácter en una cadena usando []:
End of explanation
s[0:5]
Explanation: Atención usuarios de MATLAB: el indexado comienza en 0!
Podemos extraer una parte de una cadena usando la sintaxis [desde:hasta], que extrae caracteres entre los índices desde y hasta:
End of explanation
s[:5]
s[6:]
s[:]
Explanation: Si omitimos desde o bien hasta de [desde:hasta], por defecto se entiende que se refiere al comienzo y/o al fin de la cadena, respectivamente:
End of explanation
s[::1]
s[::2]
Explanation: Podemos también definir el tamaño del paso usando la sintaxis [desde:hasta:paso] (el valor por defecto de paso es 1, como ya vismo):
End of explanation
print("uno", "dos", "tres") # El comando print puede desplegar varias cadenas
print("uno", 1.0, False, -1j) # El comendo print convierte todos los argumentos a cadenas
print("uno" + "dos" + "tres") # cadenas "sumadas" con + son contatenadas sin espacio entre ellas
print("valor = %f" % 1.0) # podemos usar formateo de cadenas en el estilo del lenguaje C
# este formateo crea una cadena
s2 = "valor1 = %.2f. valor2 = %d" % (3.1415, 1.5)
print(s2)
# forma alternativa, más intuitiva para formatear una cadena
s3 = 'valor1 = {0}, valor2 = {1}'.format(3.1415, 1.5)
print(s3)
Explanation: Esta técnica es llamada slicing ("rebanado"). Puede leer más sobre la sintaxis aquí http://pyspanishdoc.sourceforge.net/lib/built-in-funcs.html y aquí (en inglés) http://docs.python.org/release/2.7.3/library/functions.html?highlight=slice#slice
Python tiene un rico conjunto de funciones para procesar texto. Ver por ejemplo http://docs.python.org/2/library/string.html (en inglés) para más información.
Ejemplos de formateo de cadenas
End of explanation
l = [1,2,3,4]
print(type(l))
print(l)
Explanation: Listas
Listas son muy similares a las cadenas, excepto que cada elemento puede ser de un tipo diferente.
La sintaxis para crear listas en Python es [..., ..., ...]:
End of explanation
print(l)
print(l[1:3])
print(l[::2])
Explanation: Podemos usar las mismas técnicas de "rebanado" que usamos en el caso de cadenas para manipular listas:
End of explanation
l[0]
Explanation: Atención usuarios de MATLAB: el indexado comienza en 0!
End of explanation
l = [1, 'a', 1.0, 1-1j]
print(l)
Explanation: Los elementos en una lista no requieren ser del mismo tipo:
End of explanation
lista_anidada = [1, [2, [3, [4, [5]]]]]
lista_anidada
Explanation: Las listas en Python pueden ser inhomogéneas y arbitrariamente anidadas:
End of explanation
desde = 10
hasta = 30
paso = 2
range(desde, hasta, paso)
# en Python 3 range genera un iterador, que puede ser convertido a una lista usando 'list(...)'. Esto no tiene efecto en Python 2
list(range(desde, hasta, paso))
list(range(-10, 10))
s
# convierte una cadena a una lista, por conversión de tipo:
s2 = list(s)
s2
# ordenando listas
s2.sort()
print(s2)
Explanation: Las listas juegan un rol muy importante en Python y son, por ejemplo, usadas en bucles y otras estructuras de control de flujo (discutidas más abajo). Existan muchas funciones convenientes para generar listas de varios tipos, por ejemplo la función range:
End of explanation
# crea una nueva lista vacía
l = []
# agrega un elemento usando `append`
l.append("A")
l.append("d")
l.append("d")
print(l)
Explanation: Agregando, insertando, modificando, y removiendo elementos de listas
End of explanation
l[1] = "p"
l[2] = "p"
print(l)
l[1:3] = ["d", "d"]
print(l)
Explanation: Podemos modificar listas asignando nuevos valores a los elementos de la lista. En lenguaje técnico se dice que la lista es mutable.
End of explanation
l.insert(0, "i")
l.insert(1, "n")
l.insert(2, "s")
l.insert(3, "e")
l.insert(4, "r")
l.insert(5, "t")
print(l)
Explanation: Insertar un elemento en una posición específica insert
End of explanation
l.remove("A")
print(l)
Explanation: Eliminar el primer elemento con un valor específico usando 'remove'
End of explanation
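Un pequeño ejemplo ilustrativo de del (que se menciona en la explicación siguiente) para eliminar un elemento por su posición:
numeros = [1, 2, 3, 4]
del numeros[1]   # elimina el elemento en la posición 1
print(numeros)   # [1, 3, 4]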
punto = (10, 20)
print(punto, type(punto))
punto = 10, 20
print(punto, type(punto))
Explanation: Eliminar un elemento en una posición específica usando del:
Puede introducir help(list) para más detalles, o leer la documentación en la red
Tuplas
Tuplas son similares a las listas, excepto que ellas no pueden ser modificadas una vez creadas, es decir, son inmutables.
En Python, las tuplas son creadas usando la sintaxis (..., ..., ...), o incluso ..., ...:
End of explanation
x, y = punto
print("x =", x)
print("y =", y)
Explanation: Podemos separar una tupla asignandola a una lista de variables separadas por coma:
End of explanation
punto[0] = 20
Explanation: Si intentamos asignar un nuevo valor a un elemento de una tupla obtenemos un error:
End of explanation
parametros = {"parametro1" : 1.0,
"parametro2" : 2.0,
"parametro3" : 3.0,}
print(type(parametros))
print(parametros)
print("parametro1 = " + str(parametros["parametro1"]))
print("parametro2 = " + str(parametros["parametro2"]))
print("parametro3 = " + str(parametros["parametro3"]))
parametros["parametro1"] = "A"
parametros["parametro2"] = "B"
# agrega una nueva entrada
parametros["parametro4"] = "D"
print("parametro1 = " + str(parametros["parametro1"]))
print("parametro2 = " + str(parametros["parametro2"]))
print("parametro3 = " + str(parametros["parametro3"]))
print("parametro4 = " + str(parametros["parametro4"]))
Explanation: Diccionarios
Los diccionarios son también como listas, excepto que cada elemento es un par clave-valor. La sintaxis de los diccionarios es {clave1 : valor1, ...}:
End of explanation
afirmacion1 = False
afirmacion2 = False
if afirmacion1:
print("afirmacion1 es verdadera")
elif afirmacion2:
print("afirmacion2 es verdadera")
else:
print("afirmacion1 y afirmacion2 son falsas")
Explanation: Control de flujo
Sentencias condicionales: if, elif, else
La sintaxis Python para la ejecución condicional de código usa las palabras clave if, elif (else if), else:
End of explanation
afirmacion1 = afirmacion2 = True
if afirmacion1:
if afirmacion2:
print("tanto afirmacion1 como afirmacion2 son verdaderas")
# Mala indentación!
if afirmacion1:
if afirmacion2:
print("tanto afirmacion1 como afirmacion2 son verdaderas") # esta línea está mal indentada
afirmacion1 = False
if afirmacion1:
print("afirmacion1 es verdadera")
print("aun estamos dentro del bloque if")
if afirmacion1:
print("afirmacion1 es verdadera")
print("ahora estamos fuera del bloque")
Explanation: Aquí encontramos por primera vez un aspecto peculiar e inusual del lenguaje Python: Los bloques del programa son definidos por su nivel de indentación (la cantidad de espacio antes de cada línea).
Compare con el código equivalente en C:
if (afirmacion1)
{
printf("afirmacion1 es verdadera\n");
}
else if (afirmacion2)
{
printf("afirmacion1 es verdadera\n");
}
else
{
printf("afirmacion1 y afirmacion2 son falsas\n");
}
En C los bloques son definidos por los paréntesis llaves { y }. El nivel de indentación (espacio en blanco antes del código) no importa (es completamente opcional).
En Python, la extensión de un bloque de código es definido por el nivel de indentación (usualmente un tab o cuatro espacios en blanco). Esto significa que debemos ser cuidados@s de indentar nuestro código correctamente, de lo contrario tendremos errores de sintaxis.
Ejemplos:
End of explanation
for x in [1,2,3]:
print(x)
Explanation: Ciclos
En Python, los ciclos (loops) pueden ser programados de varias maneras diferentes. La forma más común es usando un ciclo for, que se usa junto con objetos iterables, como por ejemplo las listas. La sintaxis básica es:
Ciclos for:
End of explanation
for x in range(4): # por defecto range comienza con 0
print(x)
Explanation: El ciclo for itera sobre los elementos de la lista suministrada y ejecuta el bloque suministrado una vez para cada elemento. Cualquier tipo de lista puede ser usada para un ciclo for. Por ejemplo:
End of explanation
for x in range(-3,3):
print(x)
for palabra in ["computación", "científica", "con", "Python"]:
print(palabra)
Explanation: Nota: range(4) no incluye el 4 !
End of explanation
for clave, valor in parametros.items():
print(clave + " = " + str(valor))
Explanation: Para iterar sobre pares clave-valor en un diccionario:
End of explanation
for idx, x in enumerate(range(-3,3)):
print(idx, x)
Explanation: Algunas veces es útil tener acceso a los índices de los valores mientras se itera sobre una lista. Podemos usar la función enumerate para esto:
End of explanation
l1 = [x**2 for x in range(0,5)]
print(l1)
Explanation: Listas: Creando listas usando ciclos for:
Una forma conveniente y compacta de inicializar listas:
End of explanation
i = 0
while i < 5:
print(i)
i = i + 1
print("listo")
Explanation: Ciclos while:
End of explanation
def func0():
print("test")
func0()
Explanation: Note que el comando print("listo") no es parte del cuerpo del ciclo while, debido a su indentación.
Funciones
En Python una función es definida usando la palabra clave def, seguida de un nombre para la función, los argumentos entre paréntesis (), y el símbolo de dos puntos :. El siguiente código, con un nivel adicional de indentación, es el cuerpo de la función.
End of explanation
def func1(s):
Imprime la cadena 's' y dice cuántos caracteres tiene
print(s + " tiene " + str(len(s)) + " caracteres")
help(func1)
func1("test")
Explanation: En forma opcional, pero muy recomendada, podemos definir un "docstring", que es una descripción del propósito y comportamiento de la función. El docstring debería ser incluido directamente después de la definición de la función, antes del código en el cuerpo de la función.
End of explanation
def cuadrado(x):
Calcula el cuadrado de x.
return x**2
cuadrado(4)
Explanation: Funciones que retornan un valor usan la palabra clave return:
End of explanation
def potencias(x):
Calcula algunas potencias de x.
return x**2, x**3, x**4
potencias(3)
x2, x3, x4 = potencias(3)
print(x3)
Explanation: Podemos retornar múltiples valores desde una función usando las tuplas (ver más arriba):
End of explanation
def mifunc(x, p=2, debug=False):
if debug:
print("evaluando mifunc para x = " + str(x) + " usando el exponente p = " + str(p))
return x**p
Explanation: Argumentos por defecto y argumentos de palabra clave
En la definición de una función, podemos asignar valores por defecto a los argumentos de la función:
End of explanation
mifunc(5)
mifunc(5, debug=True)
Explanation: Si no suministramos un valor para el argumento debug al llamar a la función mifunc se considera el valor definido por defecto:
End of explanation
mifunc(p=3, debug=True, x=7)
Explanation: Si listamos explícitamente el nombre de los argumentos al llamar una función, ellos no necesitan estar en el mismo orden usando en la definición de la función. Esto es llamado argumentos de palabra clave (keyword), y son a menudo muy útiles en funciones que requieren muchos argumentos opcionales.
End of explanation
f1 = lambda x: x**2
# es equivalente a
def f2(x):
return x**2
f1(2), f2(2)
Explanation: Funciones sin nombre (funciones lambda)
En Python podemos también crear funciones sin nombre, usando la palabra clave lambda:
End of explanation
# map es una función predefinida en Python
map(lambda x: x**2, range(-3,4))
# in Python 3 podemos usar `list(...)` para convertir la iteración a una lista explícita
list(map(lambda x: x**2, range(-3,4)))
Explanation: Esta técnica es útil, por ejemplo, cuando queremos pasar una función simple como argumento de otra función, como en este caso:
End of explanation
class Punto:
Clase simple para representar un punto en un sistema de coordenadas cartesiano.
def __init__(self, x, y):
Crea un nuevo punto en x, y.
self.x = x
self.y = y
def traslada(self, dx, dy):
Traslada el punto en dx y dy en las direcciones x e y respectivamente.
self.x += dx
self.y += dy
def __str__(self):
return("Punto en [%f, %f]" % (self.x, self.y))
Explanation: Clases
Las clases son una característica clave de la programación orientada al objeto. Una clase es una estructura para representar un objeto y las operaciones que pueden ser realizadas sobre el objeto.
En Python una clase puede contener atributos (variables) y métodos (funciones).
En Python una clase es definida casi como una función, pero usando la palabra clave class, y la definición de la clase usualmente contiene algunas definiciones de métodos (una función en una clase).
Cada método de una clase debería tener un argumento self como su primer argumento. Este objeto es una autoreferencia.
Algunos nombres de métodos de clases tienen un significado especial, por ejemplo:
__init__: El nombre del método que es invocado cuando el objeto es creado por primera vez.
__str__ : Un método que es invocado cuando se necesita una simple representación de cadena de la clase, como por ejemplo cuando se imprime.
Existen muchas más, ver http://docs.python.org/2/reference/datamodel.html#special-method-names
End of explanation
p1 = Punto(0, 0) # esto invoca el método __init__ en la clase Punto
print(p1) # esto invoca el método __str__
Explanation: Para crear una nueva instancia de una clase:
End of explanation
p2 = Punto(1, 1)
p1.traslada(0.25, 1.5)
print(p1)
print(p2)
Explanation: Para invocar un método en la instancia de clase p:
End of explanation
import math
Explanation: Note que llamar a métodos de clases puede modificar el estado de esa instancia de clase particular, pero no afecta otras instancias de la clase o alguna otra variable global.
Esto es una de las cosas buenas de un diseño orientado al objeto: código como las funciones y variables relacionadas son agrupadas en entidades separadas e independientes.
Módulos
La mayoría de la funcionalidad en Python es provista por módulos. La Librería Estándar de Python es una gran colección de módulos que proveen implementaciones multiplataforma de recursos tales como el acceso al sistema operativo, entrada/salida de archivos (file I/O), manejo de cadenas, comunicación en redes, y mucho más.
Para usar un módulo en un programa Python éste debe primero ser importado. Un módulo puede ser importado usando el comando import. Por ejemplo, para importar el módulo math, que contiene muchas funciones matemáticas estándar, podemos usar:
End of explanation
import math
x = math.cos(2 * math.pi)
print(x)
Explanation: Esto incluye el módulo completo y lo deja disponible para su uso en el programa. Por ejemplo, podemos escribir:
End of explanation
from math import *
x = cos(2 * pi)
print(x)
Explanation: Alternativamente, podemos elegir importar todos los símbolos (funciones y variables) en un módulo al espacio de nombres (namespace) actual (de modo que no necesitemos usar el prefijo "math." cada vez que usemos algo del módulo math):
End of explanation
from math import cos, pi
x = cos(2 * pi)
print(x)
Explanation: Esta forma de proceder puede ser muy conveniente, pero en programas largos que incluyen muchos módulos es a menudo una buena idea mantener los símbolos de cada módulo en sus propios espacios de nombres, usando import math. Esto elimina potenciales confusiones con eventuales colisiones de nombres.
Como una tercera alternativa, podemos importar sólo algunos símbolos seleccionados desde un módulo listando explícitamente aquellos símbolos que deseamos importar, en lugar de usar el carácter comodín *:
End of explanation
import math
dir(math)
Explanation: Mirando qué contiene un módulo, y su documentación
Luego que se ha cargado un módulo, podemos listar los símbolos que éste provee usando la función dir:
End of explanation
help(math.log)
log(10) # calcula el logaritmo de 10 en base e
log(10, 2) # calcula el logaritmo de 10 en base 2
Explanation: Usando la función help podemos obtener una descripción de cada función (casi... no todas las funciones tienen docstrings, como se les llama técnicamente. Sin embargo, la mayoría de las funciones están documentadas de esta forma).
End of explanation
%more mimodulo.py
Explanation: También podemos usar la función help directamente sobre los módulos:
help(math)
Algunos módulos muy útiles de la librería estándar de Python son os (interfaz con el sistema operativo), sys (parámetros y funciones específicas del sistema), math (funciones matemáticas), shutil (operaciones con archivos), subprocess, multiprocessing, threading.
Una lista completa de los módulos estándar para Python 2 y Python 3 está disponible (en inglés) en http://docs.python.org/2/library/ y http://docs.python.org/3/library/, respectivamente. Una versión en español está disponible en http://pyspanishdoc.sourceforge.net/lib/lib.html.
Creación de Módulos
Uno de los conceptos más importantes en programación es el de reusar código para evitar repeticiones.
La idea es escribir funciones y clases con un propósito y extensión bien definidos, y reusarlas en lugar de repetir código similar en diferentes partes del programa (programación modular). Usualmente el resultado es que se mejora ostensiblemente la facilidad de lectura y de mantención de un programa. En la práctica, esto significa que nuestro programa tendrá menos errores, y serán más fáciles de extender y corregir.
Python permite programación modular en diferentes niveles. Las funciones y las clases son ejemplos de herramientas para programación modular de bajo nivel. Los módulos Python son construcciones de programación modular de más alto nivel, donde podemos colectar variables relacionadas, funciones y clases. Un módulo Python es definido en un archivo Python (con extensión .py), y puede ser accesible a otros módulos Python y a programas usando el comando import.
Considere el siguiente ejemplo: el archivo mimodulo.py contiene una implementación simple de una variable, una función y una clase:
End of explanation
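El contenido de mimodulo.py no se muestra en este documento; un posible contenido (solo un esquema ilustrativo, consistente con el uso que se hace más abajo; el archivo real puede diferir) sería:
# mimodulo.py -- esquema ilustrativo, no necesariamente el archivo original
mi_variable = 0
def mi_function():
    # Ejemplo de una función del módulo
    print("test")
class MiClase:
    # Ejemplo de una clase del módulo
    def __init__(self):
        self.variable = 0
    def set_variable(self, nuevo_valor):
        self.variable = nuevo_valor
    def get_variable(self):
        return self.variable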
import mimodulo
Explanation: Podemos importar el módulo mimodulo a un programa Python usando import:
End of explanation
help(mimodulo)
mimodulo.mi_variable
mimodulo.mi_function()
mi_clase = mimodulo.MiClase()
mi_clase.set_variable(10)
mi_clase.get_variable()
Explanation: Use help(module) para obtener un resumen de lo que suministra el módulo:
End of explanation
raise Exception("descripción del error")
Explanation: Excepciones
En Python los errores son manejados con una construcción especial de lenguaje llamada "Exceptions" (excepciones). Cuando ocurre un error, una excepción puede ser hecha, que interrumpe el flujo normal del programa y retorna a algún otro lugar del código donde se definan los comandos try-except más cercanos.
Para generar una excepción podemos usar el comando raise, que toma un argumento que debe ser una instancia de la clase BaseException o una clase derivada de ella.
End of explanation
try:
print("test")
# genera un error: ya que la variable test no está definida
print(test)
except:
print("Encontré una excepción")
Explanation: Un uso típico de las excepciones es para abortar funciones cuando ocurre algún error, por ejemplo:
def mi_funcion(argumentos):
if not verify(argumentos):
raise Expection("Argumentos invalidos")
# el resto del código sigue aquí
Para capturar los errores que son generados por funciones y métodos de clases, o por el mismo intérprete Python, use los comandos try y except:
try:
# aquí va el código normal
except:
# el código para manejar el error va aquí
# Este código no se ejecuta a menos que
# el código de arriba genere un error
Por ejemplo:
End of explanation
try:
print("test")
# genera un error: ya que la variable test no está definida
print(test)
except Exception as e:
print("Encontré una excepción:" + str(e))
Explanation: Para obtener información sobre un error, podemos accesar la instancia de clase Exception que describe la excepción usando por ejemplo:
except Exception as e:
End of explanation
# Esta celda da el estilo al notebook
from IPython.core.display import HTML
css_file = './css/aeropython.css'
HTML(open(css_file, "r").read())
Explanation: Lectura adicional
http://www.python.org - The official web page of the Python programming language.
http://www.python.org/dev/peps/pep-0008 - Guía de estilo para la programación en Python. Altamente recomendada (en inglés).
http://www.greenteapress.com/thinkpython/ - Un libro gratuito sobre Python.
Python Essential Reference - Un buen libro de referencia sobre programación en Python.
End of explanation |
15,579 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text Using Markdown
If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. Hit shift + enter or shift + return to show the formatted text again. This is called "running" the cell, and you can also do it using the run button in the toolbar.
Code cells
One great advantage of IPython notebooks is that you can show your Python code alongside the results, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. The following cell is a code cell.
Step1: Nicely formatted results
IPython notebooks allow you to display nicely formatted results, such as plots and tables, directly in
the notebook. You'll learn how to use the following libraries later on in this course, but for now here's a
preview of what IPython notebook can do.
Step2: Creating cells
To create a new code cell, click "Insert > Insert Cell [Above or Below]". A code cell will automatically be created.
To create a new markdown cell, first follow the process above to create a code cell, then change the type from "Code" to "Markdown" using the dropdown next to the run, stop, and restart buttons.
Some Markdown data
Re-running cells
If you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, "In [5]". The third cell should output the message "Intro to Data Analysis is awesome!"
Step3: Once you've run all three cells, try modifying the first one to set class_name to your name, rather than "Intro to Data Analysis", so you can print that you are awesome. Then rerun the first and third cells without rerunning the second.
You should have seen that the third cell still printed "Intro to Data Analysis is awesome!" That's because you didn't rerun the second cell, so even though the class_name variable was updated, the message variable was not. Now try rerunning the second cell, and then the third.
You should have seen the output change to "your name is awesome!" Often, after changing a cell, you'll want to rerun all the cells below it. You can do that quickly by clicking "Cell > Run All Below".
Step4: Fixing Data Types. | Python Code:
# Hit shift + enter or use the run button to run this cell and see the results
print 'hello world11_0_11'
print 'hello world'
# The last line of every code cell will be displayed by default,
# even if you don't print it. Run this cell to see how this works.
print 2 + 2 # The result of this line will not be displayed
print 3 + 3 # The result of this line will be displayed, because it is the last line of the cell
Explanation: Text Using Markdown
If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. Hit shift + enter or shift + return to show the formatted text again. This is called "running" the cell, and you can also do it using the run button in the toolbar.
Code cells
One great advantage of IPython notebooks is that you can show your Python code alongside the results, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. The following cell is a code cell.
End of explanation
# If you run this cell, you should see the values displayed as a table.
# Pandas is a software library for data manipulation and analysis. You'll learn to use it later in this course.
import pandas as pd
df = pd.DataFrame({'a': [2, 4, 6, 8], 'b': [1, 3, 5, 7]})
df
# If you run this cell, you should see a scatter plot of the function y = x^2
%pylab inline
import matplotlib.pyplot as plt
xs = range(-30, 31)
ys = [x ** 2 for x in xs]
plt.scatter(xs, ys)
Explanation: Nicely formatted results
IPython notebooks allow you to display nicely formatted results, such as plots and tables, directly in
the notebook. You'll learn how to use the following libraries later on in this course, but for now here's a
preview of what IPython notebook can do.
End of explanation
class_name = "BRUCE Woodley Intro to Data Analysis"
message = class_name + " is awesome!"
message
Explanation: Creating cells
To create a new code cell, click "Insert > Insert Cell [Above or Below]". A code cell will automatically be created.
To create a new markdown cell, first follow the process above to create a code cell, then change the type from "Code" to "Markdown" using the dropdown next to the run, stop, and restart buttons.
Some Markdown data
Re-running cells
If you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, "In [5]". The third cell should output the message "Intro to Data Analysis is awesome!"
End of explanation
import unicodecsv
with open("enrollments.csv","rb") as filein :
line = unicodecsv.DictReader(filein)
print("type(line) \t",type(line))
enrollments = list(line)
print enrollments[0]
import unicodecsv
with open("daily_engagement.csv","rb") as filein :
line = unicodecsv.DictReader(filein)
#print("type(line) \t",type(line))
daily_engagement = list(line)
print daily_engagement[0]
import unicodecsv
with open("project_submissions.csv","rb") as filein :
line = unicodecsv.DictReader(filein)
project_submissions_fieldnames = line.fieldnames
#print("type(line) \t",type(line))
print("project_submissions_fieldnames = ",str(project_submissions_fieldnames))
project_submissions = list(line)
print project_submissions[0]
Explanation: Once you've run all three cells, try modifying the first one to set class_name to your name, rather than "Intro to Data Analysis", so you can print that you are awesome. Then rerun the first and third cells without rerunning the second.
You should have seen that the third cell still printed "Intro to Data Analysis is awesome!" That's because you didn't rerun the second cell, so even though the class_name variable was updated, the message variable was not. Now try rerunning the second cell, and then the third.
You should have seen the output change to "your name is awesome!" Often, after changing a cell, you'll want to rerun all the cells below it. You can do that quickly by clicking "Cell > Run All Below".
End of explanation
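The three CSV-reading cells above repeat the same pattern; a small helper function (a sketch that assumes the same three files) removes the duplication:
import unicodecsv
def read_csv(filename):
    # Read a CSV file into a list of dictionaries, one per row
    with open(filename, 'rb') as filein:
        return list(unicodecsv.DictReader(filein))
enrollments = read_csv('enrollments.csv')
daily_engagement = read_csv('daily_engagement.csv')
project_submissions = read_csv('project_submissions.csv')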
# Fixing Data Types.
# Hit shift + enter or use the run button to run this cell and see the results
from datetime import datetime as dt
# Takes a date as a string, and returns a Python datetime object.
# If there is no date given, returns None
def parse_date(date):
if date == '':
return None
else:
return dt.strptime(date, '%Y-%m-%d')
# Takes a string which is either an empty string or represents an integer,
# and returns an int or None.
def parse_maybe_int(i):
if i == '':
return None
else:
return int(i)
print(" type(enrollment) " , type(enrollment))
# Clean up the data types in the enrollments table
for enrollment in enrollments:
enrollment['cancel_date'] = parse_date(enrollment['cancel_date'])
enrollment['days_to_cancel'] = parse_maybe_int(enrollment['days_to_cancel'])
enrollment['is_canceled'] = enrollment['is_canceled'] == 'True'
enrollment['is_udacity'] = enrollment['is_udacity'] == 'True'
enrollment['join_date'] = parse_date(enrollment['join_date'])
enrollments[0]
# enrollments
# daily_engagement
# project_submission
# these are all a "List of Dictionaries"
import sys
import os
import string
import time
#print(type(enrollments),len(enrollments) )
enrollments_set = set()
for line in enrollments :
enrollments_set.add(line['account_key'] )
print("enrollments",type(enrollments), " row total: ",len(enrollments), " total students: ", len(enrollments_set) )
#print(type(daily_engagement), len(daily_engagement) )
daily_engagement_set = set()
for line in daily_engagement :
daily_engagement_set.add(line['acct'] )
print("daily_engagement", type(daily_engagement)," row total: ",len(daily_engagement), " total students: ", len(daily_engagement_set) )
#print(type(project_submissions), len(project_submissions) )
project_submissions_set = set()
for line in project_submissions :
project_submissions_set.add(line['account_key'] )
print("project_submissions", type(project_submissions)," row total: ",len(project_submissions), " total students: ", len(project_submissions_set) )
print(" ")
print('REM: these are all a "List of Dictionaries"...!')
Explanation: Fixing Data Types.
End of explanation |
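The same cleanup can be applied to the other two tables. Below is a sketch for the daily_engagement table -- the field names ('utc_date', 'num_courses_visited', 'total_minutes_visited', 'lessons_completed', 'projects_completed') are assumptions based on the course CSVs, so adjust them if your headers differ.
for engagement_record in daily_engagement:
    # parse_date and the numeric conversions mirror the enrollments cleanup above
    engagement_record['utc_date'] = parse_date(engagement_record['utc_date'])
    engagement_record['num_courses_visited'] = int(float(engagement_record['num_courses_visited']))
    engagement_record['total_minutes_visited'] = float(engagement_record['total_minutes_visited'])
    engagement_record['lessons_completed'] = int(float(engagement_record['lessons_completed']))
    engagement_record['projects_completed'] = int(float(engagement_record['projects_completed']))
daily_engagement[0]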
15,580 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LV3 Recovery Test
This notebook will encompass all calculations regarding the LV3 Recovery/eNSR Drop Test.
Resources
[http
Step1: input parameters
flight plan
Step2: physical parameters
Step3: Calculations
Convert wind directions into aircraft coordinates
Step4: Step a - static line extending
Assumptions
Step5: Step b - deployment timer running
deployment timer is a 2 sec timer
Assumptions
neglecting drag force
Variables
P = speed of plane
vb = velocity after 2m static line has extended (aka instant static line 'snaps')
g = acceleration due to gravity
Lvb = vertical length gained from step b
Lhb = horizontal length gained from step b
tb = time for step b to be complete
calculations
Step6: Step c - eNSR ring separation
Assumptions
Step7: Step d - drogue line is being pulled out
Assumptions
no drag force considered for the payload for horizon. and vert. decent until drogue is fully unfurled
just accounting for the 50' shock chord, therefore not including the lines coming directly from the 'chute
the drogue pulls out at an angle due to a small amount of drag on the drogue slowing it down horizontally
Variables
P = speed of plane
vd = velocity after 50' shock chord is drawn out
g = acceleration due to gravity
Lvd = vertical distance gained from step d
Lhd = horizontal distance gained from step d
td = time for step d to be complete
dl = droge line length
the 50' chord as the hypotenuse
$50 = \sqrt{(x^2) + (y^2)}$
vertical length gained from step d
$L_{vd} = v_c t_d + 0.5\,g\,t_d^2$
horizontal length gained from step d
$Lhd = P*td$
calculate td by replacing x and y in the above equation
$dl^2 = (P\,t_d)^2 + (v_c t_d + 0.5\,g\,t_d^2)^2$
calculations
Step8: Step e - drogue is fully deployed
Assumptions
drag force in full effect
skipping impulse and time to steady state
Variables
cd = coeff. of drag [unitless]
D = drag force = mass of payload*g [N]
rho = density of air [kg/m^3]
A = area of parachute [m^2]
v = approx. steady state velocity of drogue [m/s]
m = mass of payload [kg]
Rd = drogue radius [m]
w = wind speed [m/s]
governing equations
Just start with Newton's 2nd law. The $-1/2\rho$ stuff is the drag force. It's negative because it opposes the motion. The biz with the $|\dot{\vec r}|\dot{\vec r}$ is to get a vector that has the magnitude of $r^2$ and the direction of $\vec r$.
$
m\ddot{\vec r} = -\frac{1}{2}\rho A_d C_d |\dot{\vec r}|\,\dot{\vec r} + m\vec g
$
Break it out into components. (This is where we see that it's an ugly coupled diffeq.)
$
m\ddot r_x = -\frac{1}{2}\rho A_d C_d \sqrt{\dot r_x^2+\dot r_y^2}\;\dot r_x \\
m\ddot r_y = -\frac{1}{2}\rho A_d C_d \sqrt{\dot r_x^2+\dot r_y^2}\;\dot r_y - m g
$
numerical solution
Step9: plots
Step10: results
Step11: old calculations
Step12: Step f - main 'chute fully deployed
If you want to justify to yourself that the main chute hits terminal velocity [almost] instantly, you can mess with the inputs for the numerical solution in step e.
Assumptions
drag force in full effect
skipping impulse and time to steady state
main 'chute is a full 18' in dia.
after payload has gone through the drogue decent, the horizontal velocity is the same as the wind speed
Variables
cd = coeff. of drag [unitless]
D = drag force = weight of payload*g [N]
rho = density of air [kg/m^3]
A = area of parachute [m^2]
v_main = approx. steady state velocity of main 'chute [m/s]
m = mass of payload [kg]
w = wind speed [m/s]
calculations
Step13: results
Step14: Results
totals
Step15: trajectories relative to drop point (aircraft coordinates)
Step16: trajectories relative to drop point (East-North coordinates) | Python Code:
import math
import sympy
from sympy import Symbol, solve
from scipy.integrate import odeint
from types import SimpleNamespace
import numpy as np
import matplotlib.pyplot as plt
sympy.init_printing()
%matplotlib inline
Explanation: LV3 Recovery Test
This notebook will encompass all calculations regarding the LV3 Recovery/eNSR Drop Test.
Resources
[http://www.usma.edu/math/Military%20Math%20Modeling/C5.pdf]
[http://www.the-rocketman.com/drogue-decent-rate.html]
[http://wind.willyweather.com/or/crook-county/prineville-reservoir.html]
* wind = NW 5.8mph
setup
imports
End of explanation
# P = speed of plane [m/s]
P = 38
# wind speed [m/s]
w = 2.59283
# wind bearing, measured from east to north [degrees]
theta_w_deg= 360-45
# aircraft bearing, measured from east to north [degrees]
theta_a_deg= 90+45
# safety distance above the ground that the main chute should deploy at [m]
mainSafetyDist= 304.8 # 1000 ft = 304.8 m
Explanation: input parameters
flight plan
End of explanation
# worst case wind speed [m/s]
# w_worst= 12.86
# mass of payload [kg]
m = 28
# g = acceleration due to gravity [kg*m/s^2]
g = 9.81
# density of air [kg/m^3]
rho = 1.2
# terminal velocity of drogue [m/s]
vt_d= 18.5 # according to rocketman
# radius of drogue [m]
R_d= 0.762
# static line length [m]
sl = 2
# drogue line length [m]
dl= 50
# terminal velocity of main chute [m/s]
vt_m= 4.83108 # according to rocketman
# radius of main chute [m]
R_m= 5.4864
Explanation: physical parameters
End of explanation
# wind speed in the direction of flight
theta_a= theta_a_deg*2*np.pi/360
theta_w= theta_w_deg*2*np.pi/360
wx= w*np.cos(theta_w-theta_a)
# cross-wind speed from left to right (pilot's perspective)
wz= -w*np.sin(theta_w-theta_a)
Explanation: Calculations
Convert wind directions into aircraft coordinates
End of explanation
va = 0
## vertical distance gained
## since the static line is 2m, assuming it falls only in the vertical direction:
Lva = sl
# horizontal distance gained
## speed of plane times time to drop 2m static line
## 1/2*g*ta**2 = sl
ta = math.sqrt(2*sl/g)
Lha = P*ta
print('step a (from drop to static line disconnect):')
print('time to free fall fall 2 m = ', round(ta,4), ' s')
print('vertical length gained = ', round(Lva,4), ' m')
print('horizontal length gained = ', round(Lha,4), ' m')
Explanation: Step a - static line extending
Assumptions:
no drag
static line is approximately 2m long
plane is flying at approximately 85 mph = 38 m/s
Variables
va = vertical velocity at instant the system is pushed from the plane [m/s]
sl = static line length [m]
Lva = vertical length gained from step a [m]
Lha = horizontal length gained from step a [m]
ta = time for step a to be complete [s]
calculations
End of explanation
# vertical velocity at end of static line, beginning of timer
vb = va + (g*ta)
# since the deployment is controlled by a 2 sec timer:
tb = 2
# vertical length gained
Lvb = (vb*tb) + (0.5*g*(tb**2))
# horizontal length gained
Lhb = P*tb
print('step b (from static line disconnect to timer runout):')
print('vertical velocity at beginning of step b = ', round(vb,4), ' m/s')
print('vertical length gained = ', round(Lvb,4), ' m')
print('horizontal length gained = ', round(Lhb,4), ' m')
Explanation: Step b - deployment timer running
deployment timer is a 2 sec timer
Assumptions
neglecting drag force
Variables
P = speed of plane
vb = velocity after 2m static line has extended (aka instant static line 'snaps')
g = acceleration due to gravity
Lvb = vertical length gained from step b
Lhb = horizontal length gained from step b
tb = time for step b to be complete
calculations
End of explanation
# velocity at time of ring separation, end of timer
vc = vb + g*tb
Lhc = 0
Lvc = 0
print('vertical velocity at ring separation = ', round(vc,4), ' m/s')
Explanation: Step c - eNSR ring separation
Assumptions:
This step only lasts for an instant; i.e. has no duration
drogue timer begins as ring separation occurs
Variables
P = speed of plane
vc = velocity at time of ring separation
g = acceleration due to gravity
Lvc = vertical length gained from step c
Lhc = horizontal length gained from step c
tc = time for step c to be complete
calculations
End of explanation
Ps, vds, gs, Lvds, Lhds, tds, vcs = sympy.symbols('Ps vds gs Lvds Lhds tds vcs')
Dparms= {Ps: P, gs: g, vcs: vc}
tdEqn= (Ps*tds)**2 + (vcs*tds + 0.5*gs*tds**2)**2 - dl**2
tdSolns= sympy.solve(tdEqn.subs(Dparms))
print('possible solutions:', tdSolns)
for soln in [complex(x) for x in tdSolns]:
if (soln.imag != 0) or (soln.real <= 0):
pass
else:
print(soln, 'seems fine')
td= soln.real
# now go back and calculate x and y
Lhd = P*td
Lvd = vc*td + 0.5*g*(td**2)
# vertical velocity gained after the 50' drop
vd = vc + g*td
print()
print('time to pull out drogue:', round(td,4), 's')
print('horizontal distance gained = ', round(Lhd,4), 'm')
print('vertical distance gained = ', round(Lvd,4), 'm')
print('vertical velocity at instant line becomes taut = ', round(vd,4), 'm/s')
print('horizontal velocity: ', P, 'm/s')
Explanation: Step d - drogue line is being pulled out
Assumptions
no drag force considered for the payload for horizon. and vert. decent until drogue is fully unfurled
just accounting for the 50' shock chord, therefore not including the lines coming directly from the 'chute
the drogue pulls out at an angle due to a small amount of drag on the drogue slowing it down horizontally
Variables
P = speed of plane
vd = velocity after 50' shock chord is drawn out
g = acceleration due to gravity
Lvd = vertical distance gained from step d
Lhd = horizontal distance gained from step d
td = time for step d to be complete
dl = droge line length
the 50' chord as the hypotenuse
$50 = \sqrt{(x^2) + (y^2)}$
vertical length gained from step d
$L_{vd} = v_c t_d + 0.5\,g\,t_d^2$
horizontal length gained from step d
$Lhd = P*td$
calculate td by replacing x and y in the above equation
$dl^2 = (P\,t_d)^2 + (v_c t_d + 0.5\,g\,t_d^2)^2$
calculations
End of explanation
# make a function that translates our equtions into odeint() format
def dragFunc(y, t0, p):
# map the positions and velocities to convenient names:
r_x= y[0]
r_y= y[1]
r_z= y[2]
rdot_x= y[3]
rdot_y= y[4]
rdot_z= y[5]
# calculate the accelerations:
rddot_x= 1/p.m*(-1/2*p.rho*p.A*p.Cd*np.sqrt((rdot_x-p.wx)**2+rdot_y**2+(rdot_z-p.wz)**2)*(rdot_x-p.wx))
rddot_y= 1/p.m*(-1/2*p.rho*p.A*p.Cd*np.sqrt((rdot_x-p.wx)**2+rdot_y**2+(rdot_z-p.wz)**2)*rdot_y -p.m*p.g)
rddot_z= 1/p.m*(-1/2*p.rho*p.A*p.Cd*np.sqrt((rdot_x-p.wx)**2+rdot_y**2+(rdot_z-p.wz)**2)*(rdot_z-p.wz))
# return the velocities and accelerations:
return([rdot_x, rdot_y, rdot_z, rddot_x, rddot_y, rddot_z])
D_d = m*g # drag force on drogue at terminal velocity [N]
A_d = math.pi*(R_d**2) # frontal area of drogue [m^2]
cd_d = (2*D_d)/(rho*A_d*vt_d**2) # drag coeff. of drogue []
# bundle up the parameters needed by dragFunc():
pd = SimpleNamespace()
pd.rho = rho
pd.A = A_d
pd.Cd = cd_d
pd.m = m
pd.g = g
pd.wx = wx
pd.wz = wz
# set the boundary conditions for the solver:
y0 = [0,0,0, P, -vd, 0]
t_step = 0.001
t_start = 0
t_final = 10
times_d = np.linspace(t_start, t_final, int((t_final-t_start)/t_step))
# run the simulation:
soln_d = odeint(func= dragFunc, y0= y0, t= times_d, args= (pd,))
# find the time when it's okay to deploy the main chute:
# for i in range(0, len(soln)):
# if (soln_d_xddot[i] < 0.01*soln_d_xddot[0]) and (soln_d_yddot[i] < 0.01*soln_d_yddot[0]):
# print('At time', round(times_d[i],4), 'x and y acceleration are below 1% their original values.')
# tcr_d= times_d[i]
# break
# chop of the stuff after the critical time:
#soln= soln[range(0,i)]
#times= times[range(0,i)]
Explanation: Step e - drogue is fully deployed
Assumptions
drag force in full effect
skipping impulse and time to steady state
Variables
cd = coeff. of drag [unitless]
D = drag force = mass of payload*g [N]
rho = density of air [kg/m^3]
A = area of parachute [m^2]
v = approx. steady state velocity of drogue [m/s]
m = mass of payload [kg]
Rd = drogue radius [m]
w = wind speed [m/s]
governing equations
Just start with Newton's 2nd law. The $-1/2\rho$ stuff is the drag force. It's negative because it opposes the motion. The biz with the $|\dot{\vec r}|\dot{\vec r}$ is to get a vector that has the magnitude of $r^2$ and the direction of $\vec r$.
$
m\ddot{\vec r} = -\frac{1}{2}\rho A_d C_d |\dot{\vec r}|\,\dot{\vec r} + m\vec g
$
Break it out into components. (This is where we see that it's an ugly coupled diffeq.)
$
m\ddot r_x = -\frac{1}{2}\rho A_d C_d \sqrt{\dot r_x^2+\dot r_y^2}\;\dot r_x \\
m\ddot r_y = -\frac{1}{2}\rho A_d C_d \sqrt{\dot r_x^2+\dot r_y^2}\;\dot r_y - m g
$
numerical solution
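As a quick sanity check (added note, not in the original notebook): setting drag equal to weight gives an analytic terminal velocity of sqrt(2*m*g/(rho*A_d*cd_d)), which should reproduce vt_d and the late-time vertical speed of the odeint solution.
v_term_check = np.sqrt(2*m*g/(rho*A_d*cd_d))   # drag balances weight at terminal velocity
print('analytic drogue terminal velocity:', round(v_term_check, 2), 'm/s')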
End of explanation
# break out the solutions into convenient names:
soln_d_x= [s[0] for s in soln_d]
soln_d_y= [s[1] for s in soln_d]
soln_d_z= [s[2] for s in soln_d]
soln_d_xdot= [s[3] for s in soln_d]
soln_d_ydot= [s[4] for s in soln_d]
soln_d_zdot= [s[5] for s in soln_d]
soln_d_xddot= np.diff(soln_d_xdot) # x acceleration
soln_d_yddot= np.diff(soln_d_ydot) # y acceleration
soln_d_zddot= np.diff(soln_d_zdot) # z acceleration
# plot da shiz:
plt.figure(1)
plt.plot(soln_d_x, soln_d_y)
plt.axis('equal')
plt.xlabel('horizontal range (m)')
plt.ylabel('vertical range (m)')
plt.figure(2)
plt.plot(times_d, soln_d_xdot)
plt.xlabel('time (s)')
plt.ylabel('horizontal velocity (m/s)')
plt.figure(3)
plt.plot(times_d, soln_d_ydot)
plt.xlabel('time (s)')
plt.ylabel('vertical velocity (m/s)')
plt.figure(4)
plt.plot(times_d[range(0, len(soln_d_xddot))], soln_d_xddot)
plt.xlabel('time (s)')
plt.ylabel('horizontal acceleration (m/s^2)')
plt.figure(5)
plt.plot(times_d[range(0, len(soln_d_yddot))], soln_d_yddot)
plt.xlabel('time (s)')
plt.ylabel('vertical acceleration (m/s^2)')
Explanation: plots
End of explanation
Lhe = soln_d_x[-1]
Lve = -soln_d_y[-1]
Lle = soln_d_z[-1]
te = times_d[-1]
print('horizontal distance travelled in step e:', Lhe)
print('vertical distance travelled in step e:', Lve)
print('lateral distance travelled in step e:', Lle)
print('time taken in step e:', te)
Explanation: results
End of explanation
# x-direction calculations
##########################
# from usma:
# mx" + cd*x' = cd*w
####### need python help here #######
# ugh, I have to go learn how to use scipy... 1 sec -- Joe
# mx" + cd*x
## homogeneous equation mx" + rho*x' = 0
## characteristic equation for the homogeneous differential equation is:
## mr^2 + rho*r = 0
## where the roots are:
## r1 = 0, r2 = -(rho/m)
## complementary solution:
## xc = C1*e^0 + C2* e^(-(rho*t/m))
## non-homogeneous equation mx" + rho*x' = rho*w
## complete solution x = C1 + C2*e^(-(rho*t/m)) + wt
## solving for C1 and C2 using results from step d as initial conditions
## except time = 0 since we are making calculations just for this step
## i.e. x(0) = x_curr_tot and vx(0) = P
## therefore C1 = and C2 =
# x_0 = Lha + Lhb + Lhc + Lhd
# t = 0
# vx_0 = P
# C1 = Symbol('C1')
# C2 = Symbol('C2')
# C_1 = solve(C1 + C2*math.exp(-(rho*t/m)) + w*t - x_0, C1)
# C_1
# print(C_1)
# C_2 = solve(C2*(-(rho/m)) + w - vx_0, C2)
# print(C_2)
# ## NEEEED HELLLPPP should be using piecewise to solve this
# ## copying C_1 output from just above with the C_2 value
# calc_C1 = 147.560492558936 + 586.6
# print(calc_C1)
#
# ## therefore the complete solution is:
# ## x = 734.1605 - 586.6*exp(-(rho*t/m)) + w*t
#
# ## if the drogue falls for 3 seconds, then
# t = 3
# Lhe = 734.1605 - 586.6*math.exp(-(rho*t/m)) + w*t
#
# print('horizontal distance gained = ', round(Lhe,4), 'm')
# print(' ')
#
# # y-direction calculations
# ##########################
#
# ## from usma
# ## characteristic equation:
# ## m*r^2 + rho*r = 0
# ## where the roots are r = 0 and r = (-b/m)
#
# ## complete solution:
# ## y = C1 + C2*exp(-(rho*t)/m)
# ## solving for C1 and C2 using results from step d as initial conditions
# ## except time = 0 since we are making calculations just for this step
#
# y_0 = Lva + Lvb + Lvc + Lvd
# print('y_0 = ', y_0)
# vy_0 = vd
# print('vy_0 = ',vy_0)
# t_0 = 0
# C1 = Symbol('C1')
# C2 = Symbol('C2')
# ## NEEEED HELLLPPP should be using piecewise to solve this
# # C1 equation
# C_1 = solve(C1 + C2*math.exp(-(rho*t_0/m)) - y_0, C1)
# print('C1 equation: ', C_1)
# # C2 equation/value
# C_2 = solve(C2*(-(rho/m)*math.exp(-(rho*t_0/m))) - vy_0, C2)
# print('C2 = ', C_2)
# ## copying C_1 output from just above with the C_2 value
# calc_C1 = 793.253769802079 + 62.2619406518579 #62.2619406518579 + (0.879350749407306*793.253769802079)
# print('C1 = ', calc_C1)
#
# # NEED HELP: need to make C_2 a number (int, float)
# ## if the drogue falls for 3 seconds, then
# t = 3
# Lve = calc_C1 + (-793.253769802079*math.exp(-(rho/m)*t))
#
# print('vertical distance gained = ', Lve, 'm')
#
# ## Maayybbbeee
#
# vert_length = v*t
# print(vert_length)
Explanation: old calculations
End of explanation
Lvf= mainSafetyDist
# step f time = vertical distance / main terminal velocity
tf= Lvf/vt_m
# horizontal distance= wind speed * step f time
Lhf= wx*tf
Llf= wz*tf
Explanation: Step f - main 'chute fully deployed
If you want to justify to yourself that the main chute hits terminal velocity [almost] instantly, you can mess with the inputs for the numerical solution in step e.
Assumptions
drag force in full effect
skipping impulse and time to steady state
main 'chute is a full 18' in dia.
after payload has gone through the drogue decent, the horizontal velocity is the same as the wind speed
Variables
cd = coeff. of drag [unitless]
D = drag force = weight of payload*g [N]
rho = density of air [kg/m^3]
A = area of parachute [m^2]
v_main = approx. steady state velocity of main 'chute [m/s]
m = mass of payload [kg]
w = wind speed [m/s]
calculations
End of explanation
print('horizontal distance travelled in step f:', Lhf, 'm')
print('vertical distance travelled in step f:', Lvf, 'm')
print('time taken in step f:', tf, 's')
Explanation: results
End of explanation
# TOTAL HORIZONTAL DISTANCE TRAVELED
X_TOT = Lha + Lhb + Lhc + Lhd + Lhe + Lhf
X_TOT_ft = X_TOT*3.28084
print('TOTAL HORIZONTAL DISTANCE TRAVELED = ', round(X_TOT,2), 'm ', ' = ', round(X_TOT_ft,2), 'ft')
# TOTAL VERTICAL DISTANCE DESCENDED
Y_TOT = Lva + Lvb + Lvc + Lvd + Lve + Lvf
Y_TOT_ft = Y_TOT*3.28084
print('TOTAL VERTICAL DISTANCE DESCENDED = ', round(Y_TOT,2), 'm ', ' = ', round(Y_TOT_ft,2), 'ft')
# TOTAL TIME FOR DESCENT
T_TOT = ta + tb + td + te + tf
# in minutes
t_tot_min = T_TOT/60
print('TOTAL TIME FOR DESCENT', round(T_TOT,2), 's = ', round(t_tot_min,2), 'min')
Explanation: Results
totals
End of explanation
delta_xs= np.array([0, Lha, Lhb, Lhc, Lhd, Lhe, Lhf])
delta_ys= -np.array([0, Lva, Lvb, Lvc, Lvd, Lve, Lvf])
delta_zs= np.array([0, 0, 0, 0, 0, Lle, Llf])
xs= np.cumsum(delta_xs)
ys= np.cumsum(delta_ys)
zs= np.cumsum(delta_zs)
plt.close('all')
plt.figure(1)
plt.plot(xs,ys)
_= plt.axis('equal')
plt.grid()
plt.title('down-range trajectory')
plt.xlabel('down-range distance from drop (m)')
plt.ylabel('altitude relative to drop (m)')
plt.figure(2)
plt.plot(zs, ys)
_= plt.axis('equal')
plt.grid()
plt.title('lateral trajectory')
plt.xlabel('lateral (left to right) distance from drop (m)')
plt.ylabel('altitude relative to drop (m)')
print('xs:', xs)
print('ys:', ys)
print('zs:', zs)
print('note that Y is up and Z is to the right of the aircraft... because I don\'t want to change my code.')
Explanation: trajectories relative to drop point (aircraft coordinates)
End of explanation
Es= xs*np.cos(theta_a) +zs*np.sin(theta_a)
Ns= xs*np.sin(theta_a) -zs*np.cos(theta_a)
plt.figure(2)
plt.plot(Es,ys)
_= plt.axis('equal')
plt.grid()
plt.title('east trajectory')
plt.xlabel('eastern distance from drop (m)')
plt.ylabel('altitude relative to drop (m)')
plt.figure(3)
plt.plot(Ns, ys)
_= plt.axis('equal')
plt.grid()
plt.title('north trajectory')
plt.xlabel('northern distance from drop (m)')
plt.ylabel('altitude relative to drop (m)')
print('Es:', Es)
print('ys:', ys)
print('Ns:', Ns)
Explanation: trajectories relative to drop point (East-North coordinates)
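A top-down ground track is also easy to add from the same arrays (added example):
plt.figure(4)
plt.plot(Es, Ns)
_= plt.axis('equal')
plt.grid()
plt.title('ground track (top-down view)')
plt.xlabel('eastern distance from drop (m)')
plt.ylabel('northern distance from drop (m)')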
End of explanation |
15,581 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementation of a Radix-2 Fast Fourier Transform
Import standard modules
Step3: This assignment is to implement a python-based Fast Fourier Transform (FFT). Building on $\S$ 2.8 ➞ we will implement a 1-D radix-2 Cooley-Tukey-based FFT using both decimation in time (DIT) and decimation in frequency (DIF) for an $N = 2^n$ input function.
From $\S$ 2.8.2 ➞ the discrete Fourier transform (DFT) is defined as
Step5: In $\S$ 2.8.6 ➞ the fast Fourier transform was introduced as using recursion to implement a Fourier transform in $\mathcal{O}(N\log_2N)$ computations, significantly reducing the computational cost of computing the Fourier transform, especially for large $N$. A 'one layer' fast Fourier transform was presented which split the input function into two, and applied the twiddle factor to all values in the layer before calling the matrix-based DFT. This code is replicated below.
Step6: We can easily show that each of these functions produce the same results by introducting a discrete test function $x$ and showing that the same results are reported by each function call
Step7: We can also time each function to report of the amount of time is takes to return a finished spectrum.
Step8: As we can see the matrix DFT is significatly faster than the double loop DFT, this is because of the fast vectorization functions in numpy. And, the 'one-layer' FFT is about twice as fast as the matrix DFT because of the FFT architecture. We can go one fast and use the built-in numpy FFT | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
import cmath
Explanation: Implementation of a Radix-2 Fast Fourier Transform
Import standard modules:
End of explanation
def loop_DFT(x):
Implementing the DFT in a double loop
Input: x = the vector we want to find the DFT of
#Get the length of the vector (will only work for 1D arrays)
N = x.size
#Create vector to store result in
X = np.zeros(N, dtype=complex)
for k in range(N):
for n in range(N):
X[k] += np.exp(-1j * 2.0* np.pi* k * n / N) * x[n]
return X
def matrix_DFT(x):
Implementing the DFT in vectorised form
Input: x = the vector we want to find the DFT of
#Get the length of the vector (will only work for 1D arrays)
N = x.size
#Create vector to store result in
n = np.arange(N)
k = n.reshape((N,1))
K = np.exp(-1j * 2.0 * np.pi * k * n / N)
return K.dot(x)
Explanation: This assignment is to implement a python-based Fast Fourier Transform (FFT). Building on $\S$ 2.8 ➞ we will implement a 1-D radix-2 Cooley-Tukey-based FFT using both decimation in time (DIT) and decimation in frequency (DIF) for an $N = 2^n$ input function.
From $\S$ 2.8.2 ➞ the discrete Fourier transform (DFT) is defined as:
$$ \mathscr{F}{\rm D}{y}_k = Y_k = \sum{n\,=\,0}^{N-1} y_n\,e^{-\imath 2\pi \frac{nk}{N}}, $$
That is, the $k^{th}$ element of the Fourier transformed spectrum $Y$ is a sum over all $n$ elements of the function $y$, each multipled by a complex twiddle factor $e^{-\imath 2\pi \frac{nk}{N}}$. In $\S$ 2.8.5 ➞ two methods for computing the DFT for a size $N = 2^n$ discrete function. A double loop to compute all elements of the Fourier-transformed spectrum, and a matrix multiplication by generating the Fourier kernel $K$. The compute time to perform the DFT is $\mathcal{O}(N^2)$, this is it takes $cN^2$ operations where $c > 1$ is a constant factor. Though as note in $\S$ 2.8.5 ➞ the matrix implementation is much fast that the loop because this algorithm takes advantage of fast vector math libraries.
The DFT code is replicated here as it will be used to compare our implementation of the FFT:
End of explanation
def one_layer_FFT(x):
An implementation of the 1D Cooley-Tukey FFT using one layer
N = x.size
if N%2 > 0:
print("Warning: length of x is not a power of two, returning DFT")
return matrix_DFT(x)
else:
X_even = matrix_DFT(x[::2])
X_odd = matrix_DFT(x[1::2])
factor = np.exp(-2j * np.pi * np.arange(N) / N)
return np.concatenate([X_even + factor[:N // 2] * X_odd, X_even + factor[N // 2:] * X_odd])
Explanation: In $\S$ 2.8.6 ➞ the fast Fourier transform was introduced as using recursion to implement a Fourier transform in $\mathcal{O}(N\log_2N)$ computations, significantly reducing the computational cost of computing the Fourier transform, especially for large $N$. A 'one layer' fast Fourier transform was presented which split the input function into two, and applied the twiddle factor to all values in the layer before calling the matrix-based DFT. This code is replicated below.
End of explanation
x = np.random.random(256) # create random vector to take the DFT of
print(np.allclose(loop_DFT(x), matrix_DFT(x)))   # returns True if all values are equal (within numerical error)
print(np.allclose(matrix_DFT(x), one_layer_FFT(x)))   # returns True if all values are equal (within numerical error)
Explanation: We can easily show that each of these functions produces the same results by introducing a discrete test function $x$ and showing that the same results are reported by each function call:
End of explanation
print('Double Loop DFT:')
%timeit loop_DFT(x)
print('\nMatrix DFT:')
%timeit matrix_DFT(x)
print('\nOne Layer FFT + Matrix DFT:')
%timeit one_layer_FFT(x)
Explanation: We can also time each function to report the amount of time it takes to return a finished spectrum.
End of explanation
print(np.allclose(one_layer_FFT(x), np.fft.fft(x)))
print('numpy FFT:')
%timeit np.fft.fft(x)
Explanation: As we can see, the matrix DFT is significantly faster than the double loop DFT; this is because of the fast vectorization functions in numpy. And the 'one-layer' FFT is about twice as fast as the matrix DFT because of the FFT architecture. We can go one step further and use the built-in numpy FFT:
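For reference, here is a sketch of the fully recursive radix-2 decimation-in-time FFT that this assignment builds towards (an illustration, not part of the original assignment text; it assumes the input length is a power of two):
def recursive_FFT(x):
    # Recursive radix-2 Cooley-Tukey DIT FFT (illustrative sketch)
    N = x.size
    if N <= 1:
        return x.astype(complex)
    X_even = recursive_FFT(x[::2])    # FFT of even-indexed samples
    X_odd = recursive_FFT(x[1::2])    # FFT of odd-indexed samples
    factor = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors
    return np.concatenate([X_even + factor * X_odd, X_even - factor * X_odd])

print(np.allclose(recursive_FFT(x), np.fft.fft(x)))   # expect True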
End of explanation |
15,582 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Practical PyTorch
Step1: Creating the Network
This network extends the last tutorial's RNN with an extra argument for the category tensor, which is concatenated along with the others. The category tensor is a one-hot vector just like the letter input.
We will interpret the output as the probability of the next letter. When sampling, the most likely output letter is used as the next input letter.
I added a second linear layer o2o (after combining hidden and output) to give it more muscle to work with. There's also a dropout layer, which randomly zeros parts of its input with a given probability (here 0.1) and is usually used to fuzz inputs to prevent overfitting. Here we're using it towards the end of the network to purposely add some chaos and increase sampling variety.
Step2: Preparing for Training
First of all, helper functions to get random pairs of (category, line)
Step3: For each timestep (that is, for each letter in a training word) the inputs of the network will be (category, current letter, hidden state) and the outputs will be (next letter, next hidden state). So for each training set, we'll need the category, a set of input letters, and a set of output/target letters.
Since we are predicting the next letter from the current letter for each timestep, the letter pairs are groups of consecutive letters from the line - e.g. for "ABCD<EOS>" we would create ("A", "B"), ("B", "C"), ("C", "D"), ("D", "EOS").
The category tensor is a one-hot tensor of size <1 x n_categories>. When training we feed it to the network at every timestep - this is a design choice, it could have been included as part of initial hidden state or some other strategy.
Step4: For convenience during training we'll make a random_training_set function that fetches a random (category, line) pair and turns them into the required (category, input, target) tensors.
Step5: Training the Network
In contrast to classification, where only the last output is used, we are making a prediction at every step, so we are calculating loss at every step.
The magic of autograd allows you to simply sum these losses at each step and call backward at the end. But don't ask me why initializing loss with 0 works.
Step6: To keep track of how long training takes I am adding a time_since(t) function which returns a human readable string
Step7: Training is business as usual - call train a bunch of times and wait a few minutes, printing the current time and loss every print_every epochs, and keeping store of an average loss per plot_every epochs in all_losses for plotting later.
Step8: Plotting the Network
Plotting the historical loss from all_losses shows the network learning
Step9: Sampling the Network
To sample we give the network a letter and ask what the next one is, feed that in as the next letter, and repeat until the EOS token.
Create tensors for input category, starting letter, and empty hidden state
Create a string output_str with the starting letter
Up to a maximum output length,
Feed the current letter to the network
Get the next letter from highest output, and next hidden state
If the letter is EOS, stop here
If a regular letter, add to output_str and continue
Return the final name
Note | Python Code:
import glob
import unicodedata
import string
all_letters = string.ascii_letters + " .,;'-"
n_letters = len(all_letters) + 1 # Plus EOS marker
EOS = n_letters - 1
# Turn a Unicode string to plain ASCII, thanks to http://stackoverflow.com/a/518232/2809427
def unicode_to_ascii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
and c in all_letters
)
print(unicode_to_ascii("O'Néàl"))
# Read a file and split into lines
def read_lines(filename):
lines = open(filename).read().strip().split('\n')
return [unicode_to_ascii(line) for line in lines]
# Build the category_lines dictionary, a list of lines per category
category_lines = {}
all_categories = []
for filename in glob.glob('../data/names/*.txt'):
category = filename.split('/')[-1].split('.')[0]
all_categories.append(category)
lines = read_lines(filename)
category_lines[category] = lines
n_categories = len(all_categories)
print('# categories:', n_categories, all_categories)
Explanation: Practical PyTorch: Generating Names with a Conditional Character-Level RNN
In the last tutorial we used a RNN to classify names into their language of origin. This time we'll turn around and generate names from languages. This model will improve upon the RNN we used to generate Shakespeare one character at a time by adding another input (representing the language) so we can specify what kind of name to generate.
```
python generate.py Russian
Rovakov
Uantov
Shavakov
python generate.py German
Gerren
Ereng
Rosher
python generate.py Spanish
Salla
Parer
Allan
python generate.py Chinese
Chan
Hang
Iun
```
Being able to "prime" the generator with a specific category brings us a step closer to the Sequence to Sequence model used for machine translation.
Recommended Reading
I assume you have at least installed PyTorch, know Python, and understand Tensors:
http://pytorch.org/ For installation instructions
Deep Learning with PyTorch: A 60-minute Blitz to get started with PyTorch in general
jcjohnson's PyTorch examples for an in depth overview
Introduction to PyTorch for former Torchies if you are former Lua Torch user
It would also be useful to know about RNNs and how they work:
The Unreasonable Effectiveness of Recurrent Neural Networks shows a bunch of real life examples
Understanding LSTM Networks is about LSTMs specifically but also informative about RNNs in general
I also suggest the previous tutorials:
Classifying Names with a Character-Level RNN for using an RNN to classify text into categories
Generating Shakespeare with a Character-Level RNN for using an RNN to generate one character at a time
Preparing the Data
See Classifying Names with a Character-Level RNN for more detail - we're using the exact same dataset. In short, there are a bunch of plain text files data/names/[Language].txt with a name per line. We split lines into an array, convert Unicode to ASCII, and end up with a dictionary {language: [names ...]}.
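As a quick check (not part of the original tutorial) you can peek at a few of the loaded names -- 'Italian' is one of the languages in the dataset:
print(category_lines['Italian'][:5])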
End of explanation
import torch
import torch.nn as nn
from torch.autograd import Variable
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(RNN, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.i2h = nn.Linear(n_categories + input_size + hidden_size, hidden_size)
self.i2o = nn.Linear(n_categories + input_size + hidden_size, output_size)
self.o2o = nn.Linear(hidden_size + output_size, output_size)
self.dropout = nn.Dropout(0.1)
self.softmax = nn.LogSoftmax()
def forward(self, category, input, hidden):
input_combined = torch.cat((category, input, hidden), 1)
hidden = self.i2h(input_combined)
output = self.i2o(input_combined)
output_combined = torch.cat((hidden, output), 1)
output = self.o2o(output_combined)
output = self.dropout(output)
return output, hidden
def init_hidden(self):
return Variable(torch.zeros(1, self.hidden_size))
Explanation: Creating the Network
This network extends the last tutorial's RNN with an extra argument for the category tensor, which is concatenated along with the others. The category tensor is a one-hot vector just like the letter input.
We will interpret the output as the probability of the next letter. When sampling, the most likely output letter is used as the next input letter.
I added a second linear layer o2o (after combining hidden and output) to give it more muscle to work with. There's also a dropout layer, which randomly zeros parts of its input with a given probability (here 0.1) and is usually used to fuzz inputs to prevent overfitting. Here we're using it towards the end of the network to purposely add some chaos and increase sampling variety.
End of explanation
import random
# Get a random category and random line from that category
def random_training_pair():
category = random.choice(all_categories)
line = random.choice(category_lines[category])
return category, line
Explanation: Preparing for Training
First of all, helper functions to get random pairs of (category, line):
End of explanation
# One-hot vector for category
def make_category_input(category):
li = all_categories.index(category)
tensor = torch.zeros(1, n_categories)
tensor[0][li] = 1
return Variable(tensor)
# One-hot matrix of first to last letters (not including EOS) for input
def make_chars_input(chars):
tensor = torch.zeros(len(chars), n_letters)
for ci in range(len(chars)):
char = chars[ci]
tensor[ci][all_letters.find(char)] = 1
tensor = tensor.view(-1, 1, n_letters)
return Variable(tensor)
# LongTensor of second letter to end (EOS) for target
def make_target(line):
letter_indexes = [all_letters.find(line[li]) for li in range(1, len(line))]
letter_indexes.append(n_letters - 1) # EOS
tensor = torch.LongTensor(letter_indexes)
return Variable(tensor)
Explanation: For each timestep (that is, for each letter in a training word) the inputs of the network will be (category, current letter, hidden state) and the outputs will be (next letter, next hidden state). So for each training set, we'll need the category, a set of input letters, and a set of output/target letters.
Since we are predicting the next letter from the current letter for each timestep, the letter pairs are groups of consecutive letters from the line - e.g. for "ABCD<EOS>" we would create ("A", "B"), ("B", "C"), ("C", "D"), ("D", "EOS").
The category tensor is a one-hot tensor of size <1 x n_categories>. When training we feed it to the network at every timestep - this is a design choice, it could have been included as part of initial hidden state or some other strategy.
End of explanation
# Make category, input, and target tensors from a random category, line pair
def random_training_set():
category, line = random_training_pair()
category_input = make_category_input(category)
line_input = make_chars_input(line)
line_target = make_target(line)
return category_input, line_input, line_target
Explanation: For convenience during training we'll make a random_training_set function that fetches a random (category, line) pair and turns them into the required (category, input, target) tensors.
End of explanation
def train(category_tensor, input_line_tensor, target_line_tensor):
hidden = rnn.init_hidden()
optimizer.zero_grad()
loss = 0
for i in range(input_line_tensor.size()[0]):
output, hidden = rnn(category_tensor, input_line_tensor[i], hidden)
loss += criterion(output, target_line_tensor[i])
loss.backward()
optimizer.step()
return output, loss.data[0] / input_line_tensor.size()[0]
Explanation: Training the Network
In contrast to classification, where only the last output is used, we are making a prediction at every step, so we are calculating loss at every step.
The magic of autograd allows you to simply sum these losses at each step and call backward at the end. Initializing loss with the plain integer 0 works because the first += promotes it to a proper autograd Variable, after which the usual graph bookkeeping applies.
End of explanation
import time
import math
def time_since(t):
now = time.time()
s = now - t
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
Explanation: To keep track of how long training takes I am adding a time_since(t) function which returns a human readable string:
End of explanation
n_epochs = 100000
print_every = 5000
plot_every = 500
all_losses = []
loss_avg = 0 # Zero every plot_every epochs to keep a running average
learning_rate = 0.0005
rnn = RNN(n_letters, 128, n_letters)
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
start = time.time()
for epoch in range(1, n_epochs + 1):
output, loss = train(*random_training_set())
loss_avg += loss
if epoch % print_every == 0:
print('%s (%d %d%%) %.4f' % (time_since(start), epoch, epoch / n_epochs * 100, loss))
if epoch % plot_every == 0:
all_losses.append(loss_avg / plot_every)
loss_avg = 0
Explanation: Training is business as usual - call train a bunch of times and wait a few minutes, printing the current time and loss every print_every epochs, and keeping store of an average loss per plot_every epochs in all_losses for plotting later.
End of explanation
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
%matplotlib inline
plt.figure()
plt.plot(all_losses)
Explanation: Plotting the Network
Plotting the historical loss from all_losses shows the network learning:
End of explanation
max_length = 20
# Generate given a category and starting letter
def generate_one(category, start_char='A', temperature=0.5):
category_input = make_category_input(category)
chars_input = make_chars_input(start_char)
hidden = rnn.init_hidden()
output_str = start_char
for i in range(max_length):
output, hidden = rnn(category_input, chars_input[0], hidden)
# Sample as a multinomial distribution
output_dist = output.data.view(-1).div(temperature).exp()
top_i = torch.multinomial(output_dist, 1)[0]
# Stop at EOS, or add to output_str
if top_i == EOS:
break
else:
char = all_letters[top_i]
output_str += char
chars_input = make_chars_input(char)
return output_str
# Get multiple samples from one category and multiple starting letters
def generate(category, start_chars='ABC'):
for start_char in start_chars:
print(generate_one(category, start_char))
generate('Russian', 'RUS')
generate('German', 'GER')
generate('Spanish', 'SPA')
generate('Chinese', 'CHI')
Explanation: Sampling the Network
To sample we give the network a letter and ask what the next one is, feed that in as the next letter, and repeat until the EOS token.
Create tensors for input category, starting letter, and empty hidden state
Create a string output_str with the starting letter
Up to a maximum output length,
Feed the current letter to the network
Get the next letter from highest output, and next hidden state
If the letter is EOS, stop here
If a regular letter, add to output_str and continue
Return the final name
Note: Rather than supplying a starting letter every time we generate, we could have trained with a "start of string" token and had the network choose its own starting letter.
End of explanation |
15,583 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
syncID
Step1: Data Institue Participants
Step3: If you get the following message
<p>
<center><strong>ModuleNotFoundError</strong></center>
<img src="No_module_named_gdal.png" style="width
Step4: After running this cell, we can call the function, as below. Note that you need to specify the relative path (as shown here with the ./, indicating that file is saved in your working directory) or the absolute path (eg. D
Step5: We can look at the dimensions of this tile by using the .shape method
Step6: We can list the metadata information as follows
Step7: Next, we'll define a function to plot the array data. Run the cell below
Step8: Now run this function using the inputs you defined earlier
Step9: Lastly, we can plot a histogram of the first band (red), which we can extract by using splicing. Since Python is 0-based, to extract all values of the first band, we can use | Python Code:
import sys
sys.version
Explanation: syncID: 67a5e95e1b7445aca7d7750b75c0ee98
title: "Plotting a NEON RGB Camera Image (GeoTIFF) in Python"
description: "This lesson is a brief introduction to RGB camera images and the GeoTIFF raster format in Python."
dateCreated: 2018-06-30
authors: Bridget Hass,
contributors:
estimatedTime:
packagesLibraries:
topics: data-analysis, data-visualization, spatial-data-gis
languagesTool: python
dataProduct: DP1.0001.01
code1: code/Python/remote-sensing/rgb-camera/plot-neon-rgb-camera-data.ipynb
tutorialSeries: jupyter-notebooks
urlTitle: plot-neon-rgb-py
This tutorial introduces NEON RGB camera images and functions to read in and plot GeoTIFF rasters in Python. In this tutorial, we will read in an RGB camera tile of the NEON Smithsonian Environmental Research Center (SERC) site. We will run the user-defined functions RGBraster2array and plotRGBimage to read in the image as an array, plot an RGB image of this raster, and plot a histogram of the intensities of one of the three bands.
Objectives
After completing this tutorial, you will be able to:
Plot a NEON RGB Camera Tile (Data Product DP1.0001.01)
Plot a histogram of a single band of an RGB Camera Tile
Download the Data
Download the NEON GeoTiFF file of the
<a href="https://neondata.sharefile.com/d-s274babd550a45e7a">camera (RGB) imagery tile</a>
collected over the Smithsonian Environmental Research Station (SERC) NEON field site. Place this data in a location where you know where it is. You will need to know the file path to this data.
Background
As part of the
<a href="https://www.neonscience.org/data-collection/airborne-remote-sensing" target="_blank"> NEON Airborn Operation Platform's</a>
suite of remote sensing instruments, the digital camera producing high-resolution (0.25 m) photographs of the earth’s surface. The camera records light energy that has reflected off the ground in the visible part (red, green and blue) of the light spectrum. Often the camera images are used to provide context for the hyperspectral and LiDAR data.
Note: Don't worry about understanding everything in the raster2array function at this point. If you are curious, we encourage you to read the docstrings, but we will go into more detail during the data institute.
Data Tip: To run a cell you can either select Cell > Run Cells with your cursor in the cell you want to run, or use the shortcut key Shift + Enter. For more handy shortcuts, refer to the tab Help > Keyboard Shortcuts.
Set up Enviornment
First, make sure that you are running the Python 3.5 environment by running the code in the cell below:
End of explanation
import gdal
Explanation: Data Institue Participants: You should be running 3.5.x. If this is not the case, close this console (both the notebook and Home page), and shut down your command prompt that is running your Jupyter notebook. Re-open your command prompt, navigate to your workking directory, and activate your p35 environment by typing activate p35 in Windows or source activate p35 in Mac if you followed the pre-institute computer set-up instructions. Once you see (p35) at the beginning of your command prompt, you can type jupyter notebook to run your notebook.
<p>
<center><strong>Activating `Python 3.5` environment from the command prompt.</strong></center>
<img src="/activate_py35.png" style="width: 600px;"/>
</p>
Other tutorial users: Jupyter Notebooks is not required to complete this tutorial. However, as of June 2018 the GDAL package wasn't fully compatible with Python 3.6 so we recommend using a Python 3.5 environment.
Now that you are in the right environment, first we will import the gdal package, which contains tools for programming and manipulating the Geospatial Data Abstraction Library (GDAL). For more information on GDAL, please refer to <a href="http://www.gdal.org/" target="_blank">gdal.org</a>.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
def RGBraster2array(RGB_geotif):
RGBraster2array reads in a NEON AOP geotif file and returns
a numpy array, and header containing associated metadata with spatial information.
--------
Parameters
RGB_geotif -- full or relative path and name of reflectance hdf5 file
--------
Returns
--------
array:
numpy array of geotif values
metadata:
dictionary containing the following metadata (all strings):
array_rows
array_cols
bands
driver
projection
geotransform
pixelWidth
pixelHeight
extent
noDataValue
scaleFactor
--------
Example Execution:
--------
RGB_geotif = '2017_SERC_2_368000_4306000_image.tif'
RGBcam_array, RGBcam_metadata = RGBraster2array(RGB_geotif)
metadata = {}
dataset = gdal.Open(RGB_geotif)
metadata['array_rows'] = dataset.RasterYSize
metadata['array_cols'] = dataset.RasterXSize
metadata['bands'] = dataset.RasterCount
metadata['driver'] = dataset.GetDriver().LongName
metadata['projection'] = dataset.GetProjection()
metadata['geotransform'] = dataset.GetGeoTransform()
mapinfo = dataset.GetGeoTransform()
metadata['pixelWidth'] = mapinfo[1]
metadata['pixelHeight'] = mapinfo[5]
metadata['ext_dict'] = {}
metadata['ext_dict']['xMin'] = mapinfo[0]
metadata['ext_dict']['xMax'] = mapinfo[0] + dataset.RasterXSize*mapinfo[1]
metadata['ext_dict']['yMin'] = mapinfo[3] + dataset.RasterYSize*mapinfo[5]
metadata['ext_dict']['yMax'] = mapinfo[3]
metadata['extent'] = (metadata['ext_dict']['xMin'],metadata['ext_dict']['xMax'],
metadata['ext_dict']['yMin'],metadata['ext_dict']['yMax'])
raster = dataset.GetRasterBand(1)
array_shape = raster.ReadAsArray(0,0,metadata['array_cols'],metadata['array_rows']).astype(np.float).shape
metadata['noDataValue'] = raster.GetNoDataValue()
metadata['scaleFactor'] = raster.GetScale()
array = np.zeros((array_shape[0],array_shape[1],dataset.RasterCount),'uint8') #pre-allocate stackedArray matrix
for i in range(1, dataset.RasterCount+1):
band = dataset.GetRasterBand(i).ReadAsArray(0,0,metadata['array_cols'],metadata['array_rows']).astype(np.float)
band[band==metadata['noDataValue']]=np.nan
band = band/metadata['scaleFactor']
array[...,i-1] = band
return array, metadata
Explanation: If you get the following message
<p>
<center><strong>ModuleNotFoundError</strong></center>
<img src="No_module_named_gdal.png" style="width: 600px;"/>
</p>
Troubleshooting steps --> try one of the following:
- from a Jupyter Python cell, run the command:
!conda install gdal
- from a Command Prompt (Windows) or Terminal (Mac), activate the appropriate environment
Next we will import the numpy and matplotlib packages. Numpy stands for Numerical Python This is a standard package that comes with the Anaconda installation of Python, so you should not need to do any additional steps to install it.
End of explanation
RGB_geotif = './2017_SERC_2_368000_4306000_image.tif'
SERC_RGBcam_array, SERC_RGBcam_metadata = RGBraster2array(RGB_geotif)
Explanation: After running this cell, we can call the function, as below. Note that you need to specify the relative path (as shown here with the ./, indicating that file is saved in your working directory) or the absolute path (eg. D:\\RSDI_2018\\data) - you'll need to use double slashes to indicate that you are pointing to a directory. Please use the correct file path to where you saved the GeoTIFF file downloaded at the befining of the lesson.
End of explanation
SERC_RGBcam_array.shape
Explanation: We can look at the dimensions of this tile by using the .shape method:
End of explanation
#Display information stored in header
for key in sorted(SERC_RGBcam_metadata.keys()):
print(key)
Explanation: We can list the metadata information as follows:
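For example, the spatial information needed for plotting can be read straight from the dictionary (added example; the key names come from the function above):
print('extent:', SERC_RGBcam_metadata['extent'])
print('pixel width/height (m):', SERC_RGBcam_metadata['pixelWidth'], SERC_RGBcam_metadata['pixelHeight'])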
End of explanation
def plot_band_array(band_array,
refl_extent,
colorlimit,
ax=plt.gca(),
title='',
cbar ='on',
cmap_title='',
colormap='spectral'):
'''plot_band_array reads in and plots a single band or an rgb band combination of a reflectance array
--------
Parameters
--------
band_array: flightline array of reflectance values, created from h5refl2array function
refl_extent: extent of reflectance data to be plotted (xMin, xMax, yMin, yMax) - use metadata['extent'] from h5refl2array function
colorlimit: range of values to plot (min,max). Best to look at the histogram of reflectance values before plotting to determine colorlimit.
ax: optional, default = current axis
title: string, optional; plot title
cmap_title: string, optional; colorbar title
colormap: string, optional; see https://matplotlib.org/examples/color/colormaps_reference.html for list of colormaps
--------
Returns
plots array of single band or RGB if given a 3-band
--------
Example:
--------
plot_band_array(SERC_RGBcam_array,
SERC_RGBcam_metadata['extent'],
(1,255),
title='SERC RGB Camera Tile',
cbar='off')'''
plot = plt.imshow(band_array,extent=refl_extent,clim=colorlimit);
if cbar == 'on':
cbar = plt.colorbar(plot,aspect=40); plt.set_cmap(colormap);
cbar.set_label(cmap_title,rotation=90,labelpad=20)
plt.title(title); ax = plt.gca();
ax.ticklabel_format(useOffset=False, style='plain'); #do not use scientific notation #
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90); #rotate x tick labels 90 degrees
Explanation: Next, we'll define a function to plot the array data. Run the cell below:
End of explanation
plot_band_array(SERC_RGBcam_array,
SERC_RGBcam_metadata['extent'],
(1,255),
title='SERC RGB Camera Tile',
cbar='off')
Explanation: Now run this function using the inputs you defined earlier:
End of explanation
plt.hist(np.ravel(SERC_RGBcam_array[:,:,0]),20);
plt.title('Histogram of SERC Camera Red Band')
plt.xlabel('Brightness'); plt.ylabel('Frequency')
Explanation: Lastly, we can plot a histogram of the first band (red), which we can extract by using slicing. Since Python is 0-based, to extract all values of the first band, we can use: SERC_RGBcam_array[:,:,0]. Notes: It speeds up the algorithm to flatten the 2-D array into one dimension using numpy.ravel; 20 specifies the number of bins.
End of explanation |
15,584 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quick Start
Step1: Import the aggregation object from the module.
Step2: Create a few objects with various depths (number of moments) and widths (number of columns to compute statistics for). Here the stats1 and stats3 objects each accumulate two moments for a single column of data, and the stats2 object collects 4 statistical moments for 4 columns of data.
Step3: Add individual data values to the single column accumulation of the stats1 object. Print the object to view its state, which includes the moment values so far accumulated. Also, print the list of lists returned from the statistics() method call. Here you can see that the mean is 2.0 and the variance is 0.0.
Step4: Add entire rows (multiple columns) of values to the stats2 object. View the accumulated results. Note that when the second moment (n * Var) is 0, equivalent to a deviation of 0, the higher moments are left in there initial 0 state. The higher statistics are set to a NaN value in this case.
Step5: Remove data (UNIMPLEMENTED) from the stats2 object.
Step6: Load the stats3 object with with data and view the results.
Step7: Now aggregate that object onto the first. This only works when the shapes are the same. | Python Code:
from __future__ import print_function
Explanation: Quick Start
End of explanation
from pebaystats import dstats
Explanation: Import the aggregation object from the module.
End of explanation
stats1 = dstats(2,1)
stats2 = dstats(4,4)
stats3 = dstats(2,1)
Explanation: Create a few objects with various depths (number of moments) and widths (number of columns to compute statistics for). Here the stats1 and stats3 objects each accumulate two moments for a single column of data, and the stats2 object collects 4 statistical moments for 4 columns of data.
End of explanation
stats1.add(2)
stats1.add(2)
stats1.add(2)
print('stats1: %s' % stats1)
print('statistics: %s' % stats1.statistics())
Explanation: Add individual data values to the single column accumulation of the stats1 object. Print the object to view its state, which includes the moment values so far accumulated. Also, print the list of lists returned from the statistics() method call. Here you can see that the mean is 2.0 and the variance is 0.0.
End of explanation
stats2.add([1.2,2,3,9])
stats2.add([4.5,6,7,9])
stats2.add([8.9,0,1,9])
stats2.add([2.3,4,5,9])
print('stats2: %s' % stats2)
print('statistics: %s' % stats2.statistics(True))
Explanation: Add entire rows (multiple columns) of values to the stats2 object. View the accumulated results. Note that when the second moment (n * Var) is 0, equivalent to a deviation of 0, the higher moments are left in their initial 0 state. The higher statistics are set to a NaN value in this case.
End of explanation
# stats2.remove(1.2,2,3,9)
Explanation: Remove data (UNIMPLEMENTED) from the stats2 object.
End of explanation
stats3.add(4)
stats3.add(4)
stats3.add(4)
print('stats3: %s' % stats3)
print('statistics: %s' % stats3.statistics())
Explanation: Load the stats3 object with data and view the results.
End of explanation
stats1.aggregate(stats3)
print('stats1: %s' % stats1)
print('statistics: %s' % stats1.statistics(True))
Explanation: Now aggregate that object onto the first. This only works when the shapes are the same.
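Since stats1 saw three 2s and stats3 saw three 4s, the aggregate should describe all six values; a quick check of the expected mean and variance with numpy (added example):
import numpy as np
print(np.mean([2, 2, 2, 4, 4, 4]), np.var([2, 2, 2, 4, 4, 4]))   # expect 3.0 and 1.0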
End of explanation |
15,585 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3D Grid on GPU with Kernel Tuner
In this tutorial we are going to see how to map a series of Gaussian functions, each located at a different point on a 3D a grid. We are going to optimize the GPU code and compare its performance with the CPU implementation.
<div class="alert alert-info">
**Note
Step1: For a given center, this function returns the values of the corresponding Gaussian function mapped on the 3D grid. The grid points are here defined by the variables xgrid, ygrid and zgrid. These variables are themselves 3D grids obtained, as we will see in an instant, using the numpy.meshgrid function.
To use this function we simply have to create the grid, defined by the vectors x, y, and z. Since we want to later on send these vectors to the GPU we define them as 32-bit floats. For simplicity, we here select the interval $[-1
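A minimal sketch of that grid construction (added for illustration -- the interval [-1, 1] and the number of points are assumptions, since the original sentence is cut off here):
n = 128                                        # assumed number of grid points per axis
x = np.linspace(-1, 1, n).astype(np.float32)   # assumed interval
y = np.linspace(-1, 1, n).astype(np.float32)
z = np.linspace(-1, 1, n).astype(np.float32)
xgrid, ygrid, zgrid = np.meshgrid(x, y, z)
center = (0.0, 0.0, 0.0)                       # one Gaussian in the middle of the grid
t0 = time()
f = compute_grid(center, xgrid, ygrid, zgrid)
print('CPU time: %f s' % (time() - t0))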
Step3: Depending on your hardware it might take a few seconds for the calculations above to finish.
Let's move to the GPU
Let's now see how that looks on the GPU. We first write a kernel that does the same calculation as the function above. As you can see below, the variables block_size_x, block_size_y and block_size_z are not yet defined here. These variables are used to set the number of threads per thread block on the GPU and are the main parameters that we will optimize in this tutorial. During tuning, Kernel Tuner will automatically insert #define statements for these parameters at the top of the kernel code. So for now we don't have to specify their values.
The dimensions of the problem nx, ny, and nz, are the number of grid points in the x, y, and z dimensions. We can again use Kernel Tuner to insert these parameters into the code.
Step4: Tune the kernel
We can now use the tuner to optimize the thread block dimensions on our GPU. To do so we define the tunable parameters of our kernel using the tune_params dictionary, which assigns to each block size the values we want the tuner to explore. We also use the tunable parameters to insert the domain dimensions nx, ny, and nz.
We also define a list containing the arguments of the CUDA function (AddGrid) above. Since we only want to optimize the performance of the kernel, we only consider here one center in the middle of the grid. Note that Kernel Tuner needs either numpy.ndarray or numpy.scalar as arguments of the kernel. Hence we need to be explicit about the types of the Gaussian positions.
Step5: As mentioned earlier, the tuner will automatically insert #define statements at the top of the kernel to define the block sizes and domain dimensions, so we don't need to specify them here. Then, we simply call the tune_kernel function.
Step6: The tune_kernel function explores all the possible combinations of tunable parameters (here only the block size). For each possible kernel configuration, the tuner compiles the code and measures its execution time (by default using 7 iterations). At the end of the run, tune_kernel reports the optimal combination of the tunable parameters. The measured execution time of every benchmarked kernel is also returned by tune_kernel for programmatic access to the data.
As you can see, the range of performance is quite large. With our GPU (GeForce GTX 1080 Ti) we obtained a maximum time of 5.30 ms and a minimum of 0.84 ms. The performance of the kernel varies by a factor of 6 depending on the thread block size!
Using the optimized parameters
Now that we have determined which parameters are best suited for our application, we can specify them in our kernel and run it. In our case, the optimal block size determined by the tuner was block_size_x = 4, block_size_y = 2, block_size_z = 16. We therefore use these values here to define the block size. The grid size is simply obtained by dividing the dimensions of the problem by the corresponding block sizes.
Step7: Before using the kernel we need to specify the block size in its definition. There are different ways of doing this; here we simply replace block_size_x, block_size_y and block_size_z by the values determined by the tuner. To do that we create a dictionary that associates each block-size name with its value and simply make the substitution. Once the block sizes are specified, we can compile the kernel ourselves and get the function.
Step8: We now have to manually create the gpuarrays that correspond to the vector x, y and z as well as the 3D grid. Once all these are defined we can call the addgrid function using the gpuarrays and the block and grid size in argument. We also time the execution to compare it with the one outputed by the kernel tuner. Note that we exlicitly synchronize the CPU and GPU to obtain an accurate timing. | Python Code:
import numpy as np
import numpy.linalg as la
from time import time
def compute_grid(center,xgrid,ygrid,zgrid):
x0,y0,z0 = center
beta = -0.1
f = np.sqrt( (xgrid-x0)**2 + (ygrid-y0)**2 + (zgrid-z0)**2 )
f = np.exp(beta*f)
return f
Explanation: 3D Grid on GPU with Kernel Tuner
In this tutorial we are going to see how to map a series of Gaussian functions, each located at a different point on a 3D grid. We are going to optimize the GPU code and compare its performance with the CPU implementation.
<div class="alert alert-info">
**Note:** If you are reading this tutorial on the Kernel Tuner's documentation pages, note that you can actually run this tutorial as a Jupyter Notebook. Just clone the Kernel Tuner's [GitHub repository](http://github.com/benvanwerkhoven/kernel_tuner). Install the Kernel Tuner and Jupyter Notebooks and you're ready to go! You can start the tutorial by typing "jupyter notebook" in the "kernel_tuner/tutorial" directory.
</div>
Let's start on the CPU
Before delving into the GPU implementation, let's start with a simple CPU implementation of the problem. The problem at hand is to compute the values of the following function
\begin{equation} \nonumber
f = \sum_{i=1}^{N}\exp\left(-\beta \sqrt{(x-x_i)^2+(y-y_i)^2+(z-z_i)^2}\right)
\end{equation}
on a 3D grid. The $x$, $y$ and $z$ vectors contain the coordinates of the points in Cartesian space. We can define a simple Python function that computes the value of the function $f$ for one given Gaussian. Don't forget to execute all the code cells, like the one below, as you read through this notebook by selecting the cell and pressing shift+enter.
End of explanation
# dimension of the problem
n = 256
# define the vectors
x = np.linspace(-1,1,n).astype(np.float32)
y = np.linspace(-1,1,n).astype(np.float32)
z = np.linspace(-1,1,n).astype(np.float32)
# create meshgrids
xgrid,ygrid,zgrid = np.meshgrid(x,y,z)
cpu_grid = np.zeros_like(xgrid)
# centers
npts = 100
center = (-1 + 2*np.random.rand(npts,3)).astype(np.float32)
# compute the grid and time the operation
t0 = time()
for xyz in center:
cpu_grid += compute_grid(xyz,xgrid,ygrid,zgrid)
print('CPU Execution time %f ms' %( (time()-t0)*1000) )
Explanation: For a given center, this function returns the values of the corresponding Gaussian function mapped on the 3D grid. The grid points are here defined by the variables xgrid, ygrid and zgrid. These variables are themselves 3D grids obtained, as we will see in an instant, using the numpy.meshgrid function.
To use this function we simply have to create the grid, defined by the vectors x, y, and z. Since we want to later on send these vectors to the GPU we define them as 32-bit floats. For simplicity, we here select the interval $[-1:1]$ to define our grid. We use $n=256$ grid points in order to have a sufficiently large problem without requiring too long calculations. We then create meshgrids to be passed to the function above. We define here 100 gaussian centers that are randomly distributed within the 3D space.
End of explanation
# define a kernel template
# several parameters are available
# block sizes : bx, by, bz
# dimensions : nx, ny, nz
kernel_code = """
#include <math.h>
// a simple gaussian function
__host__ __device__ float f(float d){
float b = 0.1;
float x = exp(-b*d);
return x;
}
// the main function called below
__global__ void AddGrid(float x0, float y0, float z0, float *xvect, float *yvect, float *zvect, float *out)
{
// 3D thread
int x = threadIdx.x + block_size_x * blockIdx.x;
int y = threadIdx.y + block_size_y * blockIdx.y;
int z = threadIdx.z + block_size_z * blockIdx.z;
if ( ( x < nx ) && (y < ny) && (z < nz) )
{
float dx = xvect[x]-x0;
float dy = yvect[y]-y0;
float dz = zvect[z]-z0;
float d = sqrt(dx*dx + dy*dy + dz*dz);
out[y * nx * nz + x * nz + z] = f(d);
}
}
"""
Explanation: Depending on your hardware it might take a few seconds for the calculations above to finish.
Let's move to the GPU
Let's now see what that will look like on the GPU. We first write a kernel that does the same calculation as the function above. As you can see below, the variables block_size_x, block_size_y and block_size_z are not yet defined here. These variables are used to set the number of threads per thread block on the GPU and are the main parameters that we will optimize in this tutorial. During tuning, Kernel Tuner will automatically insert #define statements for these parameters at the top of the kernel code. So for now we don't have to specify their values.
The dimensions of the problem nx, ny, and nz, are the number of grid points in the x, y, and z dimensions. We can again use Kernel Tuner to insert these parameters into the code.
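As an illustration of what that #define insertion amounts to (a sketch only, not Kernel Tuner's actual internals), one candidate configuration effectively gets a header like the following prepended to the kernel string before compilation:
config = dict(block_size_x=4, block_size_y=2, block_size_z=16, nx=n, ny=n, nz=n)
defines = "\n".join("#define %s %d" % (k, v) for k, v in config.items())
print(defines)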
End of explanation
from collections import OrderedDict
from kernel_tuner import tune_kernel
# create the dictionary containing the tune parameters
tune_params = OrderedDict()
tune_params['block_size_x'] = [2,4,8,16,32]
tune_params['block_size_y'] = [2,4,8,16,32]
tune_params['block_size_z'] = [2,4,8,16,32]
tune_params['nx'] = [n]
tune_params['ny'] = [n]
tune_params['nz'] = [n]
# define the final grid
grid = np.zeros_like(xgrid)
# arguments of the CUDA function
x0,y0,z0 = np.float32(0),np.float32(0),np.float32(0)
args = [x0,y0,z0,x,y,z,grid]
# dimensionality
problem_size = (n,n,n)
Explanation: Tune the kernel
We can now use the tuner to optimize the thread block dimensions on our GPU. To do so we define the tunable parameters of our kernel using the tune_params dictionary, which assigns to each block size the values we want the tuner to explore. We also use the tunable parameters to insert the domain dimensions nx, ny, and nz.
We also define a list containing the arguments of the CUDA function (AddGrid) above. Since we only want to optimize the performance of the kernel, we only consider here one center in the middle of the grid. Note that Kernel Tuner needs either numpy.ndarray or numpy.scalar as arguments of the kernel. Hence we need to be explicit about the types of the Gaussian positions.
End of explanation
# call the kernel tuner
result = tune_kernel('AddGrid', kernel_code, problem_size, args, tune_params)
Explanation: As mentioned earlier, the tuner will automatically insert #define statements at the top of the kernel to define the block sizes and domain dimensions, so we don't need to specify them here. Then, we simply call the tune_kernel function.
End of explanation
from pycuda import driver, compiler, gpuarray, tools
import pycuda.autoinit
# optimal values of the block size
block = [4, 2, 16]
# corresponding grid size
grid_dim = [int(np.ceil(n/b)) for b, n in zip(block, problem_size)]
Explanation: The tune_kernel function explores all the possible combinations of tunable parameters (here only the block size). For each possible kernel configuration, the tuner compiles the code and measures its execution time (by default using 7 iterations). At the end of the run, tune_kernel reports the optimal combination of the tunable parameters. The measured execution time of every benchmarked kernel is also returned by tune_kernel for programmatic access to the data.
As you can see, the range of performance is quite large. With our GPU (GeForce GTX 1080 Ti) we obtained a maximum time of 5.30 ms and a minimum of 0.84 ms. The performance of the kernel varies by a factor of 6 depending on the thread block size!
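As a sketch of that programmatic access (the exact return type depends on your Kernel Tuner version: recent releases return a (results, env) tuple, older ones just the list of per-configuration dicts, each holding the parameter values and a 'time' entry):
results = result[0] if isinstance(result, tuple) else result
best = min(results, key=lambda cfg: cfg['time'])
print('fastest configuration:',
      {k: best[k] for k in ('block_size_x', 'block_size_y', 'block_size_z')},
      'at %.2f ms' % best['time'])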
Using the optimized parameters
Now that we have determined which parameters are best suited for our application, we can specify them in our kernel and run it. In our case, the optimal block size determined by the tuner was block_size_x = 4, block_size_y = 2, block_size_z = 16. We therefore use these values here to define the block size. The grid size is simply obtained by dividing the dimensions of the problem by the corresponding block sizes.
End of explanation
# change the values of the block sizes in the kernel
fixed_params = OrderedDict()
fixed_params['block_size_x'] = block[0]
fixed_params['block_size_y'] = block[1]
fixed_params['block_size_z'] = block[2]
fixed_params['nx'] = n
fixed_params['ny'] = n
fixed_params['nz'] = n
for k,v in fixed_params.items():
kernel_code = kernel_code.replace(k,str(v))
# compile the kernel_code and extract the function
mod = compiler.SourceModule(kernel_code)
addgrid = mod.get_function('AddGrid')
Explanation: Before using the kernel we need to specify the block size in its definition. There are different ways of doing this; here we simply replace block_size_x, block_size_y and block_size_z by the values determined by the tuner. To do that we create a dictionary that associates each block-size name with its value and simply make the substitution. Once the block sizes are specified, we can compile the kernel ourselves and get the function.
End of explanation
# create the gpu arrays
xgpu = gpuarray.to_gpu(x)
ygpu = gpuarray.to_gpu(y)
zgpu = gpuarray.to_gpu(z)
grid_gpu = gpuarray.zeros((n,n,n), np.float32)
# compute the grid and time the performance
t0 = time()
for xyz in center:
x0,y0,z0 = xyz
addgrid(x0,y0,z0,xgpu,ygpu,zgpu,grid_gpu,block = tuple(block),grid=tuple(grid_dim))
driver.Context.synchronize()
print('Final GPU time : %f ms' %((time()-t0)*1000))
Explanation: We now have to manually create the gpuarrays that correspond to the vectors x, y and z as well as the 3D grid. Once all of these are defined we can call the addgrid function with the gpuarrays and the block and grid sizes as arguments. We also time the execution to compare it with the time reported by Kernel Tuner. Note that we explicitly synchronize the CPU and GPU to obtain an accurate timing.
End of explanation |
15,586 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
From VHF sources to lightning flashes
Using space-time criteria, we will group LMA data into flashes. We will also create a 2D gridded version of these flash data to look at VHF source density, flash extent density, and average flash area.
Background reading
Step1: Configurable Parameters
These are the parameters that will be used by the flash sorting and gridding algorithms - it's a fairly small list.
These parameters are the ones we would expect to adjust when controlling for noise. We also require that a flash have more than one point!
Step2: Read VHF source data and identify flashes
Includes filtering by flash event count.
Step3: Contents of the flash-sorted LMA data structure.
Once the cell above finishes running, you'll see that our LMA data structure has grown. There is a new number_of_flashes dimension, and associated flash data variables. We still have all of the events, but they have been filtered to be just the ones that meet the stations and chi_2 criteria.
Perhaps the two most important variables in terms of understanding the data are the event_parent_flash_id and flash_id variables. Their dimensions are number_of_events and number_of_flashes, respectively. Each flash has a unique flash_id, an unsigned integer, and each event in that flash is labeled with that integer. Therefore, those two variables define which events go with which flashes, and let us pair up flash-level statistics (such as flash_area) with the events used to calculate those statistics.
Step4: Let's see how many events are in our flashes. We can also check for an expected correlation
Step5: Plot the VHF event data
xlma-python has built-in plotting capabilities to make a standard plot style that has a plan view, two vertical projections, and a time height view of the events.
We don't actually need the flash-sorted data, but it doesn't hurt to have it in the dataset.
Step6: Saving the data … or not
At this stage we could save the flash-sorted data to a NetCDF file. However, we're going to pass on that step right now, in favor of adding a basic gridded dataset to our saved file.
Step7: Above, we set latlon_grid = False; instead, let's use the 500 m stereographic grid defined by Stony Brook University's cell-tracking team. It's defined in lmalibtracer and was imported above. The definition is not too complicated - it only requires specifying the radius of a spherical earth and a coordinate center location. All the other coordinate transformations are handled for us by pyproj.
The next block of code is a bit long, mainly because we have repeated the blocks for defining the grid two ways. Note that we could uncomment a few lines and create a 3D grid, too. But it's useful to have flash extent density as a 2D instead of 3D analysis for quick-look visualizations, and it's also sufficient for most of our science.
Step8: Looking at the gridded data
Once the cells above have been run, a new data structure (both_ds) will have the events, flashes, and their gridded versions. Note the new dimensions for the center and edges of the grid boxes (grid_time
Step9: Add widget interactivity
It's a bit tedious to change the time index manually. Let's Interact! It's possible to build much nicer interfaces, but this is enough for a quick look.
Step10: Aggregating in time
With these low flash rates, it's hard to get a sense of everything in the dataset, so let's sum across the time dimension. (xarray also has a rolling window function.)
Step11: Finally, write the data.
Once we save the data to disk, we can reload the data an re-run any of the plots above without reprocessing everything. We'll make files like this from the post-processed LMA data for each day during ESCAPE/TRACER, and they will be one of our data deliverables to the NCAR EOL catalog, in accordance with the ESCAPE data management plan as proposed in the grant. | Python Code:
import glob
import numpy as np
import datetime
import xarray as xr
import pandas as pd
import pyproj as proj4
from pyxlma.lmalib.io import read as lma_read
from pyxlma.lmalib.flash.cluster import cluster_flashes
from pyxlma.lmalib.flash.properties import flash_stats, filter_flashes
from pyxlma.lmalib.grid import create_regular_grid, assign_regular_bins, events_to_grid
from pyxlma.plot.xlma_plot_feature import color_by_time, plot_points, setup_hist, plot_3d_grid, subset
from pyxlma.plot.xlma_base_plot import subplot_labels, inset_view, BlankPlot
import sys, glob
from lmalibtracer.coords.sbu import get_sbu_proj as get_coord_proj
%matplotlib widget
import matplotlib.pyplot as plt
Explanation: From VHF sources to lightning flashes
Using space-time criteria, we will group LMA data into flashes. We will also create a 2D gridded version of these flash data to look at VHF source density, flash extent density, and average flash area.
Background reading: Bruning and MacGorman (2013, JAS) show how flash area is defined from the convex hull of the VHF point sources (Fig. 2) and show what flash extent density and average flash area look like for a supercell (Fig. 4). We'll use the same definitions here.
End of explanation
filenames = glob.glob('/data/Houston/130619/LYLOUT_130619_2[0-1]*.dat.gz')
# Adjust this to match the length of the dataset we read in. It is used to set up
# the total duration of the gridding along the time dimension.
duration_min = 120
# Source to flash
chi2max = 1.0
stationsmin = 6
min_events_per_flash = 5
# There is a parameter to change the gridding from the 1 min default time resolution.
grid_time_delta_sec = 60
resolution_m = 1000
latlon_grid=False # False uses the stereographic coordinate grid.
Explanation: Configurable Parameters
These are the parameters that will be used by the flash sorting and gridding algorithms - it's a fairly small list.
These parameters are the ones we would expect to adjust when controlling for noise. We also require that a flash have more than one point!
End of explanation
print("Reading files")
lma_data, starttime = lma_read.dataset(filenames)
good_events = (lma_data.event_stations >= stationsmin) & (lma_data.event_chi2 <= chi2max)
lma_data = lma_data[{'number_of_events':good_events}]
dttuple = [starttime, starttime+datetime.timedelta(minutes=duration_min)]
# dttuple = lma_data.Datetime.min(), lma_data.Datetime.max()
tstring = 'LMA {}-{}'.format(dttuple[0].strftime('%H%M'),
dttuple[1].strftime('%H%M UTC %d %B %Y '))
print(tstring)
print("Clustering flashes")
ds = cluster_flashes(lma_data)
print("Calculating flash stats")
ds = flash_stats(ds)
ds = filter_flashes(ds, flash_event_count=(min_events_per_flash, None))
# ds0 = ds.copy()
print(ds)
Explanation: Read VHF source data and identify flashes
Includes filtering by flash event count.
End of explanation
print(ds.event_parent_flash_id)
print('-----')
print('-----')
print(ds.flash_id)
Explanation: Contents of the flash-sorted LMA data structure.
Once the cell above finishes running, you'll see that our LMA data structure has grown. There is a new number_of_flashes dimension, and associated flash data variables. We still have all of the events, but they have been filtered to be just the ones that meet the stations and chi_2 criteria.
Perhaps the two most important variables in terms of understanding the data are the event_parent_flash_id and flash_id variables. Their dimensions are number_of_events and number_of_flashes, respectively. Each flash has a unique flash_id, an unsigned integer, and each event in that flash is labeled with that integer. Therefore, those two variables define which events go with which flashes, and let us pair up flash-level statistics (such as flash_area) with the events used to calculate those statistics.
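As a quick illustration (plain numpy, not an xlma-python API) of how this labeling ties the two dimensions together, we can count how many events carry each flash_id and compare against flash_event_count:
import numpy as np
ids, counts = np.unique(ds.event_parent_flash_id.values, return_counts=True)
print(dict(zip(ids[:5], counts[:5])))   # events per flash for the first few flash_ids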
End of explanation
event_count_bins = 2**np.arange(12)-0.5
print(event_count_bins.astype(int))
fig, ax = plt.subplots(1,1)
art = ds.flash_event_count.plot.hist(bins=event_count_bins, ax=ax)
ax.semilogx()
event_count_bins = 2**np.arange(12)-0.5
print(event_count_bins.astype(int))
fig, ax = plt.subplots(1,1)
art = ds.plot.scatter('flash_event_count', 'flash_area', marker='s', s=1, ax=ax)
ax.semilogx()
Explanation: Let's see how many events are in our flashes. We can also check for an expected correlation: does the number of events increase with flash area?
End of explanation
alt_data = ds.event_altitude.values/1000.0
lon_data = ds.event_longitude.values
lat_data = ds.event_latitude.values
time_data = pd.Series(ds.event_time) # because time comparisons
chi_data = ds.event_chi2.values
station_data = ds.event_stations.values
# Plot color map and marker size
plot_cmap = 'plasma'
plot_s = 5
tlim_sub = [pd.to_datetime(starttime), pd.to_datetime(pd.to_datetime(starttime) + np.asarray(60, 'timedelta64[m]'))]
tstring = 'LMA {}-{}'.format(tlim_sub[0].strftime('%H%M'),
tlim_sub[1].strftime('%H%M UTC %d %B %Y '))
clat, clon = float(lma_data.network_center_latitude), float(lma_data.network_center_longitude)
xlim = [clon-0.75, clon+0.75]
ylim = [clat-0.75, clat+0.75]
zlim = [0, 21]
xchi = 1.0
stationmin = 6.0
lon_set, lat_set, alt_set, time_set, selection = subset(
lon_data, lat_data, alt_data, time_data, chi_data, station_data,
xlim, ylim, zlim, tlim_sub, xchi, stationmin)
bk_plot = BlankPlot(pd.to_datetime(tlim_sub[0]), bkgmap=True,
xlim=xlim, ylim=ylim, zlim=zlim, tlim=tlim_sub, title=tstring)
# Add a view of where the subset is
xdiv = ydiv = 0.1
inset_view(bk_plot, lon_data, lat_data, xlim, ylim, xdiv, ydiv,
buffer=0.5, inset_size=0.15, plot_cmap = 'plasma', bkgmap = True)
# Add some subplot labels
subplot_labels(bk_plot)
# Add a range ring
bk_plot.ax_plan.tissot(rad_km=40.0, lons=clon, lats=clat, n_samples=80,
facecolor='none',edgecolor='k')
# Add the station locations
stn_art = bk_plot.ax_plan.plot(lma_data['station_longitude'],
lma_data['station_latitude'], 'wD', mec='k', ms=5)
if len(lon_set)==0:
bk_plot.ax_hist.text(0.02,1,'No Sources',fontsize=12)
else:
plot_vmin, plot_vmax, plot_c = color_by_time(time_set, tlim_sub)
plot_points(bk_plot, lon_set, lat_set, alt_set, time_set,
plot_cmap, plot_s, plot_vmin, plot_vmax, plot_c)
plt.show()
# We can save a publication-ready plot using this line … and you can change to .pdf to get a vector plot.
# plt.savefig('./images/' + dttuple[0].strftime('%y%m%d') +
# '/relampago_points_' + dttuple[0].strftime('%Y%m%d_%H%M.png'))
Explanation: Plot the VHF event data
xlma-python has built-in plotting capabilities to make a standard plot style that has a plan view, two vertical projections, and a time height view of the events.
We don't actually need the flash-sorted data, but it doesn't hurt to have it in the dataset.
End of explanation
if False:
print("Writing data")
duration_sec = (dttuple[1]-dttuple[0]).total_seconds()
date_fmt = "LYLOUT_%y%m%d_%H%M%S_{0:04d}_flash.nc".format(int(duration_sec))
outfile = dttuple[0].strftime(date_fmt)
# Compress the variables.
comp = dict(zlib=True, complevel=5)
encoding = {var: comp for var in ds.data_vars}
ds.to_netcdf(outfile, encoding=encoding)
Explanation: Saving the data … or not
At this stage we could save the flash-sorted data to a NetCDF file. However, we're going to pass on that step right now, in favor of adding a basic gridded dataset to our saved file.
End of explanation
print("Setting up grid spec")
grid_dt = np.asarray(grid_time_delta_sec, dtype='m8[s]')
grid_t0 = np.asarray(dttuple[0]).astype('datetime64[ns]')
grid_t1 = np.asarray(dttuple[1]).astype('datetime64[ns]')
time_range = (grid_t0, grid_t1+grid_dt, grid_dt)
# Change the dictionaries below to a consistent set of coordinates
# and adjust grid_spatial_coords in the call to events_to_grid to
# change what is gridded (various time series of 1D, 2D, 3D grids)
if latlon_grid:
# Houston
# center = 29.7600000, -95.3700000
lat_range = (27.75, 31.75, 0.025)
lon_range = (-97.37, -93.37, 0.025)
alt_range = (0, 18e3, 1.0e3)
grid_edge_ranges ={
'grid_latitude_edge':lat_range,
'grid_longitude_edge':lon_range,
# 'grid_altitude_edge':alt_range,
'grid_time_edge':time_range,
}
grid_center_names ={
'grid_latitude_edge':'grid_latitude',
'grid_longitude_edge':'grid_longitude',
# 'grid_altitude_edge':'grid_altitude',
'grid_time_edge':'grid_time',
}
event_coord_names = {
'event_latitude':'grid_latitude_edge',
'event_longitude':'grid_longitude_edge',
# 'event_altitude':'grid_altitude_edge',
'event_time':'grid_time_edge',
}
flash_ctr_names = {
'flash_init_latitude':'grid_latitude_edge',
'flash_init_longitude':'grid_longitude_edge',
# 'flash_init_altitude':'grid_altitude_edge',
'flash_time_start':'grid_time_edge',
}
flash_init_names = {
'flash_center_latitude':'grid_latitude_edge',
'flash_center_longitude':'grid_longitude_edge',
# 'flash_center_altitude':'grid_altitude_edge',
'flash_time_start':'grid_time_edge',
}
else:
# Project lon, lat to SBU map projection
sbu_lla, sbu_map, x_edge, y_edge = get_coord_proj()
sbu_dx = x_edge[1] - x_edge[0]
sbu_dy = y_edge[1] - y_edge[0]
lma_sbu_xratio = resolution_m/sbu_dx
lma_sbu_yratio = resolution_m/sbu_dy
trnsf_to_map = proj4.Transformer.from_crs(sbu_lla, sbu_map)
trnsf_from_map = proj4.Transformer.from_crs(sbu_map, sbu_lla)
lmax, lmay = trnsf_to_map.transform(#sbu_lla, sbu_map,
ds.event_longitude.data,
ds.event_latitude.data)
lma_initx, lma_inity = trnsf_to_map.transform(#sbu_lla, sbu_map,
ds.flash_init_longitude.data,
ds.flash_init_latitude.data)
lma_ctrx, lma_ctry = trnsf_to_map.transform(#sbu_lla, sbu_map,
ds.flash_center_longitude.data,
ds.flash_center_latitude.data)
ds['event_x'] = xr.DataArray(lmax, dims='number_of_events')
ds['event_y'] = xr.DataArray(lmay, dims='number_of_events')
ds['flash_init_x'] = xr.DataArray(lma_initx, dims='number_of_flashes')
ds['flash_init_y'] = xr.DataArray(lma_inity, dims='number_of_flashes')
ds['flash_ctr_x'] = xr.DataArray(lma_ctrx, dims='number_of_flashes')
ds['flash_ctr_y'] = xr.DataArray(lma_ctry, dims='number_of_flashes')
grid_edge_ranges ={
'grid_x_edge':(x_edge[0],x_edge[-1]+.001,sbu_dx*lma_sbu_xratio),
'grid_y_edge':(y_edge[0],y_edge[-1]+.001,sbu_dy*lma_sbu_yratio),
# 'grid_altitude_edge':alt_range,
'grid_time_edge':time_range,
}
grid_center_names ={
'grid_x_edge':'grid_x',
'grid_y_edge':'grid_y',
# 'grid_altitude_edge':'grid_altitude',
'grid_time_edge':'grid_time',
}
event_coord_names = {
'event_x':'grid_x_edge',
'event_y':'grid_y_edge',
# 'event_altitude':'grid_altitude_edge',
'event_time':'grid_time_edge',
}
flash_ctr_names = {
'flash_init_x':'grid_x_edge',
'flash_init_y':'grid_y_edge',
# 'flash_init_altitude':'grid_altitude_edge',
'flash_time_start':'grid_time_edge',
}
flash_init_names = {
'flash_ctr_x':'grid_x_edge',
'flash_ctr_y':'grid_y_edge',
# 'flash_center_altitude':'grid_altitude_edge',
'flash_time_start':'grid_time_edge',
}
print("Creating regular grid")
grid_ds = create_regular_grid(grid_edge_ranges, grid_center_names)
if latlon_grid:
pass
else:
ctrx, ctry = np.meshgrid(grid_ds.grid_x, grid_ds.grid_y)
hlon, hlat = trnsf_from_map.transform(ctrx, ctry)
# Add lon lat to the dataset, too.
ds['lon'] = xr.DataArray(hlon, dims=['grid_y', 'grid_x'],
attrs={'standard_name':'longitude'})
ds['lat'] = xr.DataArray(hlat, dims=['grid_y', 'grid_x'],
attrs={'standard_name':'latitude'})
print("Finding grid position for flashes")
pixel_id_var = 'event_pixel_id'
ds_ev = assign_regular_bins(grid_ds, ds, event_coord_names,
pixel_id_var=pixel_id_var, append_indices=True)
# ds_flctr = assign_regular_bins(grid_ds, ds, flash_ctr_names,
# pixel_id_var='flash_ctr_pixel_id', append_indices=True)
# flctr_gb = ds.groupby('flash_ctr_pixel_id')
# ds_flini = assign_regular_bins(grid_ds, ds, flash_init_names,
# pixel_id_var='flash_init_pixel_id', append_indices=True)
# flini_gb = ds.groupby('flash_init_pixel_id')
# print('===== ev_gb')
# for event_pixel_id, dsegb in ev_gb:
# print(dsegb)
# break
# print('===== flctr_gb')
# for event_pixel_id, dsfgb in flctr_gb:
# print(dsfgb)
# break
print("Gridding data")
if latlon_grid:
grid_spatial_coords=['grid_time', None, 'grid_latitude', 'grid_longitude']
event_spatial_vars = ('event_altitude', 'event_latitude', 'event_longitude')
else:
grid_spatial_coords=['grid_time', None, 'grid_y', 'grid_x']
event_spatial_vars = ('event_altitude', 'event_y', 'event_x')
# print(ds_ev)
# print(grid_ds)
grid_ds = events_to_grid(ds_ev, grid_ds, min_points_per_flash=3,
pixel_id_var=pixel_id_var,
event_spatial_vars=event_spatial_vars,
grid_spatial_coords=grid_spatial_coords)
# Let's combine the flash and event data with the gridded data into one giant data structure.
both_ds = xr.combine_by_coords((grid_ds, ds))
print(both_ds)
Explanation: Above, we set latlon_grid = False; instead, let's use the 500 m stereographic grid defined by Stony Brook University's cell-tracking team. It's defined in lmalibtracer and was imported above. The definition is not too complicated - it only requires specifying the radius of a spherical earth and a coordinate center location. All the other coordinate transformations are handled for us by pyproj.
The next block of code is a bit long, mainly because we have repeated the blocks for defining the grid two ways. Note that we could uncomment a few lines and create a 3D grid, too. But it's useful to have flash extent density as a 2D instead of 3D analysis for quick-look visualizations, and it's also sufficient for most of our science.
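For readers without lmalibtracer, a rough sketch of how such a projection pair could be built directly with pyproj is shown below; the earth radius, center location, and grid extent here are assumptions and will differ from the actual get_sbu_proj definition.
R_earth = 6371.0e3                              # assumed spherical earth radius
ctr_lat, ctr_lon = 29.76, -95.37                # assumed grid center (Houston)
lla_crs = proj4.CRS(proj='latlong', R=R_earth)
map_crs = proj4.CRS(proj='stere', lat_0=ctr_lat, lon_0=ctr_lon, R=R_earth)
x_edges = np.arange(-200e3, 200e3 + 1.0, 500.0) # 500 m spacing, assumed extent
y_edges = np.arange(-200e3, 200e3 + 1.0, 500.0)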
End of explanation
time_idx = 42
fig, ax = plt.subplots(1,1)
both_ds.flash_extent_density[time_idx, :, :].plot.imshow(ax=ax)
print(both_ds.grid_time_edge[time_idx:time_idx+2].data)
Explanation: Looking at the gridded data
Once the cells above have been run, a new data structure (both_ds) will have the events, flashes, and their gridded versions. Note the new dimensions for the center and edges of the grid boxes (grid_time: 24, grid_time_edge: 25, grid_x: 250, grid_x_edge: 251, grid_y: 250, grid_y_edge: 251), and the new variables like flash_extent_density with dimensions (grid_time, grid_y, grid_x).
Let's plot flash extent density for the 42nd time step!
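Instead of indexing by position, we could also select the frame nearest a wall-clock time (the timestamp below is just an example for this case):
fed_at = both_ds.flash_extent_density.sel(
    grid_time=np.datetime64('2013-06-19T20:42:00'), method='nearest')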
End of explanation
both_ds.dims['grid_time']
from ipywidgets import interact #, interactive, fixed, interact_manual
fig = plt.figure()
n_times = both_ds.dims['grid_time']
@interact(time_idx=(0, n_times-1))
def plot(time_idx=0):
fig.clear()
ax = fig.add_subplot(1,1,1)
both_ds.flash_extent_density[time_idx, :, :].plot.imshow(ax=ax, vmin=0, vmax=5)
Explanation: Add widget interactivity
It's a bit tedious to change the time index manually. Let's Interact! It's possible to build much nicer interfaces, but this is enough for a quick look.
End of explanation
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
from matplotlib.colors import LogNorm
both_ds.flash_extent_density.sum('grid_time').plot.imshow(ax=ax, norm=LogNorm(1, 300))
Explanation: Aggregating in time
With these low flash rates, it's hard to get a sense of everything in the dataset, so let's sum across the time dimension. (xarray also has a rolling window function.)
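For reference, the rolling-window variant mentioned above might look like this (5-frame window chosen arbitrarily):
fed_running = both_ds.flash_extent_density.rolling(grid_time=5, min_periods=1).sum()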
End of explanation
if True:
print("Writing data")
duration_sec = (dttuple[1]-dttuple[0]).total_seconds()
if latlon_grid:
date_fmt = "LYLOUT_%y%m%d_%H%M%S_{0:04d}_grid.nc".format(int(duration_sec))
else:
date_fmt = "LYLOUT_%y%m%d_%H%M%S_{0:04d}_map{1:d}m.nc".format(
int(duration_sec), resolution_m)
outfile = dttuple[0].strftime(date_fmt)
comp = dict(zlib=True, complevel=5)
encoding = {var: comp for var in both_ds.data_vars}
both_ds.to_netcdf(outfile, encoding=encoding)
Explanation: Finally, write the data.
Once we save the data to disk, we can reload the data and re-run any of the plots above without reprocessing everything. We'll make files like this from the post-processed LMA data for each day during ESCAPE/TRACER, and they will be one of our data deliverables to the NCAR EOL catalog, in accordance with the ESCAPE data management plan as proposed in the grant.
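Reloading later is then a one-liner; for example (using whatever file name was written above):
ds_reloaded = xr.open_dataset(outfile)
print(ds_reloaded.flash_extent_density.shape)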
End of explanation |
15,587 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numerical simulation example
Step1: The physical problem
We define a reference frame with its origin at the hole where the string passes through the plane, with the $\hat{z}$ coordinate pointing downward. With this, Newton's second law for each particle gives us
Step2: Everything looks great!!
How can we check whether this is actually working, though? So far we only know that the result looks reasonable, but eyeballing it is not a quantitative measure.
One option for checking that the algorithm behaves well (and that there are no numerical errors, and that we chose an appropriate integrator, careful with this one... looking at you, Runge-Kutta) is to check whether energy is conserved.
Remember that the kinetic energy of the system is $K = \frac{1}{2} m_1 \left|\vec{v}_1 \right|^2 + \frac{1}{2} m_2 \left|\vec{v}_2 \right|^2$, being careful about how each velocity is written, and that the potential energy of the system depends only on the height of the hanging ball.
Do we need to know the length $L$ of the string to check whether the total mechanical energy is conserved? (Spoiler
Step3: Notice how the different methods modify the $r(t)$ curve more and more as the integration steps go by. Your homework is to run the same code with the energy-conservation check.
Which one is best, why, and how to tell are questions you will have to ask yourselves and investigate if you ever work with this.
For example, you can look up "Symplectic Integrator" on Wikipedia and see what it is about.
Below we also leave you the simulation of the ball's trajectory
Step4: Remember that this animation will not stop on its own; we know that watching it puts you in a kind of mystical trance, but remember to stop it once enough time has passed
Interactive animation
Using ipywidgets we can add sliders to the animation to modify the values of the masses | Python Code:
import numpy as np
from scipy.integrate import odeint
from matplotlib import rc
import matplotlib.pyplot as plt
%matplotlib inline
rc("text", usetex=True)
rc("font", size=18)
rc("figure", figsize=(6,4))
rc("axes", grid=True)
Explanation: Numerical simulation example
End of explanation
# Constantes del problema:
M1 = 3
M2 = 3
g = 9.81
# Condiciones iniciales del problema:
r0 = 2
r_punto0 = 0
tita0 = 0
tita_punto0 = 1
C1 = (M2*g)/(M1+M2) # Define some useful constants
C2 = (M1)/(M1+M2)
cond_iniciales = [r0, r_punto0, tita0, tita_punto0]
def derivada(X, t, c1, c2): # this is the f in the system { x' = f(x,t) }
r, r_punto, tita, tita_punto = X
    deriv = [0, 0, 0, 0] # same as the column vector above, but flattened into a row
    deriv[0] = r_punto # derivative of r
    deriv[1] = -c1 + c2*r*(tita_punto)**2 # r double dot
    deriv[2] = tita_punto # derivative of theta
deriv[3] = -2*r_punto*tita_punto/r
return deriv
def resuelvo_sistema(m1, m2, tmax = 20):
t0 = 0
    c1 = (m2*g)/(m1+m2) # Define some useful constants
c2 = (m1)/(m1+m2)
t = np.arange(t0, tmax, 0.001)
    # here we could define our own integration algorithm
    # or use the ready-made one from scipy.
    # Careful, it's not perfect; sometimes it's better to write your own
out = odeint(derivada, cond_iniciales, t, args = (c1, c2,))
return [t, out.T]
t, (r, rp, tita, titap) = resuelvo_sistema(M1, M2, tmax=10)
plt.figure()
plt.plot(t, r/r0, 'r')
plt.ylabel(r"$r / r_0$")
plt.xlabel(r"tiempo")
# plt.savefig("directorio/r_vs_t.pdf", dpi=300)
plt.figure()
plt.plot(t, tita-tita0, 'b')
plt.ylabel(r"$\theta - \theta_0$")
plt.xlabel(r"tiempo")
# plt.savefig("directorio/tita_vs_t.pdf", dpi=300)
plt.figure()
plt.plot(r*np.cos(tita-tita0)/r0, r*np.sin(tita-tita0)/r0, 'g')
plt.ylabel(r"$r/r_0\ \sin\left(\theta - \theta_0\right)$")
plt.xlabel(r"$r/r_0\ \cos\left(\theta - \theta_0\right)$")
# plt.savefig("directorio/trayectoria.pdf", dpi=300)
Explanation: The physical problem
We define a reference frame with its origin at the hole where the string passes through the plane, with the $\hat{z}$ coordinate pointing downward. With this, Newton's second law for each particle gives:
$$
\begin{align}
\text{Mass 1)}\quad&\vec{F}_1 = m_1 \vec{a}_1 \\
&-T \hat{r} = m_1 \vec{a}_1 \\
&-T \hat{r} = m_1 \left\{ \left(\ddot{r} - r \dot{\theta}^2\right) \hat{r} + \left(r\ddot{\theta} + 2\dot{r}\dot{\theta}\right)\hat{\theta} \right\} \\
&\begin{cases}
\hat{r})\ - T = m_1\left( \ddot{r} - r\, \dot{\theta}^2\right)\\
\hat{\theta})\ 0 = m_1 \left(r \ddot{\theta} + 2 \dot{r}\dot{\theta}\right)\\
\end{cases}\\
\\
\text{Mass 2)}\quad&\vec{F}_2 = m_2 \vec{a}_2 \\
&-T \hat{z} + m_2 g \hat{z} = m_2 \ddot{z} \hat{z} \\
\implies & \boxed{T = m_2 \left( g - \ddot{z} \right)}\\
\end{align}
$$
Now, substituting this result for the tension (which is the same in both expressions) and noting that $\ddot{z} = -\ddot{r}$ because the rope is ideal and of constant length, we can rewrite the equations obtained for mass 1 as:
$$
\begin{cases}
\hat{r})\quad - m_2 \left( g + \ddot{r} \right) = m_1\left( \ddot{r} - r\, \dot{\theta}^2\right)\\
\\
\hat{\theta})\quad 0 = m_1 \left(r \ddot{\theta} + 2 \dot{r}\dot{\theta}\right)
\end{cases}
\implies
\begin{cases}
\hat{r})\quad \ddot{r} = \dfrac{- m_2 g + m_1 r \dot{\theta}^2}{m_1 + m_2}\\
\\
\hat{\theta})\quad \ddot{\theta} = -2 \dfrac{\dot{r}\dot{\theta}}{r}\\
\end{cases}
$$
The whole point of these methods is to find an expression of the form $x' = f(x,t)$, where $x$ is the solution we are looking for. Here, since we have a second-order system in two different variables ($r$ and $\theta$), we know that our solution will have to involve 4 components. It is just like the harmonic oscillator, where you have to specify the initial position and velocity to fully determine the system, except that here we have two components for $r$ and two for $\theta$.
We can then see that we will need a solution of the form:
$$\mathbf{X} = \begin{pmatrix} r \\ \dot{r}\\ \theta \\ \dot{\theta} \end{pmatrix} $$
And then
$$
\dot{\mathbf{X}} =
\begin{pmatrix} \dot{r} \\ \ddot{r}\\ \dot{\theta} \\ \ddot{\theta} \end{pmatrix} =
\begin{pmatrix} \dot{r} \\ \dfrac{-m_2 g + m_1 r \dot{\theta}^2}{m_1 + m_2} \\ \dot{\theta} \\ -2 \dfrac{\dot{r}\dot{\theta}}{r} \end{pmatrix} =
\mathbf{f}(\mathbf{X}, t)
$$
If you like, the evolution of the system can also be written in a neat way, which is nothing more than our dear Taylor expansion to linear order.
$$
\begin{align}
r(t+dt) &= r(t) + \dot{r}(t)\cdot dt \\
\dot{r}(t+dt) &= \dot{r}(t) + \ddot{r}(t)\cdot dt \\
\theta(t+dt) &= \theta(t) + \dot{\theta}(t)\cdot dt \\
\dot{\theta}(t+dt) &= \dot{\theta}(t) + \ddot{\theta}(t)\cdot dt
\end{align}
\implies
\begin{pmatrix}
r\\
\dot{r}\\
\theta\\
\dot{\theta}
\end{pmatrix}(t + dt) =
\begin{pmatrix}
r\\
\dot{r}\\
\theta\\
\dot{\theta}
\end{pmatrix}(t) +
\begin{pmatrix}
\dot{r}\\
\ddot{r}\\
\dot{\theta}\\
\ddot{\theta}
\end{pmatrix}(t) \cdot dt
$$
Here we have to remember that the computer cannot do continuous things, because that would mean infinitely many operations, so we absolutely must discretize time and the time step!
$$
\begin{pmatrix}
r\\
\dot{r}\\
\theta\\
\dot{\theta}
\end{pmatrix}_{i+1} =
\begin{pmatrix}
r\\
\dot{r}\\
\theta\\
\dot{\theta}
\end{pmatrix}_i +
\begin{pmatrix}
\dot{r}\\
\ddot{r}\\
\dot{\theta}\\
\ddot{\theta}
\end{pmatrix}_i \cdot dt
$$
If we then decide to call this column vector $\mathbf{X}$, the system can be written as:
$$
\mathbf{X}_{i+1} = \mathbf{X}_i + \dot{\mathbf{X}}_i\ dt
$$
where, again, $\dot{\mathbf{X}}$ is what was written above.
That is, to find any value we only need to know the previous vector and its derivative, and we already have the derivatives (that is all the physics work we did before)!!
---
However you think about it, hopefully you have understood that with the initial conditions and the differential equations we can already solve (also called integrate) the system.
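To make that last update rule concrete, here is a minimal hand-rolled explicit Euler integrator (a sketch: Euler is only first-order accurate, so it needs a small dt, and scipy's integrators remain the practical choice):
def euler_integrate(f, x0, t, args=()):
    x = np.zeros((len(t), len(x0)))
    x[0] = x0
    for i in range(len(t) - 1):
        dt = t[i+1] - t[i]
        x[i+1] = x[i] + np.asarray(f(x[i], t[i], *args)) * dt   # X_{i+1} = X_i + Xdot_i dt
    return x
# e.g.: sol = euler_integrate(derivada, cond_iniciales, np.arange(0, 10, 0.001), args=(C1, C2))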
End of explanation
from scipy.integrate import solve_ivp
def resuelvo_sistema(m1, m2, tmax = 20, metodo='RK45'):
t0 = 0
    c1 = (m2*g)/(m1+m2) # Define some useful constants
c2 = (m1)/(m1+m2)
t = np.arange(t0, tmax, 0.001)
    # here I use a lambda function, just so we can reuse
    # the same function we defined before. But since we are now
    # using a different integration routine (not odeint)
    # that expects the function in a different form: instead of
    # f(x, t) it wants f(t, x), so we simply have to swap
    # the parameters, nothing more...
deriv_bis = lambda t, x: derivada(x, t, c1, c2)
out = solve_ivp(fun=deriv_bis, t_span=(t0, tmax), y0=cond_iniciales,\
method=metodo, t_eval=t)
return out
# Here we build one array with the available methods and another with colors
all_metodos = ['RK45', 'RK23', 'Radau', 'BDF', 'LSODA']
all_colores = ['r', 'b', 'm', 'g', 'c']
# Here is the tidy way to loop over two arrays in parallel
for met, col in zip(all_metodos, all_colores):
result = resuelvo_sistema(M1, M2, tmax=30, metodo=met)
t = result.t
r, rp, tita, titap = result.y
plt.plot(t, r/r0, col, label=met)
plt.xlabel("tiempo")
plt.ylabel(r"$r / r_0$")
plt.legend(loc=3)
Explanation: Everything looks great!!
How can we check whether this is actually working, though? So far we only know that the result looks reasonable, but eyeballing it is not a quantitative measure.
One option for checking that the algorithm behaves well (and that there are no numerical errors, and that we chose an appropriate integrator, careful with this one... looking at you, Runge-Kutta) is to check whether energy is conserved.
Remember that the kinetic energy of the system is $K = \frac{1}{2} m_1 \left|\vec{v}_1 \right|^2 + \frac{1}{2} m_2 \left|\vec{v}_2 \right|^2$, being careful about how each velocity is written, and that the potential energy of the system depends only on the height of the hanging ball.
Do we need to know the length $L$ of the string to check whether the total mechanical energy is conserved? (Spoiler: No. But think about why.)
Verifying that is left as an exercise for you, and you can also experiment with different integration methods to see what happens with each one; below we leave you a little helper to try out.
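As a hedged sketch of that check (the signs are derived here, so double-check them): with $\hat{z}$ pointing down, the hanging mass sits at depth $L - r$, so its potential energy is $m_2 g r$ up to an additive constant involving $L$, which is why $L$ is not needed. Using the solution arrays t, r, rp, titap already computed above:
K = 0.5*M1*(rp**2 + (r*titap)**2) + 0.5*M2*rp**2   # |v1|^2 = rdot^2 + (r*thetadot)^2, |v2|^2 = rdot^2
E = K + M2*g*r                                     # total energy up to a constant
plt.figure()
plt.plot(t, (E - E[0])/abs(E[0]))
plt.xlabel("time")
plt.ylabel("relative energy drift")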
End of explanation
from matplotlib import animation
%matplotlib notebook
result = resuelvo_sistema(M1, M2, tmax=30, metodo='Radau')
t = result.t
r, rp, tita, titap = result.y
fig, ax = plt.subplots()
ax.set_xlim([-1, 1])
ax.set_ylim([-1, 1])
ax.plot(r*np.cos(tita)/r0, r*np.sin(tita)/r0, 'm', lw=0.2)
line, = ax.plot([], [], 'ko', ms=5)
N_SKIP = 50
N_FRAMES = int(len(r)/N_SKIP)
def animate(frame_no):
i = frame_no*N_SKIP
r_i = r[i]/r0
tita_i = tita[i]
line.set_data(r_i*np.cos(tita_i), r_i*np.sin(tita_i))
return line,
anim = animation.FuncAnimation(fig, animate, frames=N_FRAMES,
interval=50, blit=False)
Explanation: Notice how the different methods modify the $r(t)$ curve more and more as the integration steps go by. Your homework is to run the same code with the energy-conservation check.
Which one is best, why, and how to tell are questions you will have to ask yourselves and investigate if you ever work with this.
For example, you can look up "Symplectic Integrator" on Wikipedia and see what it is about.
Below we also leave you the simulation of the ball's trajectory
End of explanation
from ipywidgets import interactive, interact, FloatProgress
from IPython.display import clear_output, display
%matplotlib inline
@interact(m1=(0,5,0.5), m2=(0,5,0.5), tmax=(0.01,20,0.5)) # lets us change the equation parameters interactively
def resuelvo_sistema(m1, m2, tmax = 20):
t0 = 0
c1 = (m2*g)/(m1+m2) # Defino constantes utiles
c2 = (m1)/(m1+m2)
t = np.arange(t0, tmax, 0.05)
# out = odeint(derivada, cond_iniciales, t, args = (c1, c2,))
r, rp, tita, titap = odeint(derivada, cond_iniciales, t, args=(c1, c2,)).T
plt.xlim((-1,1))
plt.ylim((-1,1))
plt.plot(r*np.cos(tita)/r0, r*np.sin(tita)/r0,'b-')
# plt.xlabel("tiempo")
# plt.ylabel(r"$r / r_0$")
# plt.show()
Explanation: Remember that this animation will not stop on its own; we know that watching it puts you in a kind of mystical trance, but remember to stop it once enough time has passed
Interactive animation
Using ipywidgets we can add sliders to the animation to modify the values of the masses
End of explanation |
15,588 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 21: Network Analysis
Many data problems can be viewed in terms of a network, made up of nodes and the edges that connect them.
For example, on Facebook the users are the nodes and their friendships are the edges.
On the web, each web page is a node and the hyperlinks between pages are the edges.
Facebook friendships are mutual.
If I am friends with you, then you are necessarily friends with me.
In other words, in this case we say the edges are undirected.
Hyperlinks, on the other hand, are not.
My home page may contain a link to the home page of the National Assembly of Korea,
while the National Assembly's home page may have no link back to mine.
Because such networks have direction, they are called directed networks.
21.1 Betweenness centrality
In Chapter 1 we found the key, central people in the Datum network by counting each person's number of friends.
Here we will look at a few additional approaches.
Step1: The network represents the users and their friendships.
Step2: We also added a list of friends to each user's dict.
Step3: When we looked at degree centrality in Chapter 1, it was a little disappointing that the people we intuitively considered the main connectors were not the ones selected.
One alternative metric is betweenness centrality, which assigns large values to people who frequently appear on the shortest paths between pairs of other people.
Concretely, the betweenness centrality of node $i$ is computed as the fraction of shortest paths, over all pairs of other nodes $j,k$, that pass through $i$.
Given any two people, we need to find the shortest paths between them.
In this book we use the algorithm also known as breadth-first search, which is less efficient but much easier to understand.
Step4: And let's store the dicts generated for each node.
Step5: Now we are ready to compute betweenness centrality.
For each shortest path, we now add $1/n$ to the betweenness centrality of every node it contains.
Step7: There are no other users on the shortest path between users 0 and 9, so their betweenness centrality is 0.
In contrast, users 3, 4, and 5 sit on shortest paths very frequently and therefore have high betweenness centrality.
In general, the absolute value of a centrality score has little meaning on its own; only the relative values matter.
Another centrality measure we can look at is closeness centrality.
First we compute each user's farness, which is the sum of the lengths of the shortest paths from from_user to every other user.
Step8: Now closeness centrality is simple to compute.
Step11: The spread of the computed closeness centralities is even smaller, because even the nodes at the center of the network are far away from the nodes on the periphery.
As we have seen here, computing shortest paths is fairly involved. For that reason closeness centrality is not used very often on large networks.
The less intuitive but usually easier-to-compute eigenvector centrality is used more often.
21.2 Eigenvector centrality
Before we get into eigenvector centrality we first need to look at what an eigenvector is, and to know what an eigenvector is we first need to look at matrix operations.
21.2.1 Matrix operations
Step12: To find an eigenvector of the matrix A, pick an arbitrary vector $v$, apply matrix_operate to it, rescale the result to have magnitude 1, and repeat the process.
Step13: If we feed the returned guess through matrix_operate and rescale the result to a vector of magnitude 1, we get back the same vector. That is, the guess here is an eigenvector.
Not every real matrix has eigenvectors and eigenvalues. For example, for the following matrix, which rotates vectors 90 degrees clockwise, the only vector that is mapped to itself is the zero vector.
Step14: If we run the find_eigenvector(rotate) implemented above on this matrix, it will never finish.
On the other hand, even matrices that do have eigenvectors can sometimes fall into an infinite loop.
Step15: This matrix maps every vector [x, y] to [y, x]. Therefore [1, 1] is an eigenvector with eigenvalue 1.
However, if we start find_eigenvector from an arbitrary vector whose x and y values differ, it will just keep swapping x and y forever.
(Libraries like NumPy implement a variety of methods that handle even these cases.)
Despite these minor issues, if find_eigenvector does return a result, that result is indeed an eigenvector.
21.2.2 Centrality
How do eigenvectors help us understand the data network?
Before we get to that, let's first represent the network as an adjacency matrix, which has a 1 in entry (i, j) when users i and j are friends and a 0 when they are not.
Step16: Each user's eigenvector centrality is the entry corresponding to that user in the eigenvector found by find_eigenvector.
Step17: Users with many connections, and users connected to people who are themselves highly central, have high eigenvector centrality.
According to the results above, users 1 and 2 have the highest centrality, because they are each connected three times to other highly central people.
The farther users are from them, the lower their centrality becomes.
21.3 Directed graphs and PageRank
With Datum failing to attract much popularity, the VP of Revenue is considering a pivot from the friendship model to an endorsement model.
It turns out that people are not very interested in which data scientists are friends with one another, but headhunters are very interested in which data scientists are respected by other data scientists.
In this new model, relationships are not mutual; instead, an asymmetric relationship is represented as a (source, target) pair in which one person (source) endorses the skills of another awesome person (target).
Step18: Then we can collect data on the most-endorsed data scientists and sell it to headhunters.
Step19: In fact, a number like 'endorsement count' is very easy to game.
One of the simplest ways is to create several fake accounts and have them endorse your account.
Another way is to collude with your friends so that you all endorse one another. (Users 0, 1, and 2 quite possibly have such a relationship.)
A better metric would take into account 'who' is doing the endorsing.
An endorsement from a user who has received many endorsements should reasonably count for more than an endorsement from a user who has received few.
And this is, in fact, the basic philosophy behind the famous PageRank algorithm.
1. There is a total of 1.0 (or 100%) of PageRank in the network.
2. Initially, this PageRank is distributed evenly across all nodes.
3. At each step, most of the PageRank assigned to a node is distributed evenly among its outgoing links.
4. At each step, the PageRank remaining at each node is distributed evenly across all nodes. | Python Code:
from __future__ import division
import math, random, re
from collections import defaultdict, Counter, deque
from linear_algebra import dot, get_row, get_column, make_matrix, magnitude, scalar_multiply, shape, distance
from functools import partial
users = [
{ "id": 0, "name": "Hero" },
{ "id": 1, "name": "Dunn" },
{ "id": 2, "name": "Sue" },
{ "id": 3, "name": "Chi" },
{ "id": 4, "name": "Thor" },
{ "id": 5, "name": "Clive" },
{ "id": 6, "name": "Hicks" },
{ "id": 7, "name": "Devin" },
{ "id": 8, "name": "Kate" },
{ "id": 9, "name": "Klein" }
]
Explanation: Chapter 21: Network Analysis
Many data problems can be viewed in terms of a network, made up of nodes and the edges that connect them.
For example, on Facebook the users are the nodes and their friendships are the edges.
On the web, each web page is a node and the hyperlinks between pages are the edges.
Facebook friendships are mutual.
If I am friends with you, then you are necessarily friends with me.
In other words, in this case we say the edges are undirected.
Hyperlinks, on the other hand, are not.
My home page may contain a link to the home page of the National Assembly of Korea,
while the National Assembly's home page may have no link back to mine.
Because such networks have direction, they are called directed networks.
21.1 Betweenness centrality
In Chapter 1 we found the key, central people in the Datum network by counting each person's number of friends.
Here we will look at a few additional approaches.
End of explanation
friendships = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4),
(4, 5), (5, 6), (5, 7), (6, 8), (7, 8), (8, 9)]
Explanation: The network represents the users and their friendships.
End of explanation
# give each user a friends list
for user in users:
user["friends"] = []
# and populate it
for i, j in friendships:
# this works because users[i] is the user whose id is i
users[i]["friends"].append(users[j]) # add i as a friend of j
users[j]["friends"].append(users[i]) # add j as a friend of i
Explanation: We also added a list of friends to each user's dict.
End of explanation
#
# Betweenness Centrality
#
def shortest_paths_from(from_user):
    # a dict holding all shortest paths from this user to every other user
shortest_paths_to = { from_user["id"] : [[]] }
    # a queue of (previous user, next user) pairs we still need to check;
    # starts out with all (from_user, friend_of_from_user) pairs
frontier = deque((from_user, friend)
for friend in from_user["friends"])
    # keep going until the queue is empty
while frontier:
        prev_user, user = frontier.popleft() # take the first user
        user_id = user["id"]                 # off the queue
        # because of the way we add users to the queue,
        # we necessarily already know some shortest paths to prev_user
paths_to_prev = shortest_paths_to[prev_user["id"]]
paths_via_prev = [path + [user_id] for path in paths_to_prev]
        # it's possible we already know a shortest path to this user
old_paths_to_here = shortest_paths_to.get(user_id, [])
        # what's the shortest path to here that we've seen so far?
if old_paths_to_here:
min_path_length = len(old_paths_to_here[0])
else:
min_path_length = float('inf')
        # only keep paths that aren't too long and are actually new
new_paths_to_here = [path_via_prev
for path_via_prev in paths_via_prev
if len(path_via_prev) <= min_path_length
and path_via_prev not in old_paths_to_here]
shortest_paths_to[user_id] = old_paths_to_here + new_paths_to_here
        # add never-before-seen neighbors to the frontier
frontier.extend((user, friend)
for friend in user["friends"]
if friend["id"] not in shortest_paths_to)
return shortest_paths_to
Explanation: When we looked at degree centrality in Chapter 1, it was a little disappointing that the people we intuitively considered the main connectors were not the ones selected.
One alternative metric is betweenness centrality, which assigns large values to people who frequently appear on the shortest paths between pairs of other people.
Concretely, the betweenness centrality of node $i$ is computed as the fraction of shortest paths, over all pairs of other nodes $j,k$, that pass through $i$.
Given any two people, we need to find the shortest paths between them.
In this book we use the algorithm also known as breadth-first search, which is less efficient but much easier to understand.
End of explanation
for user in users:
user["shortest_paths"] = shortest_paths_from(user)
Explanation: And let's store the dicts generated for each node.
End of explanation
for user in users:
user["betweenness_centrality"] = 0.0
for source in users:
source_id = source["id"]
    for target_id, paths in source["shortest_paths"].items(): # in Python 2, use iteritems instead of items
        if source_id < target_id:   # don't accidentally count the pair twice
            num_paths = len(paths)  # how many shortest paths are there?
            contrib = 1 / num_paths # each path's contribution to centrality
for path in paths:
for id in path:
if id not in [source_id, target_id]:
users[id]["betweenness_centrality"] += contrib
for user in users:
print(user["id"], user["betweenness_centrality"])
Explanation: Now we are ready to compute betweenness centrality.
For each shortest path, we now add $1/n$ to the betweenness centrality of every node it contains.
End of explanation
#
# closeness centrality
#
def farness(user):
    """the sum of the shortest-path lengths to every other user"""
return sum(len(paths[0])
for paths in user["shortest_paths"].values())
Explanation: There are no other users on the shortest path between users 0 and 9, so their betweenness centrality is 0.
In contrast, users 3, 4, and 5 sit on shortest paths very frequently and therefore have high betweenness centrality.
In general, the absolute value of a centrality score has little meaning on its own; only the relative values matter.
Another centrality measure we can look at is closeness centrality.
First we compute each user's farness, which is the sum of the lengths of the shortest paths from from_user to every other user.
End of explanation
for user in users:
user["closeness_centrality"] = 1 / farness(user)
for user in users:
print(user["id"], user["closeness_centrality"])
Explanation: Now closeness centrality is simple to compute.
End of explanation
def matrix_product_entry(A, B, i, j):
return dot(get_row(A, i), get_column(B, j))
def matrix_multiply(A, B):
n1, k1 = shape(A)
n2, k2 = shape(B)
if k1 != n2:
raise ArithmeticError("incompatible shapes!")
return make_matrix(n1, k2, partial(matrix_product_entry, A, B))
def vector_as_matrix(v):
(list 형태의) 벡터 v를 n x 1 행렬로 변환
return [[v_i] for v_i in v]
def vector_from_matrix(v_as_matrix):
n x 1 행렬을 리스트로 변환
return [row[0] for row in v_as_matrix]
def matrix_operate(A, v):
v_as_matrix = vector_as_matrix(v)
product = matrix_multiply(A, v_as_matrix)
return vector_from_matrix(product)
Explanation: The spread of the computed closeness centralities is even smaller, because even the nodes at the center of the network are far away from the nodes on the periphery.
As we have seen here, computing shortest paths is fairly involved. For that reason closeness centrality is not used very often on large networks.
The less intuitive but usually easier-to-compute eigenvector centrality is used more often.
21.2 Eigenvector centrality
Before we get into eigenvector centrality we first need to look at what an eigenvector is, and to know what an eigenvector is we first need to look at matrix operations.
21.2.1 Matrix operations
End of explanation
def find_eigenvector(A, tolerance=0.00001):
guess = [1 for __ in A]
while True:
result = matrix_operate(A, guess)
length = magnitude(result)
next_guess = scalar_multiply(1/length, result)
if distance(guess, next_guess) < tolerance:
return next_guess, length # eigenvector, eigenvalue
guess = next_guess
Explanation: To find an eigenvector of the matrix A, pick an arbitrary vector $v$, apply matrix_operate to it, rescale the result to have magnitude 1, and repeat the process.
End of explanation
rotate = [[0, 1],
[-1, 0]]
Explanation: If we feed the returned guess through matrix_operate and rescale the result to a vector of magnitude 1, we get back the same vector. That is, the guess here is an eigenvector.
Not every real matrix has eigenvectors and eigenvalues. For example, for the following matrix, which rotates vectors 90 degrees clockwise, the only vector that is mapped to itself is the zero vector.
End of explanation
flip = [[0, 1],
[1, 0]]
Explanation: If we run the find_eigenvector(rotate) implemented above on this matrix, it will never finish.
On the other hand, even matrices that do have eigenvectors can sometimes fall into an infinite loop.
End of explanation
#
# eigenvector centrality
#
def entry_fn(i, j):
return 1 if (i, j) in friendships or (j, i) in friendships else 0
n = len(users)
adjacency_matrix = make_matrix(n, n, entry_fn)
adjacency_matrix
Explanation: This matrix maps every vector [x, y] to [y, x]. Therefore [1, 1] is an eigenvector with eigenvalue 1.
However, if we start find_eigenvector from an arbitrary vector whose x and y values differ, it will just keep swapping x and y forever.
(Libraries like NumPy implement a variety of methods that handle even these cases.)
Despite these minor issues, if find_eigenvector does return a result, that result is indeed an eigenvector.
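The robust alternative hinted at above, sketched with NumPy (assuming numpy is installed; eigenvalue ordering may vary, and complex values appear for matrices like rotate):
import numpy as np
eigenvalues, eigenvectors = np.linalg.eig(np.array(flip))
print(eigenvalues)         # 1 and -1 for the flip matrix
print(eigenvectors[:, 0])  # each column is the eigenvector for the matching eigenvalue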
21.2.2 Centrality
How do eigenvectors help us understand the data network?
Before we get to that, let's first represent the network as an adjacency matrix, which has a 1 in entry (i, j) when users i and j are friends and a 0 when they are not.
End of explanation
eigenvector_centralities, _ = find_eigenvector(adjacency_matrix)
for user_id, centrality in enumerate(eigenvector_centralities):
print(user_id, centrality)
Explanation: Each user's eigenvector centrality is the entry corresponding to that user in the eigenvector found by find_eigenvector.
End of explanation
#
# directed graphs
#
endorsements = [(0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1), (1, 3),
(2, 3), (3, 4), (5, 4), (5, 6), (7, 5), (6, 8), (8, 7), (8, 9)]
for user in users:
user["endorses"] = [] # add one list to track outgoing endorsements
user["endorsed_by"] = [] # and another to track endorsements
for source_id, target_id in endorsements:
users[source_id]["endorses"].append(users[target_id])
users[target_id]["endorsed_by"].append(users[source_id])
Explanation: Users with many connections, and users connected to people who are themselves highly central, have high eigenvector centrality.
According to the results above, users 1 and 2 have the highest centrality, because they are each connected three times to other highly central people.
The farther users are from them, the lower their centrality becomes.
21.3 Directed graphs and PageRank
With Datum failing to attract much popularity, the VP of Revenue is considering a pivot from the friendship model to an endorsement model.
It turns out that people are not very interested in which data scientists are friends with one another, but headhunters are very interested in which data scientists are respected by other data scientists.
In this new model, relationships are not mutual; instead, an asymmetric relationship is represented as a (source, target) pair in which one person (source) endorses the skills of another awesome person (target).
End of explanation
endorsements_by_id = [(user["id"], len(user["endorsed_by"]))
for user in users]
sorted(endorsements_by_id,
key=lambda x: x[1], # (user_id, num_endorsements)
reverse=True)
Explanation: Then we can collect data on the most-endorsed data scientists and sell it to headhunters.
End of explanation
def page_rank(users, damping = 0.85, num_iters = 100):
    # first distribute the PageRank evenly across all nodes
num_users = len(users)
pr = { user["id"] : 1 / num_users for user in users }
    # this is the small amount of PageRank
    # that every node receives at each step
base_pr = (1 - damping) / num_users
for __ in range(num_iters):
next_pr = { user["id"] : base_pr for user in users }
for user in users:
            # distribute this node's PageRank to its outgoing links
links_pr = pr[user["id"]] * damping
for endorsee in user["endorses"]:
next_pr[endorsee["id"]] += links_pr / len(user["endorses"])
pr = next_pr
return pr
for user_id, pr in page_rank(users).items():
print(user_id, pr)
Explanation: In practice, a number like "count of endorsements" is very easy to game.
One of the simplest tricks is to create a bunch of fake accounts and have them endorse your own account.
Another is to collude with your friends and endorse one another. (Users 0, 1 and 2 most likely have this kind of arrangement.)
A better metric takes into account who is doing the endorsing.
Endorsements from users who themselves receive many endorsements should reasonably count for more than endorsements from users who receive few.
This is, in essence, the philosophy behind the famous PageRank algorithm.
1. There is a total of 1.0 (or 100%) of PageRank in the network.
2. Initially, this PageRank is distributed evenly across all nodes.
3. At each step, most of each node's PageRank is distributed evenly among its outgoing links.
4. At each step, the PageRank remaining at each node is distributed evenly across all nodes.
End of explanation |
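A quick way to inspect the result is to look at the total mass and at who comes out on top; a small sketch, assuming users and page_rank are defined as above (note that in this simple implementation the PageRank that flows into users who endorse nobody is not redistributed, so the total can drift slightly below 1):
pr = page_rank(users)
print("total PageRank:", sum(pr.values()))
print("top user:", max(pr.items(), key=lambda pair: pair[1]))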
15,589 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
scikits-learn is a premier machine learning library for python, with a very easy to use API and great documentation.
Step1: Lets load up our trajectory. This is the trajectory that we generated in
the "Running a simulation in OpenMM and analyzing the results with mdtraj"
example.
Step2: Create a two component PCA model, and project our data down into this
reduced dimensional space. Using just the cartesian coordinates as
input to PCA, it's important to start with some kind of alignment.
Step3: Now we can plot the data on this projection.
Step4: Lets try cross-checking our result by using a different feature space that isn't sensitive to alignment, and instead to "featurize" our trajectory by computing the pairwise distance between every atom in each frame, and using that as our high dimensional input space for PCA. | Python Code:
%matplotlib inline
from __future__ import print_function
import mdtraj as md
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
Explanation: scikits-learn is a premier machine learning library for python, with a very easy to use API and great documentation.
End of explanation
traj = md.load('ala2.h5')
traj
Explanation: Lets load up our trajectory. This is the trajectory that we generated in
the "Running a simulation in OpenMM and analyzing the results with mdtraj"
example.
End of explanation
pca1 = PCA(n_components=2)
traj.superpose(traj, 0)
reduced_cartesian = pca1.fit_transform(traj.xyz.reshape(traj.n_frames, traj.n_atoms * 3))
print(reduced_cartesian.shape)
Explanation: Create a two component PCA model, and project our data down into this
reduced dimensional space. Using just the cartesian coordinates as
input to PCA, it's important to start with some kind of alignment.
End of explanation
plt.figure()
plt.scatter(reduced_cartesian[:, 0], reduced_cartesian[:,1], marker='x', c=traj.time)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.title('Cartesian coordinate PCA: alanine dipeptide')
cbar = plt.colorbar()
cbar.set_label('Time [ps]')
Explanation: Now we can plot the data on this projection.
End of explanation
pca2 = PCA(n_components=2)
from itertools import combinations
# this python function gives you all unique pairs of elements from a list
atom_pairs = list(combinations(range(traj.n_atoms), 2))
pairwise_distances = md.geometry.compute_distances(traj, atom_pairs)
print(pairwise_distances.shape)
reduced_distances = pca2.fit_transform(pairwise_distances)
plt.figure()
plt.scatter(reduced_distances[:, 0], reduced_distances[:,1], marker='x', c=traj.time)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.title('Pairwise distance PCA: alanine dipeptide')
cbar = plt.colorbar()
cbar.set_label('Time [ps]')
Explanation: Lets try cross-checking our result by using a different feature space that isn't sensitive to alignment, and instead to "featurize" our trajectory by computing the pairwise distance between every atom in each frame, and using that as our high dimensional input space for PCA.
End of explanation |
15,590 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inference
Step1: Now we set up and run a sampling routine using Monomial-Gamma HMC MCMC
Step2: Monomial-Gamma HMC on a time-series problem
We now try the same method on a time-series problem
Step3: The chains do not take long to reach equilibrium with this method.
Step4: Chains have converged!
Extract any divergent iterations -- looks fine as there were none. | Python Code:
import pints
import pints.toy
import numpy as np
import matplotlib.pyplot as plt
# Create log pdf
log_pdf = pints.toy.GaussianLogPDF([2, 4], [[1, 0], [0, 3]])
# Contour plot of pdf
levels = np.linspace(-3,12,20)
num_points = 100
x = np.linspace(-1, 5, num_points)
y = np.linspace(-0, 8, num_points)
X, Y = np.meshgrid(x, y)
Z = np.zeros(X.shape)
Z = np.exp([[log_pdf([i, j]) for i in x] for j in y])
plt.contour(X, Y, Z)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Explanation: Inference: Monomial-Gamma Hamiltonian Monte Carlo
This example shows you how to perform Bayesian inference on a Gaussian distribution and a time-series problem, using Monomial-Gamma HMC.
First, we create a simple normal distribution
End of explanation
# Choose starting points for 3 mcmc chains
xs = [
[2, 1],
[3, 3],
[5, 4],
]
# Create mcmc routine
sigma = [1, 1]
mcmc = pints.MCMCController(log_pdf, 3, xs, method=pints.MonomialGammaHamiltonianMCMC, sigma0=sigma)
# Add stopping criterion
mcmc.set_max_iterations(1000)
# Set up modest logging
mcmc.set_log_to_screen(True)
mcmc.set_log_interval(100)
# change 'a' parameter in kinetic energy function used by individual samplers
for sampler in mcmc.samplers():
sampler.set_a(0.5)
# Run!
print('Running...')
full_chains = mcmc.run()
print('Done!')
# Show traces and histograms
import pints.plot
pints.plot.trace(full_chains)
plt.show()
# Discard warm up
chains = full_chains[:, 200:]
# Check convergence and other properties of chains
results = pints.MCMCSummary(chains=chains, time=mcmc.time(), parameter_names=['mean_x', 'mean_y'])
print(results)
# Look at distribution in chain 0
pints.plot.pairwise(chains[0], kde=True)
plt.show()
# Check Kullback-Leibler divergence of chains
print(log_pdf.kl_divergence(chains[0]))
print(log_pdf.kl_divergence(chains[1]))
print(log_pdf.kl_divergence(chains[2]))
Explanation: Now we set up and run a sampling routine using Monomial-Gamma HMC MCMC
End of explanation
import pints
import pints.toy as toy
import pints.plot
import numpy as np
import matplotlib.pyplot as plt
# Load a forward model
model = toy.LogisticModel()
# Create some toy data
times = np.linspace(0, 1000, 50)
real_parameters = np.array([0.015, 500])
org_values = model.simulate(real_parameters, times)
# Add noise
np.random.seed(1)
noise = 10
values = org_values + np.random.normal(0, noise, org_values.shape)
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Create a log-likelihood function
log_likelihood = pints.GaussianKnownSigmaLogLikelihood(problem, noise)
# Create a uniform prior over the parameters
log_prior = pints.UniformLogPrior(
[0.01, 400],
[0.02, 600]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for 3 mcmc chains
xs = [
real_parameters * 1.01,
real_parameters * 0.9,
real_parameters * 1.1,
]
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, len(xs), xs, method=pints.MonomialGammaHamiltonianMCMC)
# Add stopping criterion
mcmc.set_max_iterations(1000)
# Set up modest logging
mcmc.set_log_to_screen(True)
mcmc.set_log_interval(100)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
Explanation: Monomial-Gamma HMC on a time-series problem
We now try the same method on a time-series problem
End of explanation
# Check convergence and other properties of chains
results = pints.MCMCSummary(chains=chains[:, 200:], time=mcmc.time(), parameter_names=['growth rate', 'capacity'])
print(results)
# Show traces and histograms
pints.plot.trace(chains)
plt.show()
Explanation: The chains do not take long to reach equilibrium with this method.
End of explanation
div_iterations = []
for sampler in mcmc.samplers():
div_iterations.append(sampler.divergent_iterations())
print("There were " + str(np.sum(div_iterations)) + " divergent iterations.")
Explanation: Chains have converged!
Extract any divergent iterations -- looks fine as there were none.
End of explanation |
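As a final look at the inferred parameters, one might also plot the joint posterior for the time-series problem, reusing the pairwise plot shown earlier for the Gaussian example (a sketch, assuming chains holds the samples from the run above):
pints.plot.pairwise(chains[0, 200:], kde=True)
plt.show()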
15,591 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute spatial resolution metrics to compare MEG with EEG+MEG
Compute peak localisation error and spatial deviation for the point-spread
functions of dSPM and MNE. Plot their distributions and difference of
distributions.
This example mimics some results from [1]_, namely Figure 3 (peak localisation
error for PSFs, L2-MNE vs dSPM) and Figure 4 (spatial deviation for PSFs,
L2-MNE vs dSPM). It shows that combining MEG with EEG reduces the
point-spread function and increases the spatial resolution of source imaging,
especially for deeper sources.
Step1: EEGMEG
Compute resolution matrices, localization error, and spatial deviations
for MNE
Step2: MEG
Do the same for MEG
Step3: Visualization
Look at peak localisation error (PLE) across the whole cortex for PSF
Step4: These plots show that with respect to peak localization error, adding EEG to
MEG does not bring much benefit. Next let's visualise spatial deviation (SD)
across the whole cortex for PSF | Python Code:
# Author: Olaf Hauk <olaf.hauk@mrc-cbu.cam.ac.uk>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.minimum_norm.resolution_matrix import make_inverse_resolution_matrix
from mne.minimum_norm.spatial_resolution import resolution_metrics
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects/'
fname_fwd_emeg = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fname_evo = data_path + '/MEG/sample/sample_audvis-ave.fif'
# read forward solution with EEG and MEG
forward_emeg = mne.read_forward_solution(fname_fwd_emeg)
# forward operator with fixed source orientations
forward_emeg = mne.convert_forward_solution(forward_emeg, surf_ori=True,
force_fixed=True)
# create a forward solution with MEG only
forward_meg = mne.pick_types_forward(forward_emeg, meg=True, eeg=False)
# noise covariance matrix
noise_cov = mne.read_cov(fname_cov)
# evoked data for info
evoked = mne.read_evokeds(fname_evo, 0)
# make inverse operator from forward solution for MEG and EEGMEG
inv_emeg = mne.minimum_norm.make_inverse_operator(
info=evoked.info, forward=forward_emeg, noise_cov=noise_cov, loose=0.,
depth=None)
inv_meg = mne.minimum_norm.make_inverse_operator(
info=evoked.info, forward=forward_meg, noise_cov=noise_cov, loose=0.,
depth=None)
# regularisation parameter
snr = 3.0
lambda2 = 1.0 / snr ** 2
Explanation: Compute spatial resolution metrics to compare MEG with EEG+MEG
Compute peak localisation error and spatial deviation for the point-spread
functions of dSPM and MNE. Plot their distributions and difference of
distributions.
This example mimics some results from [1]_, namely Figure 3 (peak localisation
error for PSFs, L2-MNE vs dSPM) and Figure 4 (spatial deviation for PSFs,
L2-MNE vs dSPM). It shows that combining MEG with EEG reduces the
point-spread function and increases the spatial resolution of source imaging,
especially for deeper sources.
End of explanation
rm_emeg = make_inverse_resolution_matrix(forward_emeg, inv_emeg,
method='MNE', lambda2=lambda2)
ple_psf_emeg = resolution_metrics(rm_emeg, inv_emeg['src'],
function='psf', metric='peak_err')
sd_psf_emeg = resolution_metrics(rm_emeg, inv_emeg['src'],
function='psf', metric='sd_ext')
del rm_emeg
Explanation: EEGMEG
Compute resolution matrices, localization error, and spatial deviations
for MNE:
End of explanation
rm_meg = make_inverse_resolution_matrix(forward_meg, inv_meg,
method='MNE', lambda2=lambda2)
ple_psf_meg = resolution_metrics(rm_meg, inv_meg['src'],
function='psf', metric='peak_err')
sd_psf_meg = resolution_metrics(rm_meg, inv_meg['src'],
function='psf', metric='sd_ext')
del rm_meg
Explanation: MEG
Do the same for MEG:
End of explanation
brain_ple_emeg = ple_psf_emeg.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=1,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_ple_emeg.add_text(0.1, 0.9, 'PLE PSF EMEG', 'title', font_size=16)
brain_ple_meg = ple_psf_meg.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=2,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_ple_meg.add_text(0.1, 0.9, 'PLE PSF MEG', 'title', font_size=16)
# Subtract the two distributions and plot this difference
diff_ple = ple_psf_emeg - ple_psf_meg
brain_ple_diff = diff_ple.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=3,
clim=dict(kind='value', pos_lims=(0., .5, 1.)),
smoothing_steps=20)
brain_ple_diff.add_text(0.1, 0.9, 'PLE EMEG-MEG', 'title', font_size=16)
Explanation: Visualization
Look at peak localisation error (PLE) across the whole cortex for PSF:
End of explanation
brain_sd_emeg = sd_psf_emeg.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=4,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_sd_emeg.add_text(0.1, 0.9, 'SD PSF EMEG', 'title', font_size=16)
brain_sd_meg = sd_psf_meg.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=5,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_sd_meg.add_text(0.1, 0.9, 'SD PSF MEG', 'title', font_size=16)
# Subtract the two distributions and plot this difference
diff_sd = sd_psf_emeg - sd_psf_meg
brain_sd_diff = diff_sd.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=6,
clim=dict(kind='value', pos_lims=(0., .5, 1.)),
smoothing_steps=20)
brain_sd_diff.add_text(0.1, 0.9, 'SD EMEG-MEG', 'title', font_size=16)
Explanation: These plots show that with respect to peak localization error, adding EEG to
MEG does not bring much benefit. Next let's visualise spatial deviation (SD)
across the whole cortex for PSF:
End of explanation |
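To put a single number on the improvement, one could also summarise the difference maps directly; a sketch, assuming the source estimates from above (their .data attributes hold one value per source location, in the same units as the plots):
import numpy as np
print("mean PLE improvement:", np.mean(ple_psf_meg.data - ple_psf_emeg.data))
print("mean SD improvement:", np.mean(sd_psf_meg.data - sd_psf_emeg.data))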
15,592 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Listen</h1>
<li>Listen sind eine sequentielle, geordnete Sammlung von Werten, Zahlen oder strg oder boolean oder hashes etc.
['spass',[1,2,4], 3.14, [{1],[2],[3]] in eckigen Klammern
<h2>Listen erzeugen</h2>
Step1: <h3>Hinzufügen von Objekten in Listen</h3>
Step2: <h3>Index und Teilstücke</h3>
Step3: <h3>Entfernen von Objekten aus Listen</h3>
Step4: <h3>Anything you want to remove must be in the list or the location must be inside the list</h3>
Step5: <h2>Listen sind veränderbar [mutable]</h2>
Step6: <h1>Iteration</h1>
<h2>Range iteration</h2>
Step7: <h3>List element iteration</h3>
Step8: <h3>Practice problem</h3>
Write a function search_list that searches a list of tuple pairs and returns the value associated with the first element of the pair
Step9: <h3>Hashes in Listen ablegen</h3>
Step10: <h1>Dictionaries</h1>
<li>d={}
<li>d.values()
<li>d.keys()
<li>d.items()
<li>d.clear()
<li>d.copy()
<li>d.get(k,x)
<li>k in d
<li>d.setdefault(k[ ,x])
<li>d1.update(d2)
Step11: <h3>Beispiel
Step12: <h3>Beispiel
Step13: <h2>Beispiel Studenten - mit dictionary</h2>
Step14: <h2>Schrittweiser Aufbau eines Studentenverezichnisses</h2>
Step15: <h2>Ein Dictionary aus anderen zusammensetzen
<li>d2.update(d1)
Step16: <h2>Datenzugriff in einem dictionary
Step17: <h1>Vokabeltrainer entwickeln | Python Code:
x = [4,2,6,3] # creates a list with values
x1 = [4,2,6,3] # creates a second list with the same values
y = list() # creates an empty list
y = [] # also creates an empty list
z = ["11","22","33","a","b","c","d"] # creates a list of string values
print(x)
print(id(x))
print(x1)
print(id(x1))
print(y)
print(id(y))
print(z)
print(id(z))
Explanation: <h1>Lists</h1>
<li>Lists are a sequential, ordered collection of values: numbers, strings, booleans, nested lists, hashes, etc., written in square brackets, e.g. ['spass', [1, 2, 4], 3.14]
<h2>Creating lists</h2>
End of explanation
x=list()
print(x)
x.append('One') #Adds 'One' to the back of the empty list
print(x)
x.append('Two') #Adds 'Two' to the back of the list ['One']
print(x)
x.insert(0,'Half') #Inserts 'Half' at location 0. Items will shift to make room
print(x)
x=list()
x.extend([1,2,3]) #Unpacks the list and adds each item to the back of the list
print(x)
Explanation: <h3>Adding objects to lists</h3>
End of explanation
x=[1,7,2,5,3,5,67,32]
print(len(x))
print(x[3])
print(x[2:5])
print(x[-1])
print(x[::-1])
Explanation: <h3>Indexing and slicing</h3>
End of explanation
x=[1,7,2,5,3,5,67,32]
x.pop() # Removes the last element from the list
print(x)
x.pop(3) #Removes element at item 3 from a list
print(x)
x.remove(7) #Removes the first 7 from the list
print(x)
Explanation: <h3>Removing objects from lists</h3>
End of explanation
x.remove(20)
Explanation: <h3>Anything you want to remove must be in the list or the location must be inside the list</h3>
End of explanation
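A defensive pattern for this, sketched here for illustration, is to check membership first or to catch the ValueError:
x = [1, 7, 2, 5, 3]
if 20 in x:
    x.remove(20)
try:
    x.remove(20)
except ValueError:
    print("20 is not in the list")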
y=['a','b']
x = [1,y,3]
print(x)
print(y)
y[1] = 4
print(y)
print(x)
x="Hello"
print(x,id(x))
x+=" You!"
print(x,id(x)) #x is not the same object it was
y=["Hello"]
print(y,id(y))
y+=["You!"]
print(y,id(y)) #y is still the same object. Lists are mutable. Strings are immutable
def eggs(item,total=0):
total+=item
return total
def spam(elem,some_list=[]):
some_list.append(elem)
return some_list
print(eggs(1))
print(eggs(2))
print(spam(1))
print(spam(2))
Explanation: <h2>Lists are mutable</h2>
End of explanation
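The surprising spam output above comes from the mutable default argument; a common fix, shown here as a sketch, is to default to None and create a fresh list inside the function:
def spam(elem, some_list=None):
    if some_list is None:
        some_list = []
    some_list.append(elem)
    return some_list

print(spam(1))  # [1]
print(spam(2))  # [2] - no longer shares state between calls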
#The for loop creates a new variable (e.g., index below)
#range(len(x)) generates values from 0 to len(x)
x=[1,7,2,5,3,5,67,32]
for index in range(len(x)):
print(x[index])
list(range(len(x)))
Explanation: <h1>Iteration</h1>
<h2>Range iteration</h2>
End of explanation
x=[1,7,2,5,3,5,67,32]
for element in x: #The for draws elements - sequentially - from the list x and uses the variable "element" to store values
print(element)
Explanation: <h3>List element iteration</h3>
End of explanation
def search_list(list_of_tuples, value):
    # return the value paired with the first element that matches
    for element in list_of_tuples:
        if element[0] == value:
            return element[1]
    return None
#return(list_of_tuples[index])
#Write the function here
prices = [('AAPL',96.43),('IONS',39.28),('GS',159.53),('AA',160.45)]
ticker = 'AA'
print(search_list(prices,ticker))
Explanation: <h3>Practice problem</h3>
Write a function search_list that searches a list of tuple pairs and returns the value associated with the first element of the pair
End of explanation
import hashlib
m=list()
x=[1,7,2,5,3,5,67,32,32,1,10,11,12,13,14,15,16] # lists may contain duplicate values
for element in x: #The for draws elements - sequentially - from the list x and uses the variable "element" to store values
    y = str(element)                 # assign the string representation to y
    z = hashlib.sha256(y.encode())   # compute the sha256 hash (sha256 expects bytes, hence encode())
    print(z.hexdigest())             # print the digest in hexadecimal
    m.append(z.hexdigest())          # append the digest to the list m
print("We stored the hashes in the list m:")
print(m)                             # print the list
Explanation: <h3>Storing hashes in lists</h3>
End of explanation
mktcaps = {'AAPL':538.7,'GOOG':68.7,'IONS':4.6}# Dictionary wird initialisiert
print(type(mktcaps))
print(mktcaps)
print(mktcaps.values())
print(mktcaps.keys())
print(mktcaps.items())
c=mktcaps.items()
print(list(c)[0])
mktcaps['AAPL'] # Returns the value associated with the key "AAPL"
mktcaps['GS'] #Error because GS is not in mktcaps
mktcaps.get('GS') #Returns None because GS is not in mktcaps
mktcaps['GS'] = 88.65 # Adds GS to the dictionary
print(mktcaps)
del(mktcaps['GOOG']) #Removes GOOG from mktcaps
print(mktcaps)
mktcaps.keys() #Returns all the keys
mktcaps.values() #Returns all the values
import hashlib
l=('AAA','BBB','CCC','DDD','EEE')
print(l)
print(len(l))
hshdict = {'AAA': hashlib.sha256('AAA'.encode())}
hshdict.values()
v=hshdict['AAA']
m=v.hexdigest()
print(m)
Explanation: <h1>Dictionaries</h1>
<li>d={}
<li>d.values()
<li>d.keys()
<li>d.items()
<li>d.clear()
<li>d.copy()
<li>d.get(k,x)
<li>k in d
<li>d.setdefault(k[ ,x])
<li>d1.update(d2)
End of explanation
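A few of the listed methods are not exercised above; a short sketch of get, setdefault and update for reference:
d = {'a': 1}
print(d.get('b', 0))   # 0 - returns a default instead of raising KeyError
d.setdefault('b', 2)   # inserts 'b': 2 only if 'b' is missing
d.update({'c': 3})     # merge another dictionary in
print(d)               # {'a': 1, 'b': 2, 'c': 3}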
alter = {'Peter':45,'Julia':23,'Mathias':36} # create a dictionary
print(alter)
alter['Julia']=27 # change Julia's age
alter['Monika']=33 # add Monika - the order of the keys does not matter
print(alter)
if 'Monika' in alter:
print (alter['Monika'])
Explanation: <h3>Example: ages</h3>
End of explanation
temperatur={'stuttgart':32.9,'muenchen':29.8,'hamburg':24.4}# Erzeugen eines dictionaries mit Temperaturen in verschiedenen Städten
temperatur['koeln']=29.7 #hinzufuegen der temperatur in koeln
print(temperatur) #ausgabe der temperaturen
for stadt in temperatur:
print('Die Temperatur in %s ist %g °C' % (stadt,temperatur[stadt]))
if 'Berlin' in temperatur:
print ('Berlin:', temperatur['Berlin'])
else:
print ('Keine Daten für Berlin gefunden')
'stuttgart' in temperatur #überprüfen ob Schlüssel in temperatur enthalten ist
temperatur.keys() #Ausgabe der Schlüssel im Dictionary
temperatur.values()#ausgabe der Werte im Dictionary
for stadt in sorted(temperatur):
print(stadt)
temperatur_kopie=temperatur.copy() #erstellt eine KOpie des dictonaries
print (temperatur_kopie)
temperatur2={'stuttgart':22.9,'muenchen':23.8,'hamburg':21.4} #ein 2-tes dictionary
temperatur.update(temperatur2)
for stadt in temperatur:
print('Die Temperatur in %s ist %g °C' % (stadt,temperatur[stadt]))
print('Anzahl enthaltene Staedte: %g'% len(temperatur))
temperatur2={'stuttgart':22.9,'muenchen':23.8,'hamburg':21.4,'koeln':18.6,'frankfurt':20.6, 'weimar':18.8} #ein 2-tes dictionary
temperatur.update(temperatur2)
for stadt in temperatur:
print('Die Temperatur in %s ist %g °C' % (stadt,temperatur[stadt]))
print('Anzahl enthaltene Staedte: %g'% len(temperatur))
Explanation: <h3>Example: temperatures in cities</h3>
End of explanation
st={}#Erzeugen des leeren dictionarys
st['100100'] = {'Mathe':1.0, 'Bwl':2.5}
st['100200'] = {'Mathe':2.3, 'Bwl':1.8}
print(st.items())
print(type(st))
print(st.values())
print(st.keys())
for k in st.keys():
    print(st[k])
Explanation: <h2>Example: students - with a dictionary</h2>
End of explanation
def stud_verz():
stud={}#erzeugen eines leeren dictionaries
student=input('Matrikel-Nr als string eingeben:')
while student:
Mathe = input('Mathe Note eingeben:')
Bwl = input('Bwl Note eingeben:')
stud[student]={"Mathematik":Mathe,"BWL":Bwl}
student=input('Matrikel-Nr als string eingeben:')
return stud
print (stud_verz())
Explanation: <h2>Building up a student directory step by step</h2>
End of explanation
d1={'hans':1.8,'peter':1.73,'rainer':1.74}
d2={'petra':1.8,'hannes':1.73,'rainer':1.78}
d1.update(d2)
print(d1)
Explanation: <h2>Merging one dictionary into another
<li>d2.update(d1)
End of explanation
deutsch = {'key':['Schluessel','Taste'],'slice':['Scheibe','Schnitte','Stueck'],'value':['Wert']}
print(deutsch)
######Abfangen von Abfragefehlern
def uebersetze(wort,d):
if wort in d:
return d[wort]
else:
return 'unbekannt'
print(uebersetze('slice',deutsch))
uebersetze('search',deutsch)
Explanation: <h2>Looking up data in a dictionary
End of explanation
#Vokabeltrainer entwickeln
import random
#Definition der Funktionen
def dict_laden(pfad):
d={}
try:
datei = open(pfad)
liste = datei.readlines()
for eintrag in liste:
l_eintrag = eintrag.split()
d[l_eintrag[0]]=l_eintrag[1:]
datei.close()
except:
pass
return d
#def aufgabe(d):
zufall = random.randint(0, len(d.keys())-1)
vokabel = list(d.keys())[zufall]
#print(vokabel +'?')
#Datei liegt auf dem Pfad
#c:\\Benutzer\\ramon\\Dokumente\\Python Scripts\\python-edx-07-07-17\\woerterbuch.txt'
#woerterbuch liste von einträgen mit leerzeichen getrennt
d={}
datei=open('woerterbuch.txt')
liste = datei.readlines()
print(liste)
for eintrag in liste:
l_eintrag = eintrag.split()#trennung an leerzeichen
#print(l_eintrag[0])
#print(l_eintrag[1])
d[l_eintrag[0]]=l_eintrag[1:]
datei.close()
print(d)
zufall = random.randint(0, len(d.keys())-1)
vokabel = list(d.keys())[zufall]
print(vokabel+' ?')
antwort=input()
Explanation: <h1>Developing a vocabulary trainer
End of explanation |
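The trainer above stops right after reading the user's answer; a possible next step, sketched here, is to compare the answer against the stored translations (d and vokabel are the dictionary and the queried word from above):
if antwort in d.get(vokabel, []):
    print("Correct!")
else:
    print("Wrong - possible translations:", d.get(vokabel, []))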
15,593 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Trends
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie
Notebook released under the Creative Commons Attribution 4.0 License.
Trends estimate tendencies in data over time, such as overall rising or falling amid noise. They use only historical data and not any knowledge about the processes generating them.
Linear trend models
A linear trend model assumes that the variable changes at a constant rate with time, and attempts to find a line of best fit. We want to find coefficients $b_0$, $b_1$ such that the series $y_t$ satisfies
$$ y_t = b_0 + b_1t + \epsilon_t $$
and so that the sum of the squares of the errors $\epsilon_t$ is minimized. This can be done using a linear regression. After we have fitted a linear model to our data, we predict the value of the variable to be $y_t = b_0 + b_1 t$ for future time periods $t$. We can also use these parameters to compare the rates of growth or decay of two data series.
Let's find a linear trend model for the price of XLY, an ETF for consumer goods.
Step1: The summary returned by the regression tells us the slope and intercept of the line, as well as giving us some information about how statistically valid the fit is. Note that the Durbin-Watson statistic is very low here, suggeesting that the errors are correlated. The price of this fund is generally increasing, but because of the variance in the data, the line of best fit changes significantly depending on the sample we take. Because small errors in our model magnify with time, its predictions far into the future may not be as good as the fit statistics would suggest. For instance, we can see what will happen if we find a model for the data through 2012 and use it to predict the data through 2014.
Step2: Of course, we can keep updating our model as we go along. Below we use all the previous prices to predict prices 30 days into the future.
Step3: Log-linear trend models
A log-linear trend model attempts to fit an exponential curve to a data set
Step4: In some cases, however, a log-linear model clearly fits the data better. | Python Code:
import numpy as np
import math
from statsmodels import regression
import statsmodels.api as sm
import matplotlib.pyplot as plt
start = '2010-01-01'
end = '2015-01-01'
asset = get_pricing('XLY', fields='price', start_date=start, end_date=end)
dates = asset.index
def linreg(X,Y):
# Running the linear regression
x = sm.add_constant(X)
model = regression.linear_model.OLS(Y, x).fit()
a = model.params[0]
b = model.params[1]
# Return summary of the regression and plot results
X2 = np.linspace(X.min(), X.max(), 100)
Y_hat = X2 * b + a
plt.plot(X2, Y_hat, 'r', alpha=0.9); # Add the regression line, colored in red
return model.summary()
_, ax = plt.subplots()
ax.plot(asset)
ticks = ax.get_xticks()
ax.set_xticklabels([dates[i].date() for i in ticks[:-1]]) # Label x-axis with dates
linreg(np.arange(len(asset)), asset)
Explanation: Trends
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie
Notebook released under the Creative Commons Attribution 4.0 License.
Trends estimate tendencies in data over time, such as overall rising or falling amid noise. They use only historical data and not any knowledge about the processes generating them.
Linear trend models
A linear trend model assumes that the variable changes at a constant rate with time, and attempts to find a line of best fit. We want to find coefficients $b_0$, $b_1$ such that the series $y_t$ satisfies
$$ y_t = b_0 + b_1t + \epsilon_t $$
and so that the sum of the squares of the errors $\epsilon_t$ is minimized. This can be done using a linear regression. After we have fitted a linear model to our data, we predict the value of the variable to be $y_t = b_0 + b_1 t$ for future time periods $t$. We can also use these parameters to compare the rates of growth or decay of two data series.
Let's find a linear trend model for the price of XLY, an ETF for consumer goods.
End of explanation
# Take only some of the data in order to see how predictive the model is
asset_short = get_pricing('XLY', fields='price', start_date=start, end_date='2013-01-01')
# Running the linear regression
x = sm.add_constant(np.arange(len(asset_short)))
model = regression.linear_model.OLS(asset_short, x).fit()
X2 = np.linspace(0, len(asset), 100)
Y_hat = X2 * model.params[1] + model.params[0]
# Plot the data for the full time range
_, ax = plt.subplots()
ax.plot(asset)
ticks = ax.get_xticks()
ax.set_xticklabels([dates[i].date() for i in ticks[:-1]]) # Label x-axis with dates
# Plot the regression line extended to the full time range
ax.plot(X2, Y_hat, 'r', alpha=0.9);
Explanation: The summary returned by the regression tells us the slope and intercept of the line, as well as giving us some information about how statistically valid the fit is. Note that the Durbin-Watson statistic is very low here, suggeesting that the errors are correlated. The price of this fund is generally increasing, but because of the variance in the data, the line of best fit changes significantly depending on the sample we take. Because small errors in our model magnify with time, its predictions far into the future may not be as good as the fit statistics would suggest. For instance, we can see what will happen if we find a model for the data through 2012 and use it to predict the data through 2014.
End of explanation
# Y_hat will be our predictions for the price
Y_hat = [0]*1100
# Start analysis from day 100 so that we have historical prices to work with
for i in range(100,1200):
temp = asset[:i]
x = sm.add_constant(np.arange(len(temp)))
model = regression.linear_model.OLS(temp, x).fit()
# Plug (i+30) into the linear model to get the predicted price 30 days from now
Y_hat[i-100] = (i+30) * model.params[1] + model.params[0]
_, ax = plt.subplots()
ax.plot(asset[130:1230]) # Plot the asset starting from the first day we have predictions for
ax.plot(range(len(Y_hat)), Y_hat, 'r', alpha=0.9)
ticks = ax.get_xticks()
ax.set_xticklabels([dates[i].date() for i in ticks[:-1]]) # Label x-axis with dates;
Explanation: Of course, we can keep updating our model as we go along. Below we use all the previous prices to predict prices 30 days into the future.
End of explanation
def loglinreg(X,Y):
# Running the linear regression on X, log(Y)
x = sm.add_constant(X)
model = regression.linear_model.OLS(np.log(Y), x).fit()
a = model.params[0]
b = model.params[1]
# Return summary of the regression and plot results
X2 = np.linspace(X.min(), X.max(), 100)
Y_hat = (math.e)**(X2 * b + a)
plt.plot(X2, Y_hat, 'r', alpha=0.9); # Add the regression curve, colored in red
return model.summary()
_, ax_log = plt.subplots()
ax_log.plot(asset)
ax_log.set_xticklabels([dates[i].date() for i in ticks[:-1]]) # Label x-axis with dates
loglinreg(np.arange(len(asset)), asset)
Explanation: Log-linear trend models
A log-linear trend model attempts to fit an exponential curve to a data set:
$$ y_t = e^{b_0 + b_1 t + \epsilon_t} $$
To find the coefficients, we can run a linear regression on the equation $ \ln y_t = b_0 + b_1 t + \epsilon_t $ with variables $t, \ln y_t$. (This is the reason for the name of the model — the equation is linear when we take the logarithm of both sides!)
If $b_1$ is very small, then a log-linear curve is approximately linear. For instance, we can find a log-linear model for our data from the previous example, with fit statistics approximately the same as for the linear model.
End of explanation
start2 = '2002-01-01'
end2 = '2012-06-01'
asset2 = get_pricing('AAPL', fields='price', start_date=start2, end_date=end2)
dates2 = asset2.index
_, ax2 = plt.subplots()
ax2.plot(asset2)
ticks2 = ax2.get_xticks()
ax2.set_xticklabels([dates2[i].date() for i in ticks2[:-1]]) # Label x-axis with dates
loglinreg(np.arange(len(asset2)), asset2)
Explanation: In some cases, however, a log-linear model clearly fits the data better.
End of explanation |
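To turn the fitted log-linear slope into something interpretable, a small sketch (refitting here on the XLY series because loglinreg only returns the regression summary): the coefficient $b_1$ implies a constant per-period growth rate of $e^{b_1} - 1$.
x_lin = sm.add_constant(np.arange(len(asset)))
log_model = regression.linear_model.OLS(np.log(asset), x_lin).fit()
b1 = log_model.params[1]
print("implied daily growth rate: {:.4%}".format(math.e**b1 - 1))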
15,594 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1T_os, shutil - managing files and folders with the os and shutil modules (1) - creating and removing folders
Revenue per Film - that one is hard; we will extract it later.
To store and manage the data we will use os and shutil, Python's built-in libraries.
We will save one file per country (korea.csv / japan.csv ...).
In 1T we will use the os module and shutil module to save, read, write and manage files, folders and compressed data, using Python only.
Step1: ipynb 라는 확장자로 끝나는 파일들만 가지고 오려면
Step2: 파일에 대한 경로를 생성할 때
현재 폴더 안에 있는, "data"라는 폴더의 "data.csv"의 경로
"data/data.csv"
"./data/data.csv" ( String 으로 입력할 때 이렇게 직접 ) // 무조건 이 방법으로
"/home/python/notebooks/dobestan/dss/....../data/data.csv" - 절대 경로 // 잘 안 씀
Step3: os.curdir #current directory
os.path.join(...)
os.listdir(...)
Step4: 폴더를 만들 때, os.listdir()로 특정 폴더가 있는지 확인한 후에, 만약 있으면 삭제하고 새로운 폴더를 생성한다.
폴더를 지울 때, 만약에
Step5: 설정 파일 같은 것을 수정하거나 삭제할 때
만약에 .bash_profile => .bash_profile.tmp / ... (복사해주고 작업을 한다.)
복구는 안 된다.
위와 같은 과정의 flow는 어려워
그래서 shutil이라는 파이썬 내장 모듈을 사용할 것임
Step6: os - low-level (저수준)으로 파일/폴더/운영체제를 관리했다면
shutil - high-level (고수준) 으로 파일/폴더를 관리
Step7: 1. 국가명.csv 파일로 만들기 => world.tar.gz (world.zip) 압축하기
2. 대륙명/국가명.csv 파일로 만들기 => 대륙명.tar.gz 압축하기
ex) Angola.csv -- 도시 정보 csv파일이 국가별로 있어야 합니다. ("data/world/____.csv" 이 200개 정도 있어야 함)
Step9: df.to_csv(os.path.join(, , ___.csv))
df.to_csv("./data/world/Angola.csv") | Python Code:
import os
#os 모듈을 통해서
#운영체제 레벨(서버는 ex.우분투)에서 다루는 파일 폴더 생성하고 삭제하기가 가능
#기존에는 ("../../~~") 이런 식으로 경로를 직접 입력 했으나
os.listdir()
#현재 폴더 안에 있는 파일들을 리스트로 뽑는 것
os.listdir("../")
for csv_file in os.listdir("../"):
pass
Explanation: 1T_os, shutil - managing files and folders with the os and shutil modules (1) - creating and removing folders
Revenue per Film - that one is hard; we will extract it later.
To store and manage the data we will use os and shutil, Python's built-in libraries.
We will save one file per country (korea.csv / japan.csv ...).
In 1T we will use the os module and shutil module to save, read, write and manage files, folders and compressed data, using Python only.
End of explanation
[
file_name
for file_name
in os.listdir("../01일차.수_입문/")
if file_name.endswith(".ipynb") # csv 파일 가져오기, 엑셀 파일 가져오기로 사용
]
Explanation: To fetch only the files whose names end with the .ipynb extension
End of explanation
os.path.join("data", "data.csv")
os.curdir
os.path.join(os.curdir, "data", "data.csv")
# 이렇게 하면 경로를 알려줘. 앞으로 만들 때는 무조건 이렇게 만들겠다.
# os.path.join(os.curdir, "data", file_name)
Explanation: When building the path to a file
the path of "data.csv" inside the "data" folder of the current directory can be written as:
"data/data.csv"
"./data/data.csv" (typed directly as a string) // always use this style
"/home/python/notebooks/dobestan/dss/....../data/data.csv" - absolute path // rarely used
End of explanation
os.makedirs("data") #잠재적인 문제가 있다.
os.listdir() #폴더 만들기는 쉽게 됩니다.
os.rmdir("data") #잠재적인 문제가 있다.
os.listdir()
Explanation: os.curdir #current directory
os.path.join(...)
os.listdir(...)
End of explanation
os.makedirs("data") # DATA라는 폴더 안에 간단한 텍스트 파일 만들기
os.listdir(os.path.join(os.curdir,"data"))
os.rmdir("data")
# 폴더 안에 파일이 있으면 삭제가 안 된다
# os.listdir()로 찾아본 다음에 폴더면 또 들어가서 다시 재귀적으로 찾아보고,
# 파일이면 삭제하고 상위폴더로 올라와서 그리고 rmdir() ...
Explanation: When creating a folder, first check with os.listdir() whether it already exists; if it does, delete it and then create a fresh one.
When deleting a folder, if ...
End of explanation
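A compact way to express that create-if-absent logic, sketched here for illustration, is to test for the folder directly instead of scanning the listing:
import shutil
data_dir = os.path.join(os.curdir, "data")
if os.path.exists(data_dir):
    shutil.rmtree(data_dir)   # remove the old folder and everything inside it
os.makedirs(data_dir)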
import shutil
Explanation: When modifying or deleting something like a configuration file,
e.g. .bash_profile => .bash_profile.tmp / ... (copy it first, then work on the copy).
Otherwise there is no way to recover it.
A workflow like the one above is cumbersome,
so we will use shutil, a built-in Python module, instead.
End of explanation
os.listdir(os.path.join(os.curdir, "data"))
shutil.rmtree(os.path.join(os.curdir, "data"))
os.listdir(os.path.join(os.curdir, "data"))
os.makedirs(os.path.join(os.curdir, "data"))
shutil.rmtree(os.path.join(os.curdir, "data"))
Explanation: os manages files/folders/the operating system at a low level,
while shutil manages files/folders at a high level.
End of explanation
os.makedirs(os.path.join(os.curdir, "data"))
os.makedirs(os.path.join(os.curdir, "data", "world"))
# 만약 "data", "world"라는 폴더가 있으면, 삭제하는 기능 ...
Explanation: 1. Create one <country name>.csv file per country => compress them into world.tar.gz (world.zip)
2. Create <continent>/<country name>.csv files => compress them into <continent>.tar.gz
e.g. Angola.csv -- there must be one city-information csv file per country (roughly 200 files under "data/world/____.csv")
End of explanation
# 폴더의 유무를 확인하고, 있으면 삭제한다.
if "data" in os.listdir():
print("./data/ 폴더를 삭제합니다.")
shutil.rmtree(os.path.join(os.curdir, "data"))
# "data"라는 폴더를 생성하고, 그 안에 "world"라는 폴더를 생성한다.
print("./data/ 폴더를 생성합니다.")
os.makedirs(os.path.join(os.curdir, "data"))
os.makedirs(os.path.join(os.curdir, "data", "world"))
import pymysql
db = pymysql.connect(
"db.fastcamp.us",
"root",
"dkstncks",
"world",
charset='utf8'
)
country_df = pd.read_sql("SELECT * FROM Country;", db)
city_df = pd.read_sql("SELECT * FROM City;", db)
#Country.Code를 바탕으로, City.CountryCode와 매칭해서 찾아야 함
#Country.Name은 반드시 가지고 와야지 파일명으로 저장이 가능
city_groups = city_df.groupby("CountryCode")
for index, row in country_df.iterrows():
country_code = row["Code"]
country_name = row["Name"]
city_df = city_groups.get_group(country_code)
    city_df.to_csv(os.path.join("data", "world", "{country_name}.csv".format(country_name=country_name)))
#"ATA"라는 애가 없다고 나오니까 테스트
SQL_QUERY = """
SELECT *
FROM City
WHERE CountryCode = "ATA"
;
"""
pd.read_sql(SQL_QUERY, db)
city_groups.get_group("ATA")
"ATA" in city_groups["CountryCode"].unique()
#없는게 증명 됐으니 if문 첨가
for index, row in country_df.iterrows():
country_code = row["Code"]
country_name = row["Name"]
if country_code in city_df["CountryCode"].unique():
one_city_df = city_groups.get_group(country_code)
one_city_df.to_csv(os.path.join(os.curdir, "data", "world", "{country_name}.csv".format(country_name=country_name)))
Explanation: df.to_csv(os.path.join(, , ___.csv))
df.to_csv("./data/world/Angola.csv")
End of explanation |
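The stated goal also includes compressing the per-country files into world.tar.gz; shutil can do that directly. A sketch, assuming the ./data/world/ folder was filled as above:
archive_path = shutil.make_archive("world", "gztar",
                                   root_dir=os.path.join(os.curdir, "data"),
                                   base_dir="world")
print(archive_path)  # ./world.tar.gz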
15,595 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Forecasting
Step1: Polish Weather Data
Polish Weather dataset contains 7 weather-related measurements in the Warsaw area, taken between 1999 and 2004. The readings are taken daily.
Importing the data
The data is provided in the file weatherdata.csv. Use read_csv() module from pandas to import the data.
Step2: Univariate time series forecasting
For the sake of simplicity, we will use only MaxTemp variable to perform time series forecasting.
Also, we are not interested in the exact time here, but the sequence itself. We ignore the "Date", still the indices of the dataframe preserves the order. So, the index values will be considered as time stamps. This will satisfy our requirement and simplify the problem.
For this exercise create a new dataframe from the "MaxTemp" column.
Step3: Visualization
A time series is always dependent on time. So, implicitly here we have two variables, time and the "MaxTemp". You can use DataFrame.plot.line() to create a line plot of the time series. Indices will serve as time stamps.
Step4: Handling time series with Python Pandas
Python Pandas library is an efficient tool to work with time series data. Please see this link for all the detailed functionalities.
Pandas shift() method
A key method to help transform time series data into a supervised learning problem is the Pandas DataFrame.shift() method. Given a DataFrame, the shift() method can be used to create copies of columns that are pushed forward or pulled back.
This is the behavior required to create columns of lag observations as well as columns of forecast observations for a time series dataset in a supervised learning format.
Notes
Step5: Checkpoint
Step6: Step 2
Step7: Observe how the NaN values are added.
Step 3
Step8: Using a loop
Above approach works for a small and known history and horizon. To make it automatic, use a for loop.
Notes
Step9: Handling NaN values
We will ignore the rows containing NaN values. Use DataFrame.dropna() method to do this.
You have to reset indices using DataFrame.reset_index() method. By setting the attribute drop True, it overwrites the old indices.
In both the above methods, by setting inplace to True, the index reset is done on the dataframe itself, without returning anything.
Step10: To numpy matrices
All machine learning algorithm implementations work efficiently with numpy matrices and arrays. The current format of our data is panda dataframes. Fortunately, pandas provides DataFrame.as_matrix() method to convert dataframes to numpy arrays.
Step11: Our goal is to model time series forecasting as a machine learning regression task. It is time to identify the inputs and outputs
Question
Step12: Putting all together
To organize everything together and to reuse later, define a method regression_matrix. This method should accept a dataframe having only one feature, history and horizon. It should then convert the dataframe into two regression matrices, input X and output y.
Step13: Univariate Time Series Forecasting with Linear Regression
Finally, we have all the modules necessary to train a regression model.
Forecast horizon is to be fixed in the beginning based on the requirement. We will first try to predict only 1 step in the future, that is we set horizon = 1. We will set history=20.
Step14: Training Set and Test Set
A portion of the dataset is to be set aside to be used only for testing. Fortunately sklearn provides a train_test_split() module to do that. You can specify a ratio for test_size parameter.
In this exercise, we will retain $20\%$ data for testing.
Step15: LinearRegression
Train a LinearRegression model.
1. Create an instance of LinearRegression of sklearn.
1. Use training set X_train and y_train to fit.
2. Use test set X_test and y_test to measure $R^2$ using score method.
Step16: Visualizing the forecasted values
We will use native matplotlib.pyplot.plot() method to plot original y_test against y_predicted.
Note
Step17: Multistepahead Forecasting
Step18: Multi target regression using LinearRegression
Step19: Multi target regression using RandomForestRegressor | Python Code:
# Write code to import required libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# For visualizing plots in this notebook
%matplotlib inline
Explanation: Time Series Forecasting: Application of Regression
A time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data.
In this course you have seen various kinds of data so far. A time series dataset is different. Time series adds an explicit order dependence between observations: a time dimension. This additional dimension is both a constraint and a structure that provides a source of additional information.
Time series forecasting is the use of a model to predict future values of a time series based on previously observed values. In order to do that, first one needs to understand or model the stochastic mechanisms that gives rise to an observed series. Then this model can be used to predict or forecast the future values of a series based on the history of that series.
Modelling timeseries forecasting as a machine learning regression task
Assume we have a timeseries: $x_1, x_2, x_3, \ldots, x_N$
We have observed $T$ values, and wish to predict future $T'$ values. We want to model the relation:
$$ \underbrace{(x_{T+1}, x_{T+2}, \ldots, x_{T+T'})}_{T'\text{ is forecast horizon}} = r(\underbrace{x_1, x_2, \ldots, x_T}_{T\text{ is history}}) $$
$T$ is called history and $T'$ is called forecast horizon.
Constructing a regression matrix
Standard approach is to consider a moving window over the timeseries to construct following matrix:
$$ X = \overbrace{\begin{bmatrix}
x_1 & x_2 & \dots & x_T \\
x_2 & x_3 & \dots & x_{T+1} \\
\vdots & \vdots & \ddots & \vdots \\
x_{N-T-T'+1} & x_{N-T-T'+2} & \dots & x_{N-T'}
\end{bmatrix}}^{input}
\quad
Y = \overbrace{\begin{bmatrix}
x_{T+1} & x_{T+2} & \dots & x_{T+T'} \\
x_{T+2} & x_{T+3} & \dots & x_{T+T'+1} \\
\vdots & \vdots & \ddots & \vdots \\
x_{N-T'+1} & x_{N-T'+2} & \dots & x_{N}
\end{bmatrix}}^{output}
$$
We will learn the multi-input multi-output (MIMO) regression relation: $Y = r(X)$
Objective
Objectives of this notebook are:
1. Converting a time series into a regression matrix using Python Pandas library.
2. Visualization of time series data.
3. Forecasting time series data by using Python Scikit Learn Regression modules.
For questions, comments and suggestions, please contact parantapa[dot]goswami[at]viseo[dot]com
Import basic libraries
Initially we require:
1. pandas: to store data efficiently
2. numpy: to matrix operations
3. matplotlib.pyplot: for data visualization
End of explanation
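As a toy illustration of the windowing described above (independent of the weather data), with history T = 3 and horizon T' = 1 the series 1..6 yields:
series = [1, 2, 3, 4, 5, 6]
T, Tp = 3, 1
X_toy = [series[i:i+T] for i in range(len(series) - T - Tp + 1)]
y_toy = [series[i+T:i+T+Tp] for i in range(len(series) - T - Tp + 1)]
print(X_toy)  # [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
print(y_toy)  # [[4], [5], [6]]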
# We start by importing the data using pandas
# Hint 1: use "read_csv" method, Note that comma (",") is the field separator
# Hint 2: this data file already includes a header
weather = pd.read_csv("weatherdata.csv", sep=",")
# We sneak peek into the data
# Hint: use dataframe "head" method with "n" parameter
weather.head(n=5)
Explanation: Polish Weather Data
Polish Weather dataset contains 7 weather-related measurements in the Warsaw area, taken between 1999 and 2004. The readings are taken daily.
Importing the data
The data is provided in the file weatherdata.csv. Use read_csv() module from pandas to import the data.
End of explanation
# Write code to create a new dataframe from weather
# Hint: call "pandas.DataFrame" and pass the desired column of weather
temperature = pd.DataFrame(weather["MaxTemp"])
# To check if all is good
temperature.head()
Explanation: Univariate time series forecasting
For the sake of simplicity, we will use only MaxTemp variable to perform time series forecasting.
Also, we are not interested in the exact time here, but the sequence itself. We ignore the "Date", still the indices of the dataframe preserves the order. So, the index values will be considered as time stamps. This will satisfy our requirement and simplify the problem.
For this exercise create a new dataframe from the "MaxTemp" column.
End of explanation
# Write code to generate a line plot of temperature
# Hint: use "DataFrame.plot.line()" on our dataframe temperature
temperature.plot.line()
Explanation: Visualization
A time series is always dependent on time. So, implicitly here we have two variables, time and the "MaxTemp". You can use DataFrame.plot.line() to create a line plot of the time series. Indices will serve as time stamps.
End of explanation
# Write code to try "shift()" method to push forward the data 1 step
# Hint: use head() at the end of youe code to see first lines
temperature.shift(1).head()
# Write code to try "shift()" method to pull back the data 1 step
# Hint: use head() at the end of youe code to see first lines
temperature.shift(-1).head()
Explanation: Handling time series with Python Pandas
Python Pandas library is an efficient tool to work with time series data. Please see this link for all the detailed functionalities.
Pandas shift() method
A key method to help transform time series data into a supervised learning problem is the Pandas DataFrame.shift() method. Given a DataFrame, the shift() method can be used to create copies of columns that are pushed forward or pulled back.
This is the behavior required to create columns of lag observations as well as columns of forecast observations for a time series dataset in a supervised learning format.
Notes:
1. shift() takes a positive integer $k$ to push forward the data $k$ steps (rows of NaN values added to the front)
2. shift() takes a positive integer $-k$ to pull back the data $k$ steps (rows of NaN values added to the end)
End of explanation
# Write code to create an empty DataFrame
# Hint: "pandas.DataFrame()" without any arguments creates an empty DataFrame
reg_mat = pd.DataFrame()
Explanation: Checkpoint: Make sure you understand how shift() works.
Creating the Regression Matrix
Now you will use shift() multiple times to create a regression matrix from the temperature dataframe.
Step 1: Create a new empty DataFrame to store the regression matrix.
End of explanation
# Write code to generate columns t-4, t-3, t-2, t-1, t IN THIS ORDER for reg_mat
# Hint: you do not need any shift to store the column "t"
reg_mat["t-4"] = temperature.shift(4)
reg_mat["t-3"] = temperature.shift(3)
reg_mat["t-2"] = temperature.shift(2)
reg_mat["t-1"] = temperature.shift(1)
reg_mat["t"] = temperature
# To check if all is good
reg_mat.head(10)
Explanation: Step 2: Assume the history $T = 5$. So, you have to use shift() five times. Each such shift will generate a new column of the regression matrix.
Note that you have to maintain the order as shown in the above equations. If we assume that "t" is the current time, the history columns of the regression matrix should be in the following order:
$$t-4, t-3, t-2, t-1, t$$
To get the "t-i" column, the time series needs to get pushed forward i steps. For each shift, you should store the newly generated column in the reg_mat dataframe with column name "t-i"
End of explanation
# Write code to generate columns t+1, t+2 IN THIS ORDER for reg_mat
reg_mat["t+1"] = temperature.shift(-1)
reg_mat["t+2"] = temperature.shift(-2)
# To check if all is good
reg_mat.head(10)
Explanation: Observe how the NaN values are added.
Step 3: Assume horizon $T' = 2$. This time you have to use shift() 2 times in the other direction. Again you need to maintain the order. This time the generated columns will be "t+1" and "t+2". The final ordering of columns of the entire regression matrix should be:
$$t-4, t-3, t-2, t-1, t, t+1, t+2$$
To get the "t+i" column, the time series needs to get pulled back i steps. For each shift, you should store the newly generated column in the reg_mat dataframe with column name "t+i"
End of explanation
history = 5
horizon = 2
# STEP 1: Create an empty DataFrame
reg_mat = pd.DataFrame()
# STEP 2: For loop to generate the history columns
# Hint 1: use "reversed()" method to reverse the output of "range()" method
# Hint 2: generate column names by adding loop variable i with string "t-"
for i in reversed(range(history)):
column_name = "t-" + str(i)
reg_mat[column_name] = temperature.shift(i)
# Generating the column "t"
reg_mat["t"] = temperature
# STEP 3: For loop to generate the forecast/future columns
# Hint: generate column names by adding loop variable i with string "t+"
for i in range(1, horizon+1):
column_name = "t+" + str(i)
reg_mat[column_name] = temperature.shift(-i)
# To check if all is good
reg_mat.head(10)
Explanation: Using a loop
Above approach works for a small and known history and horizon. To make it automatic, use a for loop.
Notes:
1. While generating the history columns, you have to run the loop backward. As in our above example of history $T = 5$, the loop should run for i = 4 down to i = 1.
2. The column "t" is the original time series itself.
3. While generating the forecast/future columns, the loop should run as usual. As in our exmple of horizon $T' = 2$, the loop should run for i = 1 to i = 2.
Use python range() method wisely to control the loops. Also you have to generate the column names dynamically. Carefully choose the positive or negative values for shift() methods.*
End of explanation
# Write code to drop rows with NaN values from reg_mat inplace
reg_mat.dropna(inplace=True)
# Write code to reset index of reg_mat inplace, with dropping old indices
reg_mat.reset_index(drop=True, inplace=True)
# To check if all is good
reg_mat.head(10)
Explanation: Handling NaN values
We will ignore the rows containing NaN values. Use DataFrame.dropna() method to do this.
You have to reset indices using DataFrame.reset_index() method. By setting the attribute drop True, it overwrites the old indices.
In both the above methods, by setting inplace to True, the index reset is done on the dataframe itself, without returning anything.
End of explanation
# Write code to convert entire reg_mat into a numpy matrix
reg_mat_numpy = reg_mat.as_matrix()
Explanation: To numpy matrices
All machine learning algorithm implementations work efficiently with numpy matrices and arrays. The current format of our data is panda dataframes. Fortunately, pandas provides DataFrame.as_matrix() method to convert dataframes to numpy arrays.
End of explanation
# Write code to cerate input matrix X by selecting correct columns of reg_mat_numpy
# Hint: first k columns of a numpy matrix M can be selected by M[:,:k]
X = reg_mat_numpy[:, :history]
# Write code to cerate output matrix y by selecting correct columns of reg_mat_numpy
# Hint: last k columns of a numpy matrix M can be selected by M[:,-k:]
y = reg_mat_numpy[:, -horizon:]
Explanation: Our goal is to model time series forecasting as a machine learning regression task. It is time to identify the inputs and outputs
Question: What is your input X here? What is your output y here?
End of explanation
# Write a function to put everything together
# TO DELETE
def regression_matrix(df, history, horizon):
reg_mat = pd.DataFrame()
    for i in reversed(range(history)):
        column_name = "t-" + str(i)
        reg_mat[column_name] = df.shift(i)   # use the dataframe passed in, not the global
    reg_mat["t"] = df
    for i in range(1, horizon+1):
        column_name = "t+" + str(i)
        reg_mat[column_name] = df.shift(-i)
reg_mat.dropna(inplace=True)
reg_mat.reset_index(drop=True, inplace=True)
reg_mat_numpy = reg_mat.as_matrix()
X = reg_mat_numpy[:, :history]
y = reg_mat_numpy[:, -horizon:]
return X, y
Explanation: Putting all together
To organize everything together and to reuse later, define a method regression_matrix. This method should accept a dataframe having only one feature, history and horizon. It should then convert the dataframe into two regression matrices, input X and output y.
End of explanation
# Write code to generate X and y matrices from dataframe temperature
# Set history to 20 and horizon to 1
# Hint: use the method "regression_matrix" you just wrote.
history, horizon = 20, 1
X, y = regression_matrix(temperature, history, horizon)
Explanation: Univariate Time Series Forecasting with Linear Regression
Finally, we have all the modules necessary to train a regression model.
Forecast horizon is to be fixed in the beginning based on the requirement. We will first try to predict only 1 step in the future, that is we set horizon = 1. We will set history=20.
End of explanation
# Importing the module
from sklearn.model_selection import train_test_split
# Write code for splitting the data into train and test sets.
# Hint: use "train_test_split" on X and y, and test size should be 0.2 (20%)
X_train,X_test,y_train,y_test = train_test_split(X, y, test_size=0.2)
Explanation: Training Set and Test Set
A portion of the dataset is to be set aside to be used only for testing. Fortunately sklearn provides a train_test_split() module to do that. You can specify a ratio for test_size parameter.
In this exercise, we will retain $20\%$ data for testing.
End of explanation
# Write code to import LinearRegression from sklearn
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X_train, y_train)
R2 = lin_reg.score(X_test, y_test)
print("Linear Regression R2 = ", R2)
Explanation: LinearRegression
Train a LinearRegression model.
1. Create an instance of LinearRegression of sklearn.
1. Use training set X_train and y_train to fit.
2. Use test set X_test and y_test to measure $R^2$ using score method.
End of explanation
# Write code to predict values for test set using LinearREgression.predict() method.
y_predicted = lin_reg.predict(X_test)
# Generating x-axis points
x_points = np.arange(y_test.shape[0])
# Write code to visualize y_test and y_predicted in a single plot
# Hint 1: use matplotlib.pyplot.plot() method
# Hint 2: choose different colors for 2 curves
plt.plot(x_points, y_test[:,0], "b--")
plt.plot(x_points, y_predicted[:,0], "r--")
Explanation: Visualizing the forecasted values
We will use native matplotlib.pyplot.plot() method to plot original y_test against y_predicted.
Note: X axis points are to be generated for plotting. It is simply range of integers 1...len(y_test).
End of explanation
# Write code to generate X and y matrices.
# This time set history to 20 and horizon to 4
history, horizon = 20, 4
X, y = regression_matrix(temperature, history, horizon)
# Write code for splitting the data into train and test sets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
Explanation: Multistepahead Forecasting: $T' > 1$
Let's repeat the above procedure for forecast horizon $4$, i.e. to predict values till next $4$ time stamps.
Python Scikit-Learn provides a wrapper module called MultiOutputRegressor for multi target regression. You can pass it a standard sklearn regressor instance directly, which will be used as a base.
We will use it with LinearRegression and RandomForestRegressor
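Conceptually, the wrapper just fits one clone of the base regressor per output column; a rough, illustrative sketch of that idea (not sklearn's actual implementation):
from sklearn.base import clone

def fit_one_model_per_target(base_estimator, X, y):
    # y has shape (n_samples, horizon); each column gets its own fitted clone
    return [clone(base_estimator).fit(X, y[:, j]) for j in range(y.shape[1])]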
End of explanation
# Write code to import MultiOutputRegressor module
from sklearn.multioutput import MultiOutputRegressor
# Write code to train a MultiOutputRegressor model using LinearRegression.
# Test its performance on the test set
lin_reg = MultiOutputRegressor(LinearRegression())
lin_reg.fit(X_train, y_train)
lin_reg_R2 = lin_reg.score(X_test, y_test)
print("Linear Regression R2 = ", lin_reg_R2)
Explanation: Multi target regression using LinearRegression
End of explanation
# Write code to import necessary module for RandomForestRegressor
from sklearn.ensemble import RandomForestRegressor
# Write code to train a MultiOutputRegressor model using RandomForestRegressor.
# Test its performance on the test set
rfr = MultiOutputRegressor(RandomForestRegressor())
rfr.fit(X_train, y_train)
rfr_R2 = rfr.score(X_test, y_test)
print("Random Forest Regressor R2 = ", rfr_R2)
Explanation: Multi target regression using RandomForestRegressor
End of explanation |
15,596 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculate generation capacity by month
This notebook uses the December 2017 EIA-860m file to determine operable generating capacity by fuel category in every month from 2001-2017.
Because this method uses EIA-860 data on individual plants and generators it does not include the capacity of small scale or distributed solar. EIA does provide an estimate of small scale solar as part of the Electric Power Monthly (Table 6.1.A), although I am not sure if it is supposed to represent all installed non-utility solar in the US.
Instructions
The most recent EIA-860m file should be downloaded to the EIA downloads folder, and the correct file name should be used for loading data. Otherwise the code below can be run straight through as-is.
Step1: Type of capacity
The default is net summer capacity, which is what EIA uses for their capacity factor calculations. eGRID uses nameplate capacity. Valid parameter values are
Step2: Load monthly EIA-860 data
Step3: Clean up column names and only keep desired columns
Step4: Read fuel category definitions and apply to the generators
Step5: Load the NERC region each power plant is in and add to dataframes
Step6: Need to make this into a dictionary of dictionaries of lists (year, nerc, plant ids)
Step7: Determine operable capacity in every month
Step8: Determine natural gas capacity by prime mover type in each month | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import os
import pathlib
from pathlib import Path
import sys
from os.path import join
import json
import calendar
sns.set(style='white')
idx = pd.IndexSlice
Explanation: Calculate generation capacity by month
This notebook uses the December 2017 EIA-860m file to determine operable generating capacity by fuel category in every month from 2001-2017.
Because this method uses EIA-860 data on individual plants and generators it does not include the capacity of small scale or distributed solar. EIA does provide an estimate of small scale solar as part of the Electric Power Monthly (Table 6.1.A), although I am not sure if it is supposed to represent all installed non-utility solar in the US.
Instructions
The most recent EIA-860m file should be downloaded to the EIA downloads folder, and the correct file name should be used for loading data. Otherwise the code below can be run straight through as-is.
End of explanation
capacity_type = 'net summer capacity (mw)'
# capacity_type = 'nameplate capacity (mw)'
%load_ext watermark
%watermark -iv -v
# Load the "autoreload" extension
%load_ext autoreload
# always reload modules marked with "%aimport"
%autoreload 1
# add the 'src' directory as one where we can import modules
cwd = os.getcwd()
src_dir = join(cwd, os.pardir, 'src')
sys.path.append(src_dir)
data_path = Path(cwd, '..', 'Data storage')
%aimport Analysis.index
from Analysis.index import group_fuel_cats
%aimport Analysis.capacity
from Analysis.capacity import monthly_capacity_all, monthly_capacity_year
from Analysis.capacity import monthly_ng_type_all, monthly_ng_type_year
Explanation: Type of capacity
The default is net summer capacity, which is what EIA uses for their capacity factor calculations. eGRID uses nameplate capacity. Valid parameter values are:
- nameplate capacity (mw)
- net summer capacity (mw)
- net winter capacity (mw)
End of explanation
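A small guard like the following (not part of the original notebook) can catch a typo in capacity_type early:
valid_capacity_types = {
    'nameplate capacity (mw)',
    'net summer capacity (mw)',
    'net winter capacity (mw)',
}
assert capacity_type in valid_capacity_types, f'unknown capacity type: {capacity_type}'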
file_path = data_path / 'EIA downloads' / 'december_generator2017.xlsx'
op = pd.read_excel(file_path, sheet_name='Operating', skiprows=1, skipfooter=1,
parse_dates={'op datetime': [14, 15]},
na_values=' ')
# need to make some helper functions for the retired generators sheet
def bad_month_values(month):
'Change value to 1 if outside 1-12'
if month > 12 or month < 1:
new_month = 1
else:
new_month = month
return new_month
def make_dt_col(df, month_col, year_col):
months = df[month_col].astype(str)
years = df[year_col].astype(str)
dt_string = years + '-' + months + '-' + '01'
dt = pd.to_datetime(dt_string)
return dt
ret = pd.read_excel(file_path, sheet_name='Retired', skiprows=1, skipfooter=1,
converters={'Operating Month': bad_month_values},
# parse_dates={'op datetime': [16, 17],
# 'ret datetime': [14, 15]},
na_values=' ')
ret['op datetime'] = make_dt_col(ret, 'Operating Month', 'Operating Year')
ret['ret datetime'] = make_dt_col(ret, 'Retirement Month', 'Retirement Year')
Explanation: Load monthly EIA-860 data
End of explanation
op.columns = op.columns.str.strip()
ret.columns = ret.columns.str.strip()
op_cols = [
'Plant ID', 'Nameplate Capacity (MW)', 'Net Summer Capacity (MW)',
'Energy Source Code', 'Prime Mover Code', 'op datetime'
]
ret_cols = [
'Plant ID', 'Nameplate Capacity (MW)', 'Net Summer Capacity (MW)',
'Energy Source Code', 'Prime Mover Code', 'Retirement Month',
'Retirement Year', 'Operating Month', 'Operating Year',
'op datetime', 'ret datetime'
]
op = op.loc[:, op_cols]
ret = ret.loc[:, ret_cols]
op.columns = op.columns.str.lower()
ret.columns = ret.columns.str.lower()
op.head()
Explanation: Clean up column names and only keep desired columns
End of explanation
state_cat_path = data_path / 'Fuel categories' / 'State_facility.json'
custom_cat_path = data_path / 'Fuel categories' / 'Custom_results.json'
with open(state_cat_path) as json_data:
state_cats = json.load(json_data)
with open(custom_cat_path) as json_data:
custom_cats = json.load(json_data)
def reverse_cats(cat_file):
'Reverse a dict of lists so each item in the list is a key'
cat_map = {}
for key, vals in cat_file.items():
for val in vals:
cat_map[val] = key
return cat_map
# Aggregate EIA fuel codes to my final definitions
op['fuel'] = op.loc[:, 'energy source code'].map(reverse_cats(state_cats))
op['fuel category'] = op.loc[:, 'fuel'].map(reverse_cats(custom_cats))
ret['fuel'] = ret.loc[:, 'energy source code'].map(reverse_cats(state_cats))
ret['fuel category'] = ret.loc[:, 'fuel'].map(reverse_cats(custom_cats))
op.head()
Explanation: Read fuel category definitions and apply to the generators
End of explanation
nercs_path = data_path / 'Facility labels' / 'Facility locations_RF.csv'
facility_nerc = pd.read_csv(nercs_path, index_col=['nerc', 'year'])
facility_nerc.sort_index(inplace=True)
Explanation: Load the NERC region each power plant is in and add to dataframes
End of explanation
nerc_dict = {}
for year in facility_nerc.index.get_level_values('year').unique():
nerc_dict[year] = {}
for nerc in facility_nerc.index.get_level_values('nerc').unique():
nerc_dict[year][nerc] = facility_nerc.loc[idx[nerc, year], 'plant id'].tolist()
# Make sure there aren't lots of plants in 2016 that disappear in 2017
set(nerc_dict[2016]['MRO']) - set(nerc_dict[2017]['MRO'])
Explanation: Need to make this into a dictionary of dictionaries of lists (year, nerc, plant ids)
End of explanation
# Define iterables to loop over
years = range(2001,2018)
months = range(1,13)
fuels = list(custom_cats.keys())
# capacity_type is defined at the top of this notebook
op_df_capacity = monthly_capacity_all(op=op, ret=ret, years=years,
nerc_plant_list=nerc_dict, fuels=fuels,
cap_type=capacity_type, n_jobs=-1,
print_year=False)
op_df_capacity.tail()
# Write data to file
out_path = data_path / 'Derived data' / 'Plant capacity' / 'monthly capacity by fuel.csv'
op_df_capacity.to_csv(out_path)
Explanation: Determine operable capacity in every month
End of explanation
# Define iterables to loop over
years = range(2001,2018)
months = range(1,13)
op_ng_type = monthly_ng_type_all(op=op, ret=ret, years=years,
nerc_plant_list=nerc_dict, fuels=fuels,
cap_type=capacity_type, n_jobs=-1,
print_year=False)
op_ng_type.head()
out_path = data_path / 'Derived data' / 'Plant capacity' / 'monthly natural gas split.csv'
op_ng_type.to_csv(out_path)
Explanation: Determine natural gas capacity by prime mover type in each month
End of explanation |
15,597 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Installation
Step2: Import
Step3: Run | Python Code:
#@title Default title text
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: <a href="https://colab.research.google.com/github/google-research/google-research/blob/master/jax_dft/examples/solve_non_interacting_system.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
# For GPU runtime
!pip install --upgrade jax jaxlib==0.1.62+cuda110 -f https://storage.googleapis.com/jax-releases/jax_releases.html
# Install jax-dft
!git clone https://github.com/google-research/google-research.git
!pip install google-research/jax_dft
Explanation: Installation
End of explanation
import jax
from jax.config import config
from jax_dft import scf
from jax_dft import utils
import matplotlib.pyplot as plt
import numpy as np
# Set the default dtype as float64
config.update('jax_enable_x64', True)
print(f'JAX devices: {jax.devices()}')
Explanation: Import
End of explanation
num_electrons = 2 # @param{'type': 'integer'}
grids = np.arange(-256, 257) * 0.08
external_potential = utils.get_atomic_chain_potential(
grids=grids,
locations=np.array([-0.8, 0.8]),
nuclear_charges=np.array([1., 1.]),
interaction_fn=utils.exponential_coulomb)
density, total_eigen_energies, _ = scf.solve_noninteracting_system(
external_potential, num_electrons=num_electrons, grids=grids)
print(f'total energy: {total_eigen_energies}')
plt.plot(grids, density, label='density')
plt.plot(grids, external_potential, label='potential')
plt.legend(loc=0)
plt.show()
Explanation: Run
End of explanation |
15,598 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 2
sample_id = 7
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
    return x / 255.0
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
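Since CIFAR-10 pixel values are known to lie in 0-255, dividing by 255 is enough; a more general min-max variant (illustrative, not required here) would be:
import numpy as np

def normalize_minmax(x):
    x = np.asarray(x, dtype=np.float32)
    return (x - x.min()) / (x.max() - x.min())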
End of explanation
one_hot_map = np.eye(10)
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
return one_hot_map[x]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
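One "wheel" you could reuse here, as an alternative to the np.eye lookup shown above, is sklearn's LabelBinarizer (sketch):
from sklearn.preprocessing import LabelBinarizer

lb = LabelBinarizer()
lb.fit(list(range(10)))

def one_hot_encode_lb(x):
    return lb.transform(x)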
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
    x = tf.placeholder(tf.float32, (None,) + image_shape, name="x")
return x
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
y = tf.placeholder(tf.float32,[None,n_classes], name="y")
return y
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
return keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# find # of input channels and create weight tensor
channels = x_tensor.get_shape().as_list()[3]
weight_dimension = conv_ksize + (channels,) + (conv_num_outputs,)
weight = tf.Variable( tf.truncated_normal( weight_dimension, mean=0.0, stddev=0.1 ) )
# conv layer
bias = tf.Variable(tf.zeros(conv_num_outputs))
conv_layer = tf.nn.conv2d(x_tensor, weight, (1,) + conv_strides + (1,), padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
# max pooling
conv_layer = tf.nn.max_pool( conv_layer, (1,) + pool_ksize + (1,), (1,) + pool_strides + (1,), padding='SAME')
return conv_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
return tf.contrib.layers.flatten(x_tensor)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
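For the "more of a challenge" route mentioned above, the same flatten can be written with plain tf.reshape (illustrative sketch for the TF1-style graph used here):
def flatten_manual(x_tensor):
    shape = x_tensor.get_shape().as_list()       # [batch, height, width, channels]
    flat_size = shape[1] * shape[2] * shape[3]
    return tf.reshape(x_tensor, [-1, flat_size])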
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
return tf.contrib.layers.fully_connected(
inputs=x_tensor,
num_outputs=num_outputs,
activation_fn=tf.nn.relu,
biases_initializer=tf.zeros_initializer,
weights_initializer=lambda size, dtype, partition_info: tf.truncated_normal(shape=size,dtype=dtype,mean=0.0,stddev=0.1)
)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
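Likewise, a no-shortcut fully connected layer (illustrative TF1 sketch) could look like:
def fully_conn_manual(x_tensor, num_outputs):
    n_inputs = x_tensor.get_shape().as_list()[1]
    weights = tf.Variable(tf.truncated_normal([n_inputs, num_outputs], stddev=0.1))
    bias = tf.Variable(tf.zeros(num_outputs))
    return tf.nn.relu(tf.add(tf.matmul(x_tensor, weights), bias))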
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
return tf.contrib.layers.fully_connected(
inputs=x_tensor,
num_outputs=num_outputs,
weights_initializer=lambda size, dtype, partition_info: tf.truncated_normal(shape=size,dtype=dtype,mean=0.0,stddev=0.1)
)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
x = conv2d_maxpool(x, 16, (4,4), (1,1), (2,2), (1,1))
x = conv2d_maxpool(x, 32, (4,4), (1,1), (2,2), (1,1))
x = conv2d_maxpool(x, 64, (4,4), (1,1), (2,2), (1,1))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x = flatten(x)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
x = fully_conn(x, 512)
x = tf.nn.dropout(x, keep_prob)
x = fully_conn(x, 256)
x = tf.nn.dropout(x, keep_prob)
x = fully_conn(x, 64)
x = tf.nn.dropout(x, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
logits = output(x,10)
# TODO: return output
return logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run( optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability
})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
cost = session.run( cost, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: 1.0
})
validation = session.run( accuracy, feed_dict={
x: valid_features,
y: valid_labels,
keep_prob: 1.0
})
print( "cost: {}, accuracy: {}".format(cost, validation))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 20
batch_size = 128
keep_probability = 0.5
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
15,599 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
So my the code for my solution can be found in
Step1: The above bit of boiler-plate code is useful in a number of situations. Indeed, this is a pattern I regularly find myself using when writing scripts.
The if-statement asks if the file has been opened 'directly'. If it has, we call main. This means that if I call the above bit of code from the command line like so
Step2: As for the main function itself, this ought to be at least somewhat easy to follow. We have a debug switch that allows us to determine where bombs are placed. We use this for testing.
The only other new things is that we have a class object called PlayGame. In this guide I've only briefly touched on classes, and so I shall not go into super detail about how classes work. But the short version is that the game object stores information about the game in progress, and the game continues all the while game.is_playing is set to True.
The input function waits for the user to give us information. This is how we can make a move.
Step3: Most of the code in the PlayGame function is concerned with parsing information from the user. When writing code, sometimes you have to make trade-offs, you can make things faster as the cost of memory, for example.
In this particular case I've made the code quite flexible and concise, at the cost of complexity.
Step4: So the above code snippet is a smaller, simpler version of the code you will find in my mindsweeper implementation. Here's the problem
Step5: So this code would more or less do the same job. And although its easier to understand it does have the drawback that for every additional command we add we need to add several lines of code. Meanwhile, the solution I went for does away with all those nested if statements. In fact, adding an extra command requires just a single line of code (which we add to the COMMANDS dictionary).
The 'cleverness' of my implementation can be seen in these two lines
Step6: By the way, in python it is possible to save a function as a variable (with the idea of calling it later), which the code below below hopefully illustrates
Step8: Combining all of these things means it is possible to write an argument parser in just a few lines of code. Moreover, adding new commands requires very little effort.
another reason why my implementation is more powerful than if statements is that I can, with minor modifications use it in other projects. For example | Python Code:
## Assume that this code exists in a file named example.py
def main():
print(1 + 1)
if __name__ == "__main__":
main()
Explanation: So the code for my solution can be found in:
../misc/minesweeper.py
In this lecture I shall be going through some bits of code and explaining parts of it. I encourage you to read it before going any further with this lecture.
The first thing you may notice is that this file isn't actually a Jupyter notebook. There are a few reasons why I ended up making this a normal .py file, but the main one is that as projects get bigger it becomes harder to work in Jupyter notebooks. Moreover, notebooks have limited debugging capabilities, so having a file that I could open with a code editor allowed me to spot my mistakes a bit faster.
Anyway, the code is more or less split up into three main parts:
The game logic (i.e. code for creating boards, revealing squares, etc)
Code to understand user input (that is, the code that can map an instruction such as 'flag 0 0' to a function call).
Code that starts the game and runs the game-loop.
Part 1 should be straightforward; it's just the combination of all the mini-projects we have done, with a few tweaks and modifications here and there to fix bugs, plus general improvements to code quality (such as improved variable naming).
Part 2 is a bit of a beast, so I'll leave that till last.
This leaves Part 3 to talk about.
if __name__ == "__main__"
End of explanation
def main():
DEBUG = False #True
if DEBUG:
random.seed(243)
print("+--------------------------------+")
print("| WELCOME TO MINSWEEPER 1.0! |")
print("+--------------------------------+")
print("How to play: type 'commands' for a list of valid inputs. Then type 'help x' for information about how to use command 'x'")
print("")
game = PlayGame()
while game.is_playing:
s = input("Command: ")
game.parse_command(s)
display_board(game.player_board)
print("\n")
Explanation: The above bit of boiler-plate code is useful in a number of situations. Indeed, this is a pattern I regularly find myself using when writing scripts.
The if-statement asks if the file has been opened 'directly'. If it has, we call main. This means that if I call the above bit of code from the command line like so:
> python example.py
then the code works and it prints 2 to the console.
However if I create a new python file and try to import the module like so:
import example
Nothing happens. This is because when you import a file, its __name__ is not equal to __main__.
We can however have our cake and eat it too; we can make the code work in both cases by calling main when importing the file. For example:
import example
example.main()
So by using this pattern I can make it possible to import minesweeper.py into a jupyter notebook and it also works when you call minesweeper.py directly from the command line.
Main Function
End of explanation
number = input("give me a number: ")
print("your selected number is: ", number)
Explanation: As for the main function itself, this ought to be at least somewhat easy to follow. We have a debug switch that allows us to determine where bombs are placed. We use this for testing.
The only other new thing is that we have a class object called PlayGame. In this guide I've only briefly touched on classes, and so I shall not go into super detail about how classes work. But the short version is that the game object stores information about the game in progress, and the game continues all the while game.is_playing is set to True.
The input function waits for the user to give us information. This is how we can make a move.
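For readers who haven't met classes much, here is a stripped-down skeleton of what a class like PlayGame could look like (purely illustrative; the real one in minesweeper.py holds much more state):
class PlayGame:
    def __init__(self):
        self.is_playing = True
        self.player_board = None   # boards, bomb positions, etc. get set up here

    def parse_command(self, text):
        # the real version dispatches to flag/pick/help...; here we only quit
        if text.strip() == "quit":
            self.is_playing = False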
End of explanation
def flag(x, y):
print(f"flag function was called. x = {x}, y = {y}")
def _help(topic=None):
if topic:
print(COMMANDS[topic][1])
def cheat():
print("cheating!")
## Command -> (function, help text)
COMMANDS = {
"flag": (flag, "Flags/deflags square(x,y). Example useage: flag x y"),
"help": (_help, "Selects square(x, y) to reveal, its game over if you reveal a bomb. Example useage: pick x y"),
"cheat": (cheat, "Shows the location of all bombs. Example useage: cheat") }
def parse_command(command):
instruction, *arguments = command.split(" ")
if instruction in COMMANDS:
return COMMANDS[instruction][0](*arguments)
else:
print("Parsing instruction failed")
# Example Calls:
command = "help cheat"
parse_command(command)
command2 = "flag 0 7"
parse_command(command2)
command3 = "cheat"
parse_command(command3)
Explanation: Most of the code in the PlayGame function is concerned with parsing information from the user. When writing code, sometimes you have to make trade-offs, you can make things faster as the cost of memory, for example.
In this particular case I've made the code quite flexible and concise, at the cost of complexity.
End of explanation
def parse_command_if_version(command):
c = command.split(" ")
instruction = c[0]
args = c[1:]
if instruction == "help":
if len(args) == 0:
return _help()
if len(args) == 1:
topic = args[0]
return _help(topic)
if instruction == "cheat":
return cheat()
if instruction == "flag":
x = args[0]
y = args[1]
return flag(x, y)
# Example Calls:
command = "help cheat"
parse_command_if_version(command)
command2 = "flag 0 7"
parse_command_if_version(command2)
command3 = "cheat"
parse_command_if_version(command3)
Explanation: So the above code snippet is a smaller, simpler version of the code you will find in my minesweeper implementation. Here's the problem: we want to support multiple commands, each of which has its own arguments. Some need several arguments from the user to work, others need zero arguments from the user. How can we handle multiple cases?
Well, one possible way to do it would be to use multiple if statements like this:
End of explanation
def add(a, b):
return a + b
nums = [1, 2]
# add(nums) # this would fail
print(add(nums[0], nums[1]))
print(add(*nums))
Explanation: So this code would more or less do the same job. And although it's easier to understand, it does have the drawback that for every additional command we add, we need to add several lines of code. Meanwhile, the solution I went for does away with all those nested if statements. In fact, adding an extra command requires just a single line of code (which we add to the COMMANDS dictionary).
The 'cleverness' of my implementation can be seen in these two lines:
instruction, *arguments = command.split(" ")
COMMANDS[instruction][0](*arguments)
The first line is more or less equivalent to the following (after splitting command into a list):
instruction = command[0]
arguments = command[1:]
The second line can only be understood with reference to the COMMANDS dictionary. Basically every command has a tuple with two elements. The first element (index 0) is the function we want to call (index 1 is the 'help text'). Meanwhile *arguments will take a list of items and pass them to the function individually.
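A quick standalone illustration of that starred assignment and argument unpacking (flag is the demo function defined earlier):
first, *rest = "flag 3 4".split(" ")
print(first)   # 'flag'
print(rest)    # ['3', '4']
flag(*rest)    # equivalent to flag('3', '4')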
End of explanation
def example(number):
return number
m = example # example is NOT called. m is merely a reference to a function.
n = example(20) # n calls example with the argument 20. The result is a number
print(n)
print(m(20)) # m(20) is the same as example(20)
Explanation: By the way, in Python it is possible to save a function as a variable (with the idea of calling it later), which the code below hopefully illustrates:
End of explanation
def parse_command(command, command_dictionary):
command: str
    command_dictionary: dict where the key is a command and the value is a function reference
instruction, *arguments = command.split(" ")
if instruction in command_dictionary:
return command_dictionary[instruction](*arguments)
else:
return f"ERROR: '{instruction}' is not a valid command"
math_dict = { "sqrt": lambda x: int(x)**0.5,
"round": lambda x, precision: round(float(x), int(precision)),
"neg": lambda x: -float(x) }
string_dict = { "toCaps": str.upper,
"reverse": lambda x: x[::-1],
"join": lambda *x: "".join(list(x))}
print("STRING_DICT EXAMPLES...")
print(parse_command("toCaps hello", string_dict))
print(parse_command("reverse dlrow", string_dict))
print(parse_command("join h e l l o _ w o r l d", string_dict))
print()
print("MATH_DICT EXAMPLES...")
print(parse_command("sqrt 2", math_dict))
print(parse_command("round 10.98 1", math_dict))
print(parse_command("neg -2", math_dict))
print(parse_command("missing a b c", math_dict))
Explanation: Combining all of these things means it is possible to write an argument parser in just a few lines of code. Moreover, adding new commands requires very little effort.
Another reason why my implementation is more powerful than if statements is that I can, with minor modifications, use it in other projects. For example:
End of explanation |