Unnamed: 0 (int64, 0 to 15.9k) | cleaned_code (string, 67 to 124k chars, nullable) | cleaned_prompt (string, 168 to 30.3k chars, nullable)
---|---|---|
14,400 | <ASSISTANT_TASK:>
Python Code:
# %pylab inline was presumably active in the original notebook; it is included here so
# that the bare figure()/gca()/plot() calls and the np name used below resolve.
%pylab inline
import iris
import iris.quickplot as qplt
temperature = iris.load_cube('air.sig995.2012.nc')
qplt.contourf(temperature[0,:,:])
gca().coastlines()
print temperature
zonal_mean = temperature.collapsed('latitude', iris.analysis.MEAN)
qplt.contourf(zonal_mean)
# Code is a bit more complicated in order to fix an issue with date formatting
fig = figure()
qplt.plot(temperature[:,10,10])
fig.autofmt_xdate()
qplt.plot(temperature[0,:,10])
import cartopy.crs as ccrs
ax = plt.axes(projection=ccrs.PlateCarree())
qplt.contourf(temperature[0,:,:])
gca().coastlines()
ax = plt.axes(projection=ccrs.Mollweide())
qplt.contourf(temperature[0,:,:])
gca().coastlines()
ax = plt.axes(projection=ccrs.Robinson())
qplt.contourf(temperature[0,:,:])
gca().coastlines()
fig = figure(figsize=(7,7))
ax = plt.axes(projection=ccrs.NorthPolarStereo())
ax.set_extent([0, 360, 50, 90], crs=ccrs.PlateCarree())
qplt.contourf(temperature[0,:,:])
gca().coastlines()
!wget https://raw.github.com/ocefpaf/ocefpaf.github.io/master/downloads/notebooks/data/challenger_path.csv
kw = dict(color='#FF9900', linestyle='-', linewidth=1.5)
lon, lat = np.loadtxt('./challenger_path.csv', delimiter=',', unpack=True)
from mpl_toolkits.basemap import Basemap
def make_basemap(projection='robin', figsize=(10, 5), resolution='c'):
fig, ax = plt.subplots(figsize=figsize)
m = Basemap(projection=projection, resolution=resolution,
lon_0=0, ax=ax)
m.drawcoastlines()
m.fillcontinents(color='0.85')
parallels = np.arange(-60, 90, 30.)
meridians = np.arange(-360, 360, 60.)
m.drawparallels(parallels, labels=[1, 0, 0, 0])
m.drawmeridians(meridians, labels=[0, 0, 1, 0])
return fig, m
fig, m = make_basemap()
_ = m.plot(*m(lon, lat), **kw)
import cartopy.crs as ccrs
import cartopy.feature as cfeature
def make_cartopy(projection=ccrs.Robinson(), figsize=(10, 5), resolution='110m'):
fig, ax = plt.subplots(figsize=figsize, subplot_kw=dict(projection=projection))
ax.set_global()
ax.coastlines(resolution=resolution, color='k')
gl = ax.gridlines(draw_labels=False) # Only PlateCarree and Mercator plots are currently supported.
ax.add_feature(cfeature.LAND, facecolor='0.75')
return fig, ax
fig, ax = make_cartopy(projection=ccrs.Robinson(), resolution='110m')
_ = ax.plot(lon, lat, transform=ccrs.Geodetic(), **kw)
from pydap.client import open_url
dataset = open_url("http://icdc.zmaw.de/thredds/dodsC/amsre_asi_nh_2011")
print dataset
ice = dataset['icecon']
ice.shape
ice.attributes
imshow(squeeze(ice[0,:,:])*0.10000000149011612)
colorbar()
%%writefile FIB1.F
C FILE: FIB1.F
SUBROUTINE FIB(A,N)
C
C CALCULATE FIRST N FIBONACCI NUMBERS
C
INTEGER N
REAL*8 A(N)
DO I=1,N
IF (I.EQ.1) THEN
A(I) = 0.0D0
ELSEIF (I.EQ.2) THEN
A(I) = 1.0D0
ELSE
A(I) = A(I-1) + A(I-2)
ENDIF
ENDDO
END
C END FILE FIB1.F
!f2py -c -m --fcompiler=gnu95 fib1 FIB1.F
import fib1
print fib1.__doc__
print fib1.fib.__doc__
import numpy as np
a=np.zeros(15,'d')
fib1.fib(a)
print a
!wget ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis/surface/air.sig995.2011.nc
from netCDF4 import MFDataset
f = MFDataset('air.sig995.????.nc')
f.variables
air = f.variables['air']
time = f.variables['time']
lat = f.variables['lat']
air.shape
from netCDF4 import num2date
time.units
time_conv = num2date(time, time.units)
time_conv
fig = figure()
plot(time_conv, air[:,10,10])
fig.autofmt_xdate()
contourf(lat[:],time_conv,air[:,:,10])
colorbar()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is what an iris cube looks like
Step2: We can perform different operations on cubes. For example, create a zonal mean
Step3: Here we plot a time series from one point
Step4: Section along longitude
Step5: Cartopy
Step6: We only change projection
Step7: One of the things it was originally created for is to handle datelines properly. Below is an example from the post by Filipe Fernandes that demonstrates this feature. He shows how to plot the Challenger Expedition track in Basemap and Cartopy.
Step8: Basemap version
Step9: Cartopy version
Step10: Pydap
Step11: We are going to access sea ice data from CliSAP-Integrated Climate Data Center (ICDC)
Step12: F2PY
Step13: Compile it with f2py
Step14: Import resulting fib1.so as python library
Step15: Read some auto generated documentation
Step16: Use fib function
Step17: netCDF4-python
Step18: The air variable now has 2 years of data
Step19: It also has very nice functions for date processing
Step20: We can convert our time values to the datetime format that Python can work with
Step21: Note that we don't have to apply the scale factor and add offset; it's done automatically.
|
14,401 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import io
import os
import re
import shutil
import string
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Embedding, GlobalAveragePooling1D
from tensorflow.keras.layers import TextVectorization
url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
dataset = tf.keras.utils.get_file("aclImdb_v1.tar.gz", url,
untar=True, cache_dir='.',
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
os.listdir(dataset_dir)
train_dir = os.path.join(dataset_dir, 'train')
os.listdir(train_dir)
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
batch_size = 1024
seed = 123
train_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='training', seed=seed)
val_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='validation', seed=seed)
for text_batch, label_batch in train_ds.take(1):
for i in range(5):
print(label_batch[i].numpy(), text_batch.numpy()[i])
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
# Embed a 1,000 word vocabulary into 5 dimensions.
embedding_layer = tf.keras.layers.Embedding(1000, 5)
result = embedding_layer(tf.constant([1, 2, 3]))
result.numpy()
result = embedding_layer(tf.constant([[0, 1, 2], [3, 4, 5]]))
result.shape
# Create a custom standardization function to strip HTML break tags '<br />'.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation), '')
# Vocabulary size and number of words in a sequence.
vocab_size = 10000
sequence_length = 100
# Use the text vectorization layer to normalize, split, and map strings to
# integers. Note that the layer uses the custom standardization defined above.
# Set maximum_sequence length as all samples are not of the same length.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode='int',
output_sequence_length=sequence_length)
# Make a text-only dataset (no labels) and call adapt to build the vocabulary.
text_ds = train_ds.map(lambda x, y: x)
vectorize_layer.adapt(text_ds)
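# Illustrative check (not part of the original tutorial): peek at the most frequent
# tokens in the vocabulary the layer has just learned.
print(vectorize_layer.get_vocabulary()[:10])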
embedding_dim=16
model = Sequential([
vectorize_layer,
Embedding(vocab_size, embedding_dim, name="embedding"),
GlobalAveragePooling1D(),
Dense(16, activation='relu'),
Dense(1)
])
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_ds,
validation_data=val_ds,
epochs=15,
callbacks=[tensorboard_callback])
model.summary()
#docs_infra: no_execute
%load_ext tensorboard
%tensorboard --logdir logs
weights = model.get_layer('embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
if index == 0:
continue # skip 0, it's padding.
vec = weights[index]
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()
try:
from google.colab import files
files.download('vectors.tsv')
files.download('metadata.tsv')
except Exception:
pass
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Word embeddings
Step2: Download the IMDb Dataset
Step3: Take a look at the train/ directory. It has pos and neg folders with movie reviews labelled as positive and negative respectively. You will use reviews from pos and neg folders to train a binary classification model.
Step4: The train directory also has additional folders which should be removed before creating training dataset.
Step5: Next, create a tf.data.Dataset using tf.keras.preprocessing.text_dataset_from_directory. You can read more about using this utility in this text classification tutorial.
Step6: Take a look at a few movie reviews and their labels (1
Step7: Configure the dataset for performance
Step8: Using the Embedding layer
Step9: When you create an Embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings will roughly encode similarities between words (as they were learned for the specific problem your model is trained on).
Step10: For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape (samples, sequence_length), where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed into the embedding layer above batches with shapes (32, 10) (batch of 32 sequences of length 10) or (64, 15) (batch of 64 sequences of length 15).
Step11: When given a batch of sequences as input, an embedding layer returns a 3D floating point tensor, of shape (samples, sequence_length, embedding_dimensionality). To convert from this sequence of variable length to a fixed representation there are a variety of standard approaches. You could use an RNN, Attention, or pooling layer before passing it to a Dense layer. This tutorial uses pooling because it's the simplest. The Text Classification with an RNN tutorial is a good next step.
Step12: Create a classification model
Step13: Compile and train the model
Step14: Compile and train the model using the Adam optimizer and BinaryCrossentropy loss.
Step15: With this approach the model reaches a validation accuracy of around 78% (note that the model is overfitting since training accuracy is higher).
Step16: Visualize the model metrics in TensorBoard.
Step17: Retrieve the trained word embeddings and save them to disk
Step18: Write the weights to disk. To use the Embedding Projector, you will upload two files in tab separated format
Step19: If you are running this tutorial in Colaboratory, you can use the following snippet to download these files to your local machine (or use the file browser, View -> Table of contents -> File browser).
|
14,402 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
from activation_clustering import ac_model, utils
# The same dataset preprocessing as used in the baseline cifar10 model training.
def input_fn(batch_size, ds, label_key='label'):
dataset = ds.batch(batch_size, drop_remainder=True).prefetch(tf.data.experimental.AUTOTUNE)
def interface(batch):
features = tf.cast(batch['image'], tf.float32) / 255
labels = batch[label_key]
return features, labels
return dataset.map(interface)
model = tf.keras.models.load_model('model.h5')
clustering_config = [
('activation', {'n_clusters': 15}),
('activation_18', {'n_clusters': 15}),
('activation_36', {'n_clusters': 15}),
('activation_54', {'n_clusters': 15})
]
# Uncomment this for shorter training time for debugging/test runs.
# clustering_config = [
# ('activation', {'n_clusters': 10}),
# ('activation_54', {'n_clusters': 10, 'filters': [16, 16, 16, 8]})
# ]
work_dir = 'new_work_dir'
new_acm = ac_model.ACModel(model, clustering_config, work_dir=work_dir)
new_acm.build_clustering_models()
new_acm.clustering_models
train_ds = tfds.load(
'cifar10:3.*.*',
shuffle_files=False,
split='train'
)
test_ds = tfds.load(
'cifar10:3.*.*',
shuffle_files=False,
split='test'
)
# # Uncommend this to use just a portion of data in this example for shorter training time.
# train_ds = tfds.load(
# 'cifar10:3.*.*',
# shuffle_files=False,
# split='train[:10%]'
# )
# test_ds = tfds.load(
# 'cifar10:3.*.*',
# shuffle_files=False,
# split='test[:10%]'
# )
# Cache the activations to make it easier to iterate.
batch_size = 500
ds = input_fn(batch_size, train_ds)
new_acm.cache_activations(ds, tag='train')
del ds
ds = input_fn(batch_size, test_ds)
new_acm.cache_activations(ds, tag='test')
del ds
activations_dict = new_acm.load_activations_dict(
activations_filename=work_dir+'/activations/activations_train.npz')
test_activations_dict = new_acm.load_activations_dict(
activations_filename=work_dir+'/activations/activations_test.npz')
for k, v in activations_dict.items():
print(k, v.shape)
# Here we use a small number of epochs/iterations for shorter training time.
# The activation clustering training loop handles model saving in its `work_dir`.
epochs = 15
maxiter = 980
# # Uncomment this for shorter training time
# epochs = 2
# maxiter = 280
new_acm.fit(activations_dict=activations_dict, epochs=epochs, maxiter=maxiter)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Train an activation clustering model from a baseline model
Step2: Activation clustering model's configurations. The first entry in each pair is a layer name of the baseline model, whose output activations will be clustered. The second entry is a dict with key n_clusters specifying the number of clusters.
Step3: Calling build_clustering_models creates clustering models, one for each specified activation.
|
14,403 | <ASSISTANT_TASK:>
Python Code:
# See Anaconda installed packages
!conda list
# List environments
!conda info -e
# Create Python 3 environment
!conda create -n py3k python=3 anaconda
# Activate Python 3 environment
!source activate py3k
# Deactivate Python 3 environment
!source deactivate
# Update Anaconda
!conda update conda
# Update a package with Anaconda
!conda update ipython
# Update a package
!conda update scipy
# Update all packages
!conda update all
# Install specific version of a package
!conda install scipy=0.12.0
# Cleanup: Conda can accumulate a lot of disk space
# because it doesn’t remove old unused packages
!conda clean -p
# Cleanup tarballs which are kept for caching purposes
!conda clean -t
# Start IPython Notebook
ipython notebook
# Start IPython Notebook with built-in mode to work cleanly
# with matplotlib figures
ipython notebook --pylab inline
# Start IPython Notebook with a profile
ipython notebook --profile=dark-bg
# Load the contents of a file
%load dir/file.py
# Time execution of a Python statement or expression
%timeit
%%time
# Activate the interactive debugger
%debug
# Write the contents of the cell to a file
%writefile
# Run a cell via a shell command
%%script
# Run cells with bash in a subprocess
# This is a shortcut for %%script bash
%%bash
# Run cells with python2 in a subprocess
%%python2
# Run cells with python3 in a subprocess
%%python3
# Convert a notebook to a basic HTML file
!ipython nbconvert --to html --template basic file.ipynb
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
# Configure git
!git config --global user.name 'First Last'
!git config --global user.email 'name@domain.com'
!git init
# View status and log
!git status
!git log
# Add or remove from staging area
!git add [target]
!git reset [target file or commit]
!git reset --hard origin/master
# Automatically stage tracked files,
# including deleting the previously tracked files
# Does not add untracked files
!git add -u
# Delete files and stage them
!git rm [target]
# Commit
!git commit -m “Add commit message here”
# Add new origin
!git remote add origin https://github.com/donnemartin/ipython-data-notebooks.git
# Set to new origin
!git remote set-url origin https://github.com/donnemartin/pydatasnippets.git
# Push to master, -u saves config so you can just do "git push" afterwards
!git push -u origin master
!git push
# Diff files
!git diff HEAD
!git diff --staged
!git diff --cached
# Show log message of commit and diff
!git show $COMMIT
# Undo a file that has not been added
!git checkout — [target]
# Revert a commit
!git revert
# Undo a push and leave local repo intact
!git push -f origin HEAD^:master
# Undo commit but leave files and index
!git reset --soft HEAD~1
# Amend commit message of most recent change
!git commit --amend
!git push --force [branch]
# Take the dirty state of your working directory
# and save it on a stack of unfinished changes
!git stash
# Get list of stashes
!git stash list
# Apply the top stash, re-modifying the
# uncommitted files when the stash was saved
!git stash apply
# Apply a stash at the specified index
!git stash apply stash@{1}
# Create a branch
!git branch [branch]
# Check branches
!git branch
# Switch branches
!git checkout [branch]
# Merge branch to master
!git merge [branch]
# Delete branch
!git branch -d [branch]
# Clone
!git clone git@github.com:repo folder-name
!git clone https://donnemartin@bitbucket.org/donnemartin/tutorial.git
# Update a local repository with changes from a remote repository
# (pull down from master)
!git pull origin master
# Configuring a remote for a fork
!git remote add upstream [target]
# Set remote upstream
!git branch --set-upstream-to origin/branch
# Check remotes
!git remote -v
# Syncing a fork
!git fetch upstream
!git checkout master
!git merge upstream/master
# Create a file containing a patch
# git format-patch are like normal patch files, but they also carry information
# about the git commit that created the patch: the author, the date, and the
# commit log message are all there at the top of the patch.
!git format-patch origin/master
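# To apply a patch produced by format-patch while keeping the original author, date,
# and commit message, git am is the usual counterpart; the patch filename below is
# only a placeholder.
!git am 0001-example.patch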
# Clean up .git folder:
!git repack -a -d --depth=250 --window=250
# GitHub tutorial: http://try.github.io/levels/1/challenges/9
# BitBucket Setup
!cd /path/to/my/repo
!git init
!git remote add origin https://donnemartin@bitbucket.org/donnemartin/repo.git
!git push -u origin --all # pushes up the repo and its refs for the first time
!git push -u origin --tags # pushes up any tags
# Open Hatch missions
!git clone https://openhatch.org/git-mission-data/git/dmartin git_missions
# Update Ruby
!rvm get stable
# Reload Ruby (or open a new terminal)
!rvm reload
# List all known RVM installable rubies
!rvm list known
# List all installed Ruby versions
!rvm list
# Install a specific Ruby version
!rvm install 2.1.5
# Set Ruby version
!rvm --default ruby-1.8.7
!rvm --default ruby-2.1.5
# Check Ruby version
!ruby -v
!rvm --default ruby-2.1.5
# => The current folder will be generated into ./_site
!bundle exec jekyll build
# => A development server will run at http://localhost:4000/
# Auto-regeneration: enabled. Use `--no-watch` to disable.
!bundle exec jekyll serve
# Install
!pip install pelican
!pip install markdown
!pip install ghp-import
# Quick retup
!pelican-quickstart
# Run server
!make devserver
# Stop server
!make stopserver
# Run ghp-import on output folder
# Review https://pypi.python.org/pypi/ghp-import
# There's a "Big Fat Warning" section
!ghp-import output
# Update gh-pages (if using a project page)
!git push origin gh-pages
# Update gh-pages (if using a user or org page)
!git merge gh-pages master
# Check version of Django
!python -c "import django; print(django.get_version())"
# Create and setup a project
!django-admin startproject mysite
# Sync db
!python manage.py syncdb
# The migrate command looks at the INSTALLED_APPS setting and
# creates any necessary database tables according to the database
# settings in your mysite/settings.py file and the database
# migrations shipped with the app
!python manage.py migrate
# Run the dev server
!python manage.py runserver
!python manage.py runserver 8080
!python manage.py runserver 0.0.0.0:8000
# Create app
!python manage.py startapp [app_label]
# Run tests
!python manage.py test [app_label]
# Tell Django that you’ve made some changes to your models
# and that you’d like the changes to be stored as a migration.
!python manage.py makemigrations [app_label]
# Take migration names and returns their SQL
!python manage.py sqlmigrate [app_label] [migration_number]
# Checks for any problems in your project without making
# migrations or touching the database.
!python manage.py check
# Create a user who can login to the admin site
!python manage.py createsuperuser
# Locate Django source files
!python -c "import sys; sys.path = sys.path[1:]; import django; print(django.__path__)"
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h2 id="ipython-notebook">IPython Notebook</h2>
Step2: | Command | Description |
Step3: <h2 id="git">Git</h2>
Step4: <h2 id="ruby">Ruby</h2>
Step5: <h2 id="jekyll">Jekyll</h2>
Step6: Build and run the local Jekyll server
Step7: <h2 id="pelican">Pelican</h2>
Step8: Django
|
14,404 | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
return x/255
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
lb.fit(np.array([i for i in range(10)]))
print(lb.classes_)
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
return lb.transform(x)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name="x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.float32, shape=(None, n_classes), name="y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32, name="keep_prob")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
filter_weights = tf.Variable(tf.truncated_normal((conv_ksize[0], conv_ksize[0], x_tensor.get_shape().as_list()[3], conv_num_outputs), mean=0.0, stddev=0.05))
filter_bias = tf.Variable(tf.zeros(conv_num_outputs))
strides = [1, conv_strides[0], conv_strides[1], 1]
padding = 'SAME'
conv = tf.nn.conv2d(x_tensor, filter_weights, strides, padding)
conv = tf.nn.bias_add(conv, filter_bias)
conv = tf.nn.relu(conv)
max_pool_ksize = [1, pool_ksize[0], pool_ksize[1], 1]
max_pool_strides = [1, pool_strides[0], pool_strides[1], 1]
conv = tf.nn.max_pool(conv, max_pool_ksize, max_pool_strides, padding)
return conv
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
num_features = x_tensor.get_shape()[1:4].num_elements()
flattened = tf.reshape(x_tensor, [-1, num_features])
return flattened
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
shape_list = x_tensor.get_shape().as_list()
weights = tf.Variable(tf.truncated_normal((shape_list[1], num_outputs), mean=0.0, stddev=0.05))
bias = tf.Variable(tf.zeros(num_outputs))
fully_connected = tf.matmul(x_tensor, weights) + bias
# should I be applying an activation function here or no? it passes either way...
return tf.nn.relu(fully_connected)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
shape_list = x_tensor.get_shape().as_list()
weights = tf.Variable(tf.truncated_normal((shape_list[1], num_outputs), mean=0.0, stddev=0.05))
bias = tf.Variable(tf.zeros(num_outputs))
fully_connected = tf.matmul(x_tensor, weights) + bias
# return without the activation here
return fully_connected
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
x = conv2d_maxpool(x, 16, (4, 4), (1, 1), (2, 2), (2, 2))
x = tf.nn.dropout(x, 0.8)
x = conv2d_maxpool(x, 24, (4, 4), (1, 1), (2, 2), (2, 2))
x = tf.nn.dropout(x, 0.8)
x = conv2d_maxpool(x, 32, (4, 4), (1, 1), (2, 2), (2, 2))
x = tf.nn.dropout(x, 0.8)
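    # Note: these tf.nn.dropout calls hard-code a keep probability of 0.8 rather than
    # using the keep_prob placeholder, so the keep_probability value set later has no
    # effect and dropout stays active even when keep_prob is fed as 1.0 at evaluation.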
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x = flatten(x)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
x = fully_conn(x, 128)
x = tf.nn.dropout(x, 0.8)
x = fully_conn(x, 64)
x = tf.nn.dropout(x, 0.8)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
return output(x, 10)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
session.run(optimizer, feed_dict={x:feature_batch, y:label_batch, keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
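    # Note: this implementation computes loss and accuracy on the full validation set
    # (valid_features / valid_labels) and does not use the feature_batch / label_batch
    # arguments that are passed in.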
cost_stat = session.run(cost, {x:valid_features, y:valid_labels, keep_prob: 1.0})
accuracy_stat = session.run(accuracy, {x:valid_features, y:valid_labels, keep_prob: 1.0})
print("Loss: {0:>6}, Accuracy: {1:>6.1%}".format(cost_stat, accuracy_stat))
# TODO: Tune Parameters
epochs = 50
batch_size = 256
keep_probability = 0.6
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
14,405 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
def well2d(x, y, nx, ny, L=1.0):
Compute the 2d quantum well wave function.
scalarfield=(2/L*np.sin(nx*np.pi*x/L)*np.sin(ny*np.pi*y/L))
well=scalarfield
return well
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,)
x=np.linspace(0,1,100)
y=np.linspace(0,1,100)
psi1=well2d(x,y,3,2,1.0)
psi2=well2d(x,y,3,2,1.0)
X,Y=np.meshgrid(psi1,psi2) #help from Making Contour Plots with Python on http://bulldog2.redlands.edu/facultyfolder/deweerd/tutorials/Tutorial-ContourPlot.pdf
plt.contour(Y)
assert True # use this cell for grading the contour plot
# YOUR CODE HERE
plt.pcolor(Y) #worked with Jack Porter
assert True # use this cell for grading the pcolor plot
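# A minimal sketch (not one of the graded cells) of the more conventional approach:
# build the 2-D grid with np.meshgrid first, evaluate the wavefunction on that grid,
# and contour the result. Variable names here are illustrative.
xg, yg = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
psi_grid = well2d(xg, yg, 3, 2, 1.0)
plt.contourf(xg, yg, psi_grid)
plt.colorbar()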
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Contour plots of 2d wavefunctions
Step3: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction
Step4: Next make a visualization using one of the pcolor functions
|
14,406 | <ASSISTANT_TASK:>
Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
from modsim import System
def make_system(T_init, volume, r, t_end):
return System(T_init=T_init,
T_final=T_init,
volume=volume,
r=r,
t_end=t_end,
T_env=22,
t_0=0,
dt=1)
coffee = make_system(T_init=90, volume=300, r=0.01, t_end=30)
def change_func(T, t, system):
r, T_env, dt = system.r, system.T_env, system.dt
return -r * (T - T_env) * dt
change_func(coffee.T_init, 0, coffee)
from modsim import linrange
from modsim import TimeSeries
def run_simulation(system, change_func):
t_array = linrange(system.t_0, system.t_end, system.dt)
n = len(t_array)
series = TimeSeries(index=t_array)
series.iloc[0] = system.T_init
for i in range(n-1):
t = t_array[i]
T = series.iloc[i]
series.iloc[i+1] = T + change_func(T, t, system)
system.t_end = t_array[-1]
system.T_final = series.iloc[-1]
return series
results = run_simulation(coffee, change_func)
results.head()
from modsim import decorate
results.plot(label='coffee')
decorate(xlabel='Time (minute)',
ylabel='Temperature (C)',
title='Coffee Cooling')
coffee.T_final
def func(x):
return (x-1) * (x-2) * (x-3)
from scipy.optimize import root_scalar
res = root_scalar(func, bracket=[1.5, 2.5])
res
res.root
res = root_scalar(func, bracket=[2.5, 3.5])
res.root
def error_func(r, system):
system.r = r
results = run_simulation(system, change_func)
return system.T_final - 70
coffee = make_system(T_init=90, volume=300, r=0.01, t_end=30)
error_func(0.01, coffee)
error_func(0.02, coffee)
res = root_scalar(error_func, coffee, bracket=[0.01, 0.02])
res
r_coffee = res.root
r_coffee
coffee.r = res.root
run_simulation(coffee, change_func)
coffee.T_final
# Solution
milk = make_system(T_init=5, t_end=15, r=0.1, volume=50)
results_milk = run_simulation(milk, change_func)
milk.T_final
# Solution
results_milk.plot(color='C1', label='milk')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
# Solution
def error_func2(r, system):
system.r = r
results = run_simulation(system, change_func)
return system.T_final - 20
# Solution
root_scalar(error_func2, milk, bracket=[0.1, 0.2])
# Solution
run_simulation(milk, change_func)
milk.T_final
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So far the systems we have studied have been physical in the sense that they exist in the world, but they have not been physics, in the sense of what physics classes are usually about. In the next few chapters, we'll do some physics, starting with thermal systems, that is, systems where the temperature of objects changes as heat transfers from one to another.
Step2: The values of T_init, volume, and t_end come from the statement of the problem
Step3: I chose the value of r arbitrarily for now; we will figure out how to estimate it soon.
Step4: We can test it with the initial temperature of the coffee, like this
Step5: With dt=1 minute, the temperature drops by about 0.7 °C/min, at least for this value of r.
Step6: This function is similar to previous versions of run_simulation.
Step7: The result is a TimeSeries with one row per time step.
Step8: Here's what the results look like.
Step9: The temperature after 30 minutes is 72.3 °C, which is a little higher than what's stated in the problem, 70 °C.
Step10: By trial and error, we could find the value of r where the final temperature is precisely 70 °C.
Step11: Now we call root_scalar like this
Step12: The first argument is the function whose roots we want. The second
Step13: If we provide a different interval, we find a different root.
Step14: If the interval doesn't contain a root, you'll get a ValueError
Step15: This is called an "error function" because it returns the
Step16: The result is an error of 2.3 °C, because the final temperature with
Step17: With r=0.02, the error is about -11°C, which means that the final temperature is too low. So we know that the correct value must be in between.
Step18: The first argument is the error function.
Step19: In this example, r_coffee turns out to be about 0.0115, in units of min$^{-1}$ (inverse minutes).
Step20: The final temperature is very close to 70 °C.
Step21: Exercise
|
14,407 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
#Review the documentation for NumPy's random module:
np.random?
#print 5 uniformly distributed numbers between 0 and 1
print(np.random.random(5))
#print another 5 - should be different
print(np.random.random(5))
#print 5 uniformly distributed integers between 1 and 10
print(np.random.randint(1,11,5))
#print another 5 - should be different
print(np.random.randint(1,11,5))
#If you want to save a random number for future use:
z=np.random.random()
print("The number is ",z)
#Rerun random
print(np.random.random())
print("The number is still",z)
np.random.seed(42)
for i in range(4):
print(np.random.random())
np.random.seed(42)
for i in range(4):
print(np.random.random())
np.random.seed(39)
for i in range(4):
print(np.random.random())
for i in range(10):
if np.random.random()<0.2:
print("Heads")
else:
print("Tails")
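# A minimal sketch for the rolling-dice exercise mentioned in the notes below
# (assumes two six-sided dice and simply counts how often the roll totals 7):
rolls = np.random.randint(1, 7, size=(10000, 2)).sum(axis=1)
print("Fraction of rolls totalling 7:", np.mean(rolls == 7))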
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Random Processes in Physics
Step2: Some basic functions to point out (we'll get to others in a bit)
Step3: Notice you have to use 1-11 for the range. Why?
Step4: In Class Exercise - Rolling Dice
Step5: You might want to do this for
|
14,408 | <ASSISTANT_TASK:>
Python Code:
number_list = [1, 2, 4, 8, 16, 32]
the_pythons = ["Graham", "Terry", "Michael", "Eric", "Terry", "John"]
mixed = [1, "Terry", 4]
print (mixed)
monty = ("Graham", "Terry", "Michael", "Eric", "Terry", "John")
# the entire tuple
print (monty)
# one element at a time
for name in monty:
print(name)
# indexing
print(the_pythons[2])
print(monty[2])
# monty is the tuple, and the_pythons is a list
print (monty)
print (list(monty))
print (the_pythons)
print (tuple(the_pythons))
# Safer than constants, because it is enforced by interpreter
CONVERSION_CONSTANT = 5/9
my_dict = {}
my_dict[3.14] = "pi"
my_dict["pi"] = 3.14159
my_dict[(1,2)] = "x,y coordinates"
my_dict[(2,3)] = "x,y coordinates"
print(my_dict)
my_dict[(1,2)] = [4, 5, 6, 7]
print(my_dict)
len(my_dict)
phone_book = {"Graham":"555-111",
"Terry": "555-2222",
"Michael": "555-3333"}
phone_book
phone_book['Michael']
my_dict[(1,2)]
phone_book['Wanda']
# Using 'in'
if "Michael" in phone_book:
    print(phone_book["Michael"])
# Using 'not in'
if "Wanda" not in phone_book:
print("Fish don't need phone numbers")
print(phone_book)
# Eric, Terry, John
phone_book["Eric"] = "555-4444"
phone_book["Terry"] = "555-5555"
phone_book["John"] = "555-6666"
del phone_book["John"]
print(phone_book)
if 'Michael' in phone_book:
del phone_book['Michael']
print(phone_book)
if '555-4444' in phone_book:
print ("Can match values too!")
for name in phone_book:
print (name, phone_book[name])
phone_book.items()
phone_book.keys()
phone_book.values()
if '555-4444' in phone_book.values():
print("We can match values too")
even = False
if even = True:
print("It is even!")
154 >= 300 != False
def is_equal(t1, t2):
# return t1 == t2
return t1.sort() == t2.sort()
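    # Discussion point: list.sort() sorts in place and returns None, so this comparison
    # is always None == None (True) regardless of the list contents; comparing
    # sorted(t1) == sorted(t2) would actually compare the values.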
list1 = ["name", "age", "temp"]
list2 = ["name", "temp", "age"]
if is_equal(list1, list2):
print("Same!")
else:
print ("Different!")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tuple
Step2: [] brackets or square brackets
Step3: Why?
Step4: Dictionaries
Step5: Retrieving a Value from a Dictionary
Step6: Testing for Value in a Dictionary
Step7: Adding Elements to a Dictionary
Step8: Deleting Elements
Step9: Putting it together
Step10: Iterating over a Dictionary
Step11: Dictionary Methods
Step12: Midterm Discussion
|
14,409 | <ASSISTANT_TASK:>
Python Code:
strings = "stressed"
print(strings[::-1])
strings1 = u"パタトクカシーー"
print(strings1[::2])
strings_p = u"パトカー"
strings_t = u"タクシー"
strings_sum = ''
for p, t in zip(strings_p, strings_t):
strings_sum += p + t
print(strings_sum)
strings3 = "Now I need a drink, alcoholic of course, after the heavy lectures involving quantum mechanics."
count_list = [len(i) for i in strings3.split(' ')]
count_list
strings4 = "Hi He Lied Because Boron Could Not Oxidize Fluorine. New Nations Might Also Sign Peace Security Clause. Arthur King Can."
strings4 = strings4.replace('.', '')
strings4 = strings4.split(' ')
dictionary = {}
for i in range(len(strings4)):
if i+1 in [1, 5, 6, 7, 8, 9, 15, 16, 19]:
dictionary.update({strings4[i][:1]: strings4[i]})
else:
dictionary.update({strings4[i][:2]: strings4[i]})
print(dictionary)
def ngram(sequence, n, mode='c'):
if mode == 'c':
return [sequence[i:i+n] for i in range(len(sequence)-1)]
elif mode == 'w':
sequence = [s.strip(',.') for s in sequence.split(' ')] # スペースや記号を除去した単語リストの生成
return [tuple(sequence[i:i+n]) for i in range(len(sequence)-1)]
sequence = "I am an NLPer"
print(ngram(sequence, 2))
print(ngram(sequence, 2, 'w'))
X = set(ngram('paraparaparadise', 2))
Y = (ngram('paragraph', 2))
print(X.intersection(Y))
print(X.union(Y))
print(X.difference(Y))
print('se' in X)
print('se' in Y)
#-*- coding:utf-8 -*-
def print_template(x, y, z):
return u'%s時の%sは%s' % (x, y, z)
template = print_template(12, u'気温', 22.4)
print(template)
def cipher(sequence):
return "".join((map(str, [chr(219-ord(i)) if i.islower() else i for i in sequence])))
strings = "I am an NLPer"
encryption = cipher(strings)
decryption = cipher(encryption)
print(encryption)
print(decryption)
import random
sequence = "I couldn't believe that I could actually understand what I was reading : the phenomenal power of the human mind."
[s[0]+"".join(map(str, random.sample(s[1:-1], len(s)-2)))+s[-1] if len(s) >= 5 \
else str(s) for s in sequence.split(' ')]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 01. パタトクカシーー
Step2: 02. 「パトカー」+「タクシー」=「パタトクカシーー」
Step3: 03. Pi
Step4: 04. Element symbols
Step5: 05. n-gram
Step6: 06. Sets
Step7: 07. Sentence generation from a template
Step8: 08. Cipher text
Step9: 09. Typoglycemia
|
14,410 | <ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('..')
sys.path.append('../geostatsmodels')
from geostatsmodels import utilities, variograms, model, kriging, geoplot
import matplotlib.pyplot as plt
import numpy as np
import pandas
z = utilities.readGeoEAS('../data/ZoneA.dat')
P = z[:,[0,1,3]]
pt = [2000, 4700]
plt.scatter(P[:,0], P[:,1], c=P[:,2], cmap=geoplot.YPcmap)
plt.title('Zone A Subset % Porosity')
plt.colorbar()
xmin, xmax = 0, 4250
ymin, ymax = 3200, 6250
plt.xlim(xmin,xmax)
plt.ylim(ymin,ymax)
for i in range(len(P[:,2])):
x, y, por = P[i]
if (x < xmax) & (y > ymin) & (y < ymax):
plt.text( x+100, y, '{:4.2f}'.format( por ) )
plt.scatter(pt[0], pt[1], marker='x', c='k')
plt.text(pt[0] + 100 , pt[1], '?')
plt.xlabel('Easting (m)')
plt.ylabel('Northing (m)');
tolerance = 250
lags = np.arange(tolerance, 10000, tolerance*2)
sill = np.var(P[:,2])
geoplot.semivariogram(P, lags, tolerance)
svm = model.semivariance(model.spherical, (4000, sill))
geoplot.semivariogram(P, lags, tolerance, model=svm)
covfct = model.covariance(model.spherical, (4000, sill))
kriging.simple(P, covfct, pt, N=6)
kriging.ordinary(P, covfct, pt, N=6)
est, kstd = kriging.krige(P, covfct, [[2000,4700],[2100,4700],[2000,4800],[2100,4800]], 'simple', N=6)
est
kstd
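# A small illustrative extension (our own sketch, assuming krige() accepts any list of
# [x, y] points as in the call above): estimate porosity on a coarse grid around the
# point of interest and contour the result.
gx, gy = np.meshgrid(np.arange(1800, 2300, 100), np.arange(4500, 5000, 100))
grid_pts = [[x, y] for x, y in zip(gx.ravel(), gy.ravel())]
grid_est, grid_kstd = kriging.krige(P, covfct, grid_pts, 'simple', N=6)
plt.contourf(gx, gy, np.asarray(grid_est).reshape(gx.shape))
plt.colorbar()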
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll read the data from ZoneA.dat.
Step2: We want the first, second and fourth columns of the data set, representing the x and y spatial coordinates, and the porosity.
Step3: We'll be interested in determining the porosity at a point (2000,4700).
Step4: We can plot our region of interest as follows
Step5: We can determine the parameters for our model by looking at the semivariogram and trying to determine the appropriate range and sill.
Step6: The semivariogram plotting function, svplot(), plots sill as a dashed line, and the empirical semivariogram as determined from the data. It optionally plots a semivariance model.
Step7: We can pass a model to this function using the optional model argument and see it plotted in red.
Step8: The covariance modeling function will return a spherical covariance model that takes a distance as input and returns a covariance estimate. We've used the global variance of the porosity in ZoneA.dat as the sill.
Step9: We can then krige the data, using the covariance model, the point we are interested in, (2000,4700), and N=6 signifying that we only want to use the six nearest points. The output of the simple and ordinary kriging functions below is the kriging estimate, and the standard deviation of the kriging estimate.
|
14,411 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import vispy
import vispy.gloo as gloo
from vispy import app
from vispy.util.transforms import perspective, translate, rotate
# load the vispy bindings manually for the notebook which enables webGL
# %load_ext vispy
n = 100
a_position = np.random.uniform(-1, 1, (n, 3)).astype(np.float32)
a_id = np.random.randint(0, 30, (n, 1))
a_id = np.sort(a_id, axis=0).astype(np.float32)
VERT_SHADER =
uniform mat4 u_model;
uniform mat4 u_view;
uniform mat4 u_projection;
attribute vec3 a_position;
attribute float a_id;
varying float v_id;
void main (void) {
v_id = a_id;
gl_Position = u_projection * u_view * u_model * vec4(a_position,1.0);
}
FRAG_SHADER =
varying float v_id;
void main()
{
float f = fract(v_id);
// The second useless test is needed on OSX 10.8 (fuck)
if( (f > 0.0001) && (f < .9999) )
discard;
else
gl_FragColor = vec4(0,0,0,1);
}
class Canvas(app.Canvas):
# ---------------------------------
def __init__(self, size=None, show=True):
app.Canvas.__init__(self, keys='interactive', size=size)
self.program = gloo.Program(VERT_SHADER, FRAG_SHADER)
# Set uniform and attribute
self.program['a_id'] = gloo.VertexBuffer(a_id)
self.program['a_position'] = gloo.VertexBuffer(a_position)
self.translate = 5
self.view = translate((0, 0, -self.translate), dtype=np.float32)
self.model = np.eye(4, dtype=np.float32)
gloo.set_viewport(0, 0, self.physical_size[0], self.physical_size[1])
self.projection = perspective(45.0, self.size[0] /
float(self.size[1]), 1.0, 1000.0)
self.program['u_projection'] = self.projection
self.program['u_model'] = self.model
self.program['u_view'] = self.view
self.theta = 0
self.phi = 0
self.context.set_clear_color('white')
self.context.set_state('translucent')
self.timer = app.Timer('auto', connect=self.on_timer, start=True)
if show:
self.show()
# ---------------------------------
def on_key_press(self, event):
if event.text == ' ':
if self.timer.running:
self.timer.stop()
else:
self.timer.start()
# ---------------------------------
def on_timer(self, event):
self.theta += .5
self.phi += .5
self.model = np.dot(rotate(self.theta, (0, 0, 1)),
rotate(self.phi, (0, 1, 0)))
self.program['u_model'] = self.model
self.update()
# ---------------------------------
def on_resize(self, event):
gloo.set_viewport(0, 0, event.physical_size[0], event.physical_size[1])
self.projection = perspective(45.0, event.size[0] /
float(event.size[1]), 1.0, 1000.0)
self.program['u_projection'] = self.projection
# ---------------------------------
def on_mouse_wheel(self, event):
self.translate += event.delta[1]
self.translate = max(2, self.translate)
self.view = translate((0, 0, -self.translate))
self.program['u_view'] = self.view
self.update()
# ---------------------------------
def on_draw(self, event):
self.context.clear()
self.program.draw('line_strip')
c = Canvas(size=(300, 300))
# from vispy.app.backends.ipython import VispyWidget
# w = VispyWidget()
# c2 = Canvas(size=(300, 300), show=False)
# w.set_canvas(c2)
# w
c.timer.stop()
c.timer.start()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Jupyter Notebook backend demo
Step3: Every cell above was preparing our GL Canvas for operation. Now we will create the Canvas instance and because of the self.show() in our __init__ method our canvas will be shown and its timer started immediately.
Step4: We could also manually make a VispyWidget object and attach our canvas to it.
Step5: When timers are involved we can run the stop and start methods to turn them on/off and see the result in the widget displayed above.
|
14,412 | <ASSISTANT_TASK:>
Python Code:
send(IP(dst="1.2.3.4")/TCP(dport=502, options=[("MSS", 0)]))
ans = sr([IP(dst="8.8.8.8", ttl=(1, 8), options=IPOption_RR())/ICMP(seq=RandShort()), IP(dst="8.8.8.8", ttl=(1, 8), options=IPOption_Traceroute())/ICMP(seq=RandShort()), IP(dst="8.8.8.8", ttl=(1, 8))/ICMP(seq=RandShort())], verbose=False, timeout=3)[0]
ans.make_table(lambda (x, y): (", ".join(z.summary() for z in x[IP].options) or '-', x[IP].ttl, y.sprintf("%IP.src% %ICMP.type%")))
from scapy.all import *
packet = IP()/TCP()
Ether()/packet
>>> ls(IP, verbose=True)
version : BitField (4 bits) = (4)
ihl : BitField (4 bits) = (None)
tos : XByteField = (0)
len : ShortField = (None)
id : ShortField = (1)
flags : FlagsField (3 bits) = (0)
MF, DF, evil
frag : BitField (13 bits) = (0)
ttl : ByteField = (64)
proto : ByteEnumField = (0)
chksum : XShortField = (None)
src : SourceIPField (Emph) = (None)
dst : DestIPField (Emph) = (None)
options : PacketListField = ([])
p = Ether()/IP(dst="www.secdev.org")/TCP()
p.summary()
print p.dst # first layer that has an src field, here Ether
print p[IP].src # explicitly access the src field of the IP layer
# sprintf() is a useful method to display fields
print p.sprintf("%Ether.src% > %Ether.dst%\n%IP.src% > %IP.dst%")
print p.sprintf("%TCP.flags% %TCP.dport%")
[p for p in IP(ttl=(1,5))/ICMP()]
p = sr1(IP(dst="8.8.8.8")/UDP()/DNS(qd=DNSQR()))
p[DNS].an
r, u = srp(Ether()/IP(dst="8.8.8.8", ttl=(5,10))/UDP()/DNS(rd=1, qd=DNSQR(qname="www.example.com")))
r, u
# Access the first tuple
print r[0][0].summary() # the packet sent
print r[0][1].summary() # the answer received
# Access the ICMP layer. Scapy received a time-exceeded error message
r[0][1][ICMP]
wrpcap("scapy.pcap", r)
pcap_p = rdpcap("scapy.pcap")
pcap_p[0]
s = sniff(count=2)
s
sniff(count=2, prn=lambda p: p.summary())
import socket
sck = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # create an UDP socket
sck.connect(("8.8.8.8", 53)) # connect to 8.8.8.8 on 53/UDP
# Create the StreamSocket and gives the class used to decode the answer
ssck = StreamSocket(sck)
ssck.basecls = DNS
# Send the DNS query
ssck.sr1(DNS(rd=1, qd=DNSQR(qname="www.example.com")))
ans, unans = srloop(IP(dst=["8.8.8.8", "8.8.4.4"])/ICMP(), inter=.1, timeout=.1, count=100, verbose=False)
%matplotlib inline
ans.multiplot(lambda (x, y): (y[IP].src, (y.time, y[IP].id)), plot_xy=True)
pkt = IP() / UDP() / DNS(qd=DNSQR())
print repr(str(pkt))
print pkt.summary()
hexdump(pkt)
pkt.show()
pkt.canvas_dump()
ans, unans = traceroute('www.secdev.org', maxttl=15)
ans.world_trace()
ans = sr(IP(dst=["scanme.nmap.org", "nmap.org"])/TCP(dport=[22, 80, 443, 31337]), timeout=3, verbose=False)[0]
ans.extend(sr(IP(dst=["scanme.nmap.org", "nmap.org"])/UDP(dport=53)/DNS(qd=DNSQR()), timeout=3, verbose=False)[0])
ans.make_table(lambda (x, y): (x[IP].dst, x.sprintf('%IP.proto%/{TCP:%r,TCP.dport%}{UDP:%r,UDP.dport%}'), y.sprintf('{TCP:%TCP.flags%}{ICMP:%ICMP.type%}')))
class DNSTCP(Packet):
name = "DNS over TCP"
fields_desc = [ FieldLenField("len", None, fmt="!H", length_of="dns"),
PacketLenField("dns", 0, DNS, length_from=lambda p: p.len)]
# This method tells Scapy that the next packet must be decoded with DNSTCP
def guess_payload_class(self, payload):
return DNSTCP
# Build then decode a DNS message over TCP
DNSTCP(str(DNSTCP(dns=DNS())))
import socket
sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # create an TCP socket
sck.connect(("8.8.8.8", 53)) # connect to 8.8.8.8 on 53/TCP
# Create the StreamSocket and gives the class used to decode the answer
ssck = StreamSocket(sck)
ssck.basecls = DNSTCP
# Send the DNS query
ssck.sr1(DNSTCP(dns=DNS(rd=1, qd=DNSQR(qname="www.example.com"))))
from scapy.all import *
import argparse
parser = argparse.ArgumentParser(description="A simple ping6")
parser.add_argument("ipv6_address", help="An IPv6 address")
args = parser.parse_args()
print sr1(IPv6(dst=args.ipv6_address)/ICMPv6EchoRequest(), verbose=0).summary()
# Specify the Wi-Fi monitor interface
#conf.iface = "mon0" # uncomment to test
# Create an answering machine
class ProbeRequest_am(AnsweringMachine):
function_name = "pram"
# The fake mac of the fake access point
mac = "00:11:22:33:44:55"
def is_request(self, pkt):
return Dot11ProbeReq in pkt
def make_reply(self, req):
rep = RadioTap()
# Note: depending on your Wi-Fi card, you might need a different header than RadioTap()
rep /= Dot11(addr1=req.addr2, addr2=self.mac, addr3=self.mac, ID=RandShort(), SC=RandShort())
rep /= Dot11ProbeResp(cap="ESS", timestamp=time.time())
rep /= Dot11Elt(ID="SSID",info="Scapy !")
rep /= Dot11Elt(ID="Rates",info='\x82\x84\x0b\x16\x96')
rep /= Dot11Elt(ID="DSset",info=chr(10))
        return rep
# Start the answering machine
#ProbeRequest_am()() # uncomment to test
from scapy.all import *
import nfqueue, socket
def scapy_cb(i, payload):
s = payload.get_data() # get and parse the packet
p = IP(s)
# Check if the packet is an ICMP Echo Request to 8.8.8.8
if p.dst == "8.8.8.8" and ICMP in p:
# Delete checksums to force Scapy to compute them
del(p[IP].chksum, p[ICMP].chksum)
# Set the ICMP sequence number to 0
p[ICMP].seq = 0
# Let the modified packet go through
ret = payload.set_verdict_modified(nfqueue.NF_ACCEPT, str(p), len(p))
else:
# Accept all packets
payload.set_verdict(nfqueue.NF_ACCEPT)
# Get an NFQUEUE handler
q = nfqueue.queue()
# Set the function that will be call on each received packet
q.set_callback(scapy_cb)
# Open the queue & start parsing packes
q.fast_open(2807, socket.AF_INET)
q.try_run()
class TCPScanner(Automaton):
@ATMT.state(initial=1)
def BEGIN(self):
pass
@ATMT.state()
def SYN(self):
print "-> SYN"
@ATMT.state()
def SYN_ACK(self):
print "<- SYN/ACK"
raise self.END()
@ATMT.state()
def RST(self):
print "<- RST"
raise self.END()
@ATMT.state()
def ERROR(self):
print "!! ERROR"
raise self.END()
@ATMT.state(final=1)
def END(self):
pass
@ATMT.condition(BEGIN)
def condition_BEGIN(self):
raise self.SYN()
@ATMT.condition(SYN)
def condition_SYN(self):
if random.randint(0, 1):
raise self.SYN_ACK()
else:
raise self.RST()
@ATMT.timeout(SYN, 1)
def timeout_SYN(self):
raise self.ERROR()
TCPScanner().run()
TCPScanner().run()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2_ Adanced firewalking using IP options is sometimes useful to perform network enumeration. Here is more complicate one-liner
Step2: Now that, we've got your attention, let's start the tutorial !
Step3: First steps
Step4: This last output displays the packet summary. Here, Scapy automatically filled the Ethernet type as well as the IP protocol field.
Step5: Let's create a new packet to a specific IP destination. With Scapy, each protocol field can be specified. As shown in the ls() output, the interesting field is dst.
Step6: There are not many differences with the previous example. However, Scapy used the specific destination to perform some magic tricks !
Step7: Scapy uses default values that work most of the time. For example, TCP() is a SYN segment to port 80.
Step8: Moreover, Scapy has implicit packets. For example, they are useful to make the TTL field value vary from 1 to 5 to mimic traceroute.
Step9: Sending and receiving
Step10: Another alternative is the sr() function. Like srp1(), the sr1() function can be used for layer 2 packets.
Step11: sr() sent a list of packets, and returns two variables, here r and u, where
Step12: With Scapy, list of packets, such as r or u, can be easily written to, or read from PCAP files.
Step13: Sniffing the network is a straightforward as sending and receiving packets. The sniff() function returns a list of Scapy packets, that can be manipulated as previously described.
Step14: sniff() has many arguments. The prn one accepts a function name that will be called on received packets. Using the lambda keyword, Scapy could be used to mimic the tshark command behavior.
Step15: Alternatively, Scapy can use OS sockets to send and receive packets. The following example assigns an UDP socket to a Scapy StreamSocket, which is then used to query www.example.com IPv4 address.
Step16: Visualization
Step17: Then we can use the results to plot the IP id values.
Step18: The str() constructor can be used to "build" the packet's bytes as they would be sent on the wire.
Step19: Since some people cannot read this representation, Scapy can
Step20: "hexdump" the packet's bytes
Step21: dump the packet, layer by layer, with the values for each field
Step22: render a pretty and handy dissection of the packet
Step23: Scapy has a traceroute() function, which basically runs a sr(IP(ttl=(1..30)) and creates a TracerouteResult object, which is a specific subclass of SndRcvList().
Step24: The result can be plotted with .world_trace() (this requires GeoIP module and data, from MaxMind)
Step25: The PacketList.make_table() function can be very helpful. Here is a simple "port scanner"
Step26: Implementing a new protocol
Step27: This new packet definition can be direcly used to build a DNS message over TCP.
Step28: Modifying the previous StreamSocket example to use TCP allows to use the new DNSCTP layer easily.
Step29: Scapy as a module
Step30: Answering machines
Step31: Cheap Man-in-the-middle with NFQUEUE
Step32: Automaton
|
14,413 | <ASSISTANT_TASK:>
Python Code:
%%capture --no-stderr
!pip3 install kfp --upgrade
import kfp.components as comp
dataproc_submit_spark_job_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataproc/submit_spark_job/component.yaml')
help(dataproc_submit_spark_job_op)
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
SPARK_FILE_URI = 'file:///usr/lib/spark/examples/jars/spark-examples.jar'
MAIN_CLASS = 'org.apache.spark.examples.SparkPi'
ARGS = ['1000']
EXPERIMENT_NAME = 'Dataproc - Submit Spark Job'
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc submit Spark job pipeline',
description='Dataproc submit Spark job pipeline'
)
def dataproc_submit_spark_job_pipeline(
project_id = PROJECT_ID,
region = REGION,
cluster_name = CLUSTER_NAME,
main_jar_file_uri = '',
main_class = MAIN_CLASS,
args = json.dumps(ARGS),
spark_job=json.dumps({ 'jarFileUris': [ SPARK_FILE_URI ] }),
job='{}',
wait_interval='30'
):
dataproc_submit_spark_job_op(
project_id=project_id,
region=region,
cluster_name=cluster_name,
main_jar_file_uri=main_jar_file_uri,
main_class=main_class,
args=args,
spark_job=spark_job,
job=job,
wait_interval=wait_interval)
pipeline_func = dataproc_submit_spark_job_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the component using KFP SDK
Step2: Sample
Step3: Example pipeline that uses the component
Step4: Compile the pipeline
Step5: Submit the pipeline for execution
|
14,414 | <ASSISTANT_TASK:>
Python Code:
li = ["this", "is", "a", "list"]
print(li)
print(li[1:3]) # Print element 1 (inclusive) to 3 (exclusive)
print(li[2:]) # Print element 2 and everything after that
print(li[:-1]) # Print everything BEFORE element -1 (the last one)
import numpy as np
x = np.array([1, 2, 3, 4, 5])
print(x)
print(x[1:3])
print(x[2:])
print(x[:-1])
python_matrix = [ [1, 2, 3], [4, 5, 6], [7, 8, 9] ]
print(python_matrix)
numpy_matrix = np.array(python_matrix)
print(numpy_matrix)
print(python_matrix) # The full list-of-lists
print(python_matrix[0]) # The inner-list at the 0th position of the outer-list
print(python_matrix[0][0]) # The 0th element of the 0th inner-list
print(numpy_matrix)
print(numpy_matrix[0])
print(numpy_matrix[0, 0]) # Note the comma-separated format!
x = np.array([ [1, 2, 3], [4, 5, 6], [7, 8, 9] ])
print(x)
print()
print(x[:, 1]) # Take ALL of axis 0, and one index of axis 1.
video = np.empty(shape = (1920, 1080, 5000))
print("Axis 0 length: {}".format(video.shape[0])) # How many rows?
print("Axis 1 length: {}".format(video.shape[1])) # How many columns?
print("Axis 2 length: {}".format(video.shape[2])) # How many frames?
print(video.ndim)
del video
tensor = np.empty(shape = (2, 640, 480, 360, 100))
print(tensor.shape)
# Axis 0: color channel--used to differentiate between fluorescent markers
# Axis 1: height--same as before
# Axis 2: width--same as before
# Axis 3: depth--capturing 3D depth at each time interval, like a 3D movie
# Axis 4: frame--same as before
print(tensor.size)
del tensor
example = np.empty(shape = (3, 5, 9))
print(example.shape)
sliced = example[0] # Indexed the first axis.
print(sliced.shape)
sliced_again = example[0, 0] # Indexed the first and second axes.
print(sliced_again.shape)
x = np.array([1, 2, 3, 4, 5])
x += 10
print(x)
zeros = np.zeros(shape = (3, 4))
ones = 1
zeros += ones
print(zeros)
x = np.zeros(shape = (3, 3))
y = np.ones(4)
x + y
x = np.zeros(shape = (3, 4))
y = np.array([1, 2, 3, 4])
z = x + y
print(z)
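# Extra illustration (added; assumes NumPy >= 1.10): broadcasting can also be made
# explicit -- np.broadcast_to stretches an array to a target shape without copying data.
row = np.array([1, 2, 3, 4])
print(np.broadcast_to(row, (3, 4)))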
x = np.random.standard_normal(size = (7, 4))
print(x)
mask = x < 0
print(mask)
x[mask] = 0
print(x)
mask = (x < 1) & (x > 0.5) # True for any value less than 1 but greater than 0.5
x[mask] = 99
print(x)
matrix = np.empty(shape = (8, 4))
for i in range(8):
matrix[i] = i # Broadcasting is happening here!
print(matrix)
indices = np.array([7, 0, 5, 2])
print(matrix[indices])
matrix = np.arange(32).reshape((8, 4))
print(matrix) # This 8x4 matrix has integer elements that increment by 1 column-wise, then row-wise.
indices = ( np.array([1, 7, 4]), np.array([3, 0, 1]) ) # This is a tuple of 2 NumPy arrays!
print(matrix[indices])
( np.array([1, 7, 4]), np.array([3, 0, 1]) )
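# Extra illustration (added): the paired index arrays above pick out individual
# elements; to grab a full rows-by-columns block instead, np.ix_ builds an open mesh.
print(matrix[np.ix_([1, 7, 4], [3, 0, 1])])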
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: With NumPy arrays, all the same functionality you know and love from lists is still there.
Step2: These operations all work whether you're using Python lists or NumPy arrays.
Step3: To build the NumPy equivalent, you can basically just feed the Python list-matrix into the NumPy array method
Step4: The real difference, though, comes with actually indexing these elements. With Python lists, you can index individual elements only in this way
Step5: With NumPy arrays, you can use that same notation...or you can use comma-separated indices
Step6: It's not earth-shattering, but enough to warrant a heads-up.
Step7: Here's a great visual summary of slicing NumPy arrays, assuming you're starting from an array with shape (3, 3)
Step8: We know video is 3D because we can also access its ndim attribute.
Step9: Another example--to go straight to cutting-edge academic research--is 3D video microscope data of multiple tagged fluorescent markers. This would result in a five-axis NumPy object
Step10: We can also ask how many elements there are total, using the size attribute
Step11: These are extreme examples, but they're to illustrate how flexible NumPy arrays are.
Step12: Part 2
Step13: how does Python know that you want to add the scalar value 10 to each element of the vector x? Because (in a word) broadcasting.
Step14: In this example, the scalar value 1 is broadcast to all the elements of zeros, converting the operation to element-wise addition.
Step15: But on some intuitive level, this hopefully makes sense
Step16: In this example, the shape of x is (3, 4). The shape of y is just 4. Their trailing axes are both 4, therefore the "smaller" array will be broadcast to fit the size of the larger array, and the operation (addition, in this case) is performed element-wise.
Step17: This is randomly generated data, yes, but it could easily be 7 data points in 4 dimensions. That is, we have 7 observations of variables with 4 descriptors. Perhaps it's 7 people who are described by their height, weight, age, and 40-yard dash time. Or it's a matrix of data on 7 video games, each described by their PC Gamer rating, Steam downloads count, average number of active players, and total cheating complaints.
Step18: Now, we can use our mask to access only the indices we want to set to 0.
Step19: voilà! Every negative number has been set to 0, and all the other values were left unchanged. Now we can continue with whatever analysis we may have had in mind.
Step20: Fancy Indexing
Step21: We have 8 rows and 4 columns, where each row is a vector of the same value repeated across the columns, and that value is the index of the row.
Step22: Ta-daaa! Pretty spiffy!
Step23: Ok, this will take a little explaining, bear with me
|
14,415 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
x = np.arange(25)
x
import dask.array as da
x = da.arange(25, chunks=(5,))
y = x ** 2
y
y.visualize()
da.sqrt(x)[-1].visualize()
x = da.arange(250, chunks=(5,))
x.visualize()
x = da.ones((15, 15), chunks=(5,5))
x.sum(axis=1).visualize()
import dask.multiprocessing
y.compute(get = dask.multiprocessing.get)
import dask.dataframe as dd
cols = ['square_id', 'timestamp', 'country_code',
'sms_in', 'sms_out','call_in','call_out', 'internet']
dtypes = {'square_id': int, 'timestamp': int, 'countrycode': int,
'sms_in': float,'sms_out': float, 'call_in': float, 'call_out': float, 'internet': float}
df = dd.read_csv?
df = dd.read_csv
df_a = dd.read_csv('data/split/*.csv', header=0, names=cols, dtype=dtypes, sep="\t")
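# Added note: if the underlying files are actually space-delimited, as the note below
# points out, an explicit whitespace separator is needed -- hypothetical fix:
# df_b = dd.read_csv('data/split/*.csv', header=0, names=cols, dtype=dtypes,
#                    delim_whitespace=True)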
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h1>MISSING SEPERATOR ARGS FOR SPACE DELIMITED FILE!!!</h1>
|
14,416 | <ASSISTANT_TASK:>
Python Code:
import crpropa
class ObserverPlane(crpropa.ObserverFeature):
Detects all particles after crossing the plane. Defined by position (any
point in the plane) and vectors v1 and v2.
def __init__(self, position, v1, v2):
crpropa.ObserverFeature.__init__(self)
# calculate three points of a plane
self.__v1 = v1
self.__v2 = v2
self.__x0 = position
def distanceToPlane(self, X):
Always positive for one side of plane and negative for the other side.
dX = np.asarray([X.x - self.__x0[0], X.y - self.__x0[1], X.z - self.__x0[2]])
V = np.linalg.det([self.__v1, self.__v2, dX])
return V
def checkDetection(self, candidate):
currentDistance = self.distanceToPlane(candidate.current.getPosition())
previousDistance = self.distanceToPlane(candidate.previous.getPosition())
candidate.limitNextStep(abs(currentDistance))
if np.sign(currentDistance) == np.sign(previousDistance):
return crpropa.NOTHING
else:
return crpropa.DETECTED
from crpropa import Mpc, nG, EeV
import numpy as np
turbSpectrum = crpropa.SimpleTurbulenceSpectrum(Brms=1*nG, lMin = 2*Mpc, lMax=5*Mpc, sIndex=5./3.)
gridprops = crpropa.GridProperties(crpropa.Vector3d(0), 128, 1 * Mpc)
BField = crpropa.SimpleGridTurbulence(turbSpectrum, gridprops)
m = crpropa.ModuleList()
m.add(crpropa.PropagationCK(BField, 1e-4, 0.1 * Mpc, 5 * Mpc))
m.add(crpropa.MaximumTrajectoryLength(25 * Mpc))
# Observer
out = crpropa.TextOutput("sheet.txt")
o = crpropa.Observer()
# The Observer feature has to be created outside of the class attribute
# o.add(ObserverPlane(...)) will not work for custom python modules
plo = ObserverPlane(np.asarray([0., 0, 0]) * Mpc, np.asarray([0., 1., 0.]) * Mpc, np.asarray([0., 0., 1.]) * Mpc)
o.add(plo)
o.setDeactivateOnDetection(False)
o.onDetection(out)
m.add(o)
# source setup
source = crpropa.Source()
source.add(crpropa.SourcePosition(crpropa.Vector3d(0, 0, 0) * Mpc))
source.add(crpropa.SourceIsotropicEmission())
source.add(crpropa.SourceParticleType(crpropa.nucleusId(1, 1)))
source.add(crpropa.SourceEnergy(1 * EeV))
m.run(source, 1000)
out.close()
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
import pylab as plt
ax = plt.subplot(111, projection='3d')
data = plt.loadtxt('sheet.txt')
ax.scatter(data[:,5], data[:,6], data[:,7] )
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
ax.set_xlim(20,-20)
ax.set_ylim(20,-20)
ax.set_zlim(20,-20)
ax.view_init(25, 95)
bins = np.linspace(-20,20, 50)
plt.hist(data[:,5], bins=bins, label='X', histtype='step')
plt.hist(data[:,6], bins=bins, label='Y', histtype='step')
plt.hist(data[:,7], bins=bins, label='Z', histtype='step')
plt.legend()
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Custom Observer
Step3: As test, we propagate some particles in a random field with a sheet observer
Step4: and plot the final position of the particles in 3D
Step5: or as a histogram. Note the width of the X distribution, which is due to the particles being detected after crossing.
|
14,417 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
!head -n 30 open_exoplanet_catalogue.txt
data=np.genfromtxt('open_exoplanet_catalogue.txt',delimiter=",")
assert data.shape==(1993,24)
plt.hist(data[:,2],range(0,16));
plt.box(False)
plt.xlabel("$M sin i (M_JUP)$");
plt.ylabel('Number of Planets');
assert True # leave for grading
f=plt.figure(figsize=(9,6))
plt.scatter(data[:,5],data[:,6])
plt.ylim(0,1.0)
plt.box(False)
plt.grid(True)
plt.xscale('log');
plt.xlabel('Semimajor Axis (AU)');
plt.ylabel('Orbital Eccentricity');
assert True # leave for grading
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exoplanet properties
Step2: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data
Step3: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Step4: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
|
14,418 | <ASSISTANT_TASK:>
Python Code:
# disable ssl warnings
import urllib3
urllib3.disable_warnings()
keycloak_url = 'http://localhost:8080'
token_endpoint = '/auth/realms/demo/protocol/openid-connect/token'
client_id = 'demo'
client_secret = 'c083d72c-a262-40b1-ad51-326f6977d74b'
token_url = "{}{}".format(keycloak_url, token_endpoint)
token_url
import os
from oauthlib.oauth2 import BackendApplicationClient
from requests_oauthlib import OAuth2Session
os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = '1'
client = BackendApplicationClient(client_id=client_id)
oauth = OAuth2Session(client=client)
token = oauth.fetch_token(
token_url,
scope='compute',
client_id=client_id,
client_secret=client_secret,
include_client_id=True,
verify=False)
token
token['access_token']
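# Added note: client-credentials tokens expire; 'expires_in' (seconds) in the token
# response tells us when fetch_token() will have to be called again.
token.get('expires_in')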
base_url = 'http://localhost:8000'
url = "{}/ows/proxy/emu?service=WPS&version=1.0.0&request=Execute&identifier=chomsky".format(base_url)
url
import requests
headers = {'Authorization': 'Bearer {}'.format(token['access_token'])}
resp = requests.get(url, headers=headers, verify=False)
resp.ok
'ProcessSucceeded' in resp.text
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Keycloak client
Step2: Get OAuth access token from Keycloak
Step3: Execute WPS Process with access token
|
14,419 | <ASSISTANT_TASK:>
Python Code:
# Hit shift + enter or use the run button to run this cell and see the results
print 'hello world'
# The last line of every code cell will be displayed by default,
# even if you don't print it. Run this cell to see how this works.
print 2 + 2 # The result of this line will not be displayed
print 3 + 3 # The result of this line will be displayed, because it is the last line of the cell
# If you run this cell, you should see the values displayed as a table.
# Pandas is a software library for data manipulation and analysis. You'll learn to use it later in this course.
import pandas as pd
df = pd.DataFrame({'a': [2, 4, 6, 8], 'b': [1, 3, 5, 7]})
df
# If you run this cell, you should see a scatter plot of the function y = x^2
%pylab inline
import matplotlib.pyplot as plt
xs = range(-30, 31)
ys = [x ** 2 for x in xs]
plt.scatter(xs, ys)
class_name = "BRUCE Woodley Intro to Data Analysis"
message = class_name + " is awesome!"
message
import unicodecsv
with open("enrollments.csv","rb") as filein :
line = unicodecsv.DictReader(filein)
print("type(line) \t",type(line))
enrollments = list(line)
print enrollments[0]
import unicodecsv
with open("daily_engagement.csv","rb") as filein :
line = unicodecsv.DictReader(filein)
#print("type(line) \t",type(line))
daily_engagement = list(line)
print daily_engagement[0]
import unicodecsv
with open("project_submissions.csv","rb") as filein :
line = unicodecsv.DictReader(filein)
project_submissions_fieldnames = line.fieldnames
#print("type(line) \t",type(line))
print("project_submissions_fieldnames = ",str(project_submissions_fieldnames))
project_submissions = list(line)
print project_submissions[0]
# Fixing Data Types.
# Hit shift + enter or use the run button to run this cell and see the results
from datetime import datetime as dt
# Takes a date as a string, and returns a Python datetime object.
# If there is no date given, returns None
def parse_date(date):
if date == '':
return None
else:
return dt.strptime(date, '%Y-%m-%d')
# Takes a string which is either an empty string or represents an integer,
# and returns an int or None.
def parse_maybe_int(i):
if i == '':
return None
else:
return int(i)
print(" type(enrollment) " , type(enrollment))
# Clean up the data types in the enrollments table
for enrollment in enrollments:
enrollment['cancel_date'] = parse_date(enrollment['cancel_date'])
enrollment['days_to_cancel'] = parse_maybe_int(enrollment['days_to_cancel'])
enrollment['is_canceled'] = enrollment['is_canceled'] == 'True'
enrollment['is_udacity'] = enrollment['is_udacity'] == 'True'
enrollment['join_date'] = parse_date(enrollment['join_date'])
enrollments[0]
# enrollments
# daily_engagement
# project_submission
# these are all a "List of Dictionaries"
import sys
import os
import string
import time
#print(type(enrollments),len(enrollments) )
enrollments_set = set()
for line in enrollments :
enrollments_set.add(line['account_key'] )
print("enrollments",type(enrollments), " row total: ",len(enrollments), " total students: ", len(enrollments_set) )
#print(type(daily_engagement), len(daily_engagement) )
daily_engagement_set = set()
for line in daily_engagement :
daily_engagement_set.add(line['acct'] )
print("daily_engagement", type(daily_engagement)," row total: ",len(daily_engagement), " total students: ", len(daily_engagement_set) )
#print(type(project_submissions), len(project_submissions) )
project_submissions_set = set()
for line in project_submissions :
project_submissions_set.add(line['account_key'] )
print("project_submissions", type(project_submissions)," row total: ",len(project_submissions), " total students: ", len(project_submissions_set) )
print(" ")
print('REM: these are all a "List of Dictionaries"...!')
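# Added illustration: with the sets above we can also ask which students enrolled
# but never appear in the engagement table (a simple set difference).
missing_from_engagement = enrollments_set - daily_engagement_set
print("enrolled but no engagement records:", len(missing_from_engagement))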
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Nicely formatted results
Step2: Creating cells
Step3: Once you've run all three cells, try modifying the first one to set class_name to your name, rather than "Intro to Data Analysis", so you can print that you are awesome. Then rerun the first and third cells without rerunning the second.
Step4: Fixing Data Types.
|
14,420 | <ASSISTANT_TASK:>
Python Code:
import lasio
import datetime
import numpy
import os
import matplotlib.pyplot as plt
%matplotlib inline
depths = numpy.arange(10, 50, 0.5)
fake_curve = numpy.random.random(len(depths))
fake_curve[-10:] = numpy.nan # Add some null values at the bottom
plt.plot(depths, fake_curve)
l = lasio.LASFile()
l.header
l.well.DATE = str(datetime.datetime.today())
l.params['ENGI'] = lasio.HeaderItem("ENGI", "", "kinverarity@hotmail.com", "Creator of this file...")
l.other = "Example of how to create a LAS file from scratch using lasio"
l.add_curve('DEPT', depths, unit='m')
l.add_curve('FAKE_CURVE', fake_curve, descr='fake curve')
fn = "scratch_example_v2.las"
with open(fn, mode="w") as f: # Write LAS file to disk
l.write(f)
with open(fn, mode="r") as f: # Show the result...
print(f.read())
plt.plot(l['DEPT'], l['FAKE_CURVE'])
os.remove(fn)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Step 2
Step3: Let's add some information to the header
Step4: Next, let's make a new item in the ~Parameters section for the operator. To do this we need to make a new HeaderItem
Step5: And finally, add some free text to the ~Other section
Step6: Step 3
Step7: Step 4
Step8: and let's see if that worked
|
14,421 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
# set default figure size
from pylab import rcParams
rcParams['figure.figsize'] = 16, 8
import pandas as pd
import urllib2
def load_data(ip_addr):
data = pd.read_csv(urllib2.urlopen("http://%s:7645/data.csv" % (ip_addr)))
for col in ['Model', 'Scenario', 'Variable']:
data[col] = data[col].astype('category')
data['Date'] = data['Date'].astype('datetime64')
data['Temperature'] = data['Value'] - 273.15
return data
def do_graph(df):
model = df.loc[1,'Model']
df['Year'] = df['Date'].map(lambda d: "%d-01-01" % (d.year)).astype('datetime64')
by_year = df.groupby(['Year', 'Scenario']).max().loc[:,['Temperature']]
groups = by_year.reset_index().set_index('Year').groupby('Scenario')
for key, grp in groups:
plt.plot(grp.index, grp['Temperature'], label=key)
plt.legend(loc='best')
plt.title("Maximum mean temperature for warmest month using model %s" % (model))
plt.xlabel("Year")
plt.ylabel("Temperature [Celsius]")
plt.show()
# Note: make sure you pass load_data the correct IP address. This is only an example.
data = load_data("localhost")
data.head()
do_graph(data)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading Data into a Dataframe
Step2: Plotting the Scenarios
Step3: Putting it all Together
|
14,422 | <ASSISTANT_TASK:>
Python Code:
import torch
x = torch.Tensor(5, 3)
print(x)
len(x)
x.shape
y = torch.rand(5,3)
print(y)
print(x + y)
print(torch.add(x, y))
result = torch.Tensor(5, 3)
print(result)
torch.add(x, y, out=result)
print(result)
print('before y:', y)
y.add_(x)
print('after y:', y)
x.t_()
# numpy 스럽게 사용 가능
print(x[:, 1])
print(x[:,:])
a = torch.ones(5)
print(a)
b = a.numpy()
print(b)
a.add_(1)
print(a)
print(b)
# a와 b가 연결되어 있음
print(b)
a.add_(2)
print(b)
id(a)
id(b)
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
%%time
if torch.cuda.is_available():
x = x.cuda()
y = y.cuda()
x + y
torch.cuda.is_available()
torch.cuda.current_device()
torch.cuda.device_count()
import torch
from torch.autograd import Variable
x = Variable(torch.ones(2, 2), requires_grad=True)
print(x)
y = x + 2
print(y)
print(y.grad_fn)
# y는 연산의 결과라서 grad_fn이 잇음
z = y * y * 3
out = z.mean()
print(z, out)
print(x.grad)
out.backward()
print(x.grad)
x = torch.randn(3)
x = Variable(x, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
y = y * 2
print(y)
gradients = torch.FloatTensor([0.1, 1.0, 0.0001])
y.backward(gradients)
print(x.grad)
dtype = torch.FloatTensor
N, D_in, H, D_out = 64, 1000, 100, 10
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 6, 5)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) # 2x2 windown max pooling
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:]
num_features = 1
for s in size:
num_features *= s
return num_features
net = Net()
print(net)
params = list(net.parameters())
print(len(params))
print(params[0].size())
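# Added illustration: total number of learnable parameters across all layers
print(sum(p.numel() for p in net.parameters()))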
input = Variable(torch.randn(1, 1, 32, 32))
out = net(input)
print(out)
net.zero_grad()
out.backward(torch.randn(1, 10))
output = net(input)
target = Variable(torch.arange(1, 11)) # a dummy target, for example
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)
print(loss.grad_fn) # MSELoss
print(loss.grad_fn.next_functions[0][0]) # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0]) # ReLU
net.zero_grad() # zeroes the gradient buffers of all parameters
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
learning_rate = 0.01
for f in net.parameters():
f.data.sub_(f.grad.data * learning_rate)
import torch.optim as optim
# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)
# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step() # Does the update
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
# wrap them in Variable
        # (use .cuda() on the inputs only if the network itself has been moved to the GPU with net.cuda())
        inputs, labels = Variable(inputs), Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.data[0]
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
dataiter = iter(testloader)
images, labels = dataiter.next()
# print images
# imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
correct = 0
total = 0
for data in testloader:
images, labels = data
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
for data in testloader:
images, labels = data
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i]
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
classes[i], 100 * class_correct[i] / class_total[i]))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: x.copy_(y), x.t_()는 x가 변경되는 연산
Step2: 기타 연산 자료
Step3: CharTensor를 제외하고 CPU상의 모든 텐서는 numpy로 변환하는 것을 지원
Step4: tensor들은 .cuda function을 사용해 gpu로 옮길 수 있음
Step5: Autograd
Step6: Autograd / Function 문서
Step7: Define the neural network that has some learnable parameters (or weights)
|
14,423 | <ASSISTANT_TASK:>
Python Code:
import json
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
def requests_retry_session(
retries=3,
backoff_factor=0.3,
status_forcelist=(500, 502, 504),
session=None,
):
session = session or requests.Session()
retry = Retry(
total=retries,
read=retries,
connect=retries,
backoff_factor=backoff_factor,
status_forcelist=status_forcelist,
)
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)
return session
api_url = "https://modelmywatershed.org/api/"
def get_job_result(api_url, s, jobrequest):
url_tmplt = api_url + "jobs/{job}/"
get_url = url_tmplt.format
result = ''
while not result:
get_req = requests_retry_session(session=s).get(get_url(job=jobrequest['job']))
result = json.loads(get_req.content)['result']
return result
s = requests.Session()
APIToken = '<YOUR API TOKEN STRING>' # ENTER YOUR OWN API TOKEN
s.headers.update({
'Authorization': APIToken,
'Content-Type': 'application/json'
})
from shapely.geometry import box, MultiPolygon
width = 0.0004 # Looks like using a width smaller than 0.0002 causes a problem with the API?
# GOOS: (-88.5552, 40.4374) elev 240.93. Agriculture Site—Goose Creek (Corn field) Site (GOOS) at IML CZO
# SJER: (-119.7314, 37.1088) elev 403.86. San Joaquin Experimental Reserve Site (SJER) at South Sierra CZO
lon, lat = -119.7314, 37.1088
bbox = box(lon-0.5*width, lat-0.5*width, lon+0.5*width, lat+0.5*width)
payload = MultiPolygon([bbox]).__geo_interface__
json_payload = json.dumps(payload)
payload
# convenience function, to simplify the request calls, below
def analyze_api_request(api_name, s, api_url, json_payload):
post_url = "{}analyze/{}/".format(api_url, api_name)
post_req = requests_retry_session(session=s).post(post_url, data=json_payload)
jobrequest_json = json.loads(post_req.content)
# Fetch and examine job result
result = get_job_result(api_url, s, jobrequest_json)
return result
result = analyze_api_request('land/2011_2011', s, api_url, json_payload)
type(result), result.keys()
result['survey'].keys()
categories = result['survey']['categories']
len(categories), categories[1]
land_categories_nonzero = [d for d in categories if d['coverage'] > 0]
land_categories_nonzero
result = analyze_api_request('terrain', s, api_url, json_payload)
categories = result['survey']['categories']
len(categories), categories
[d for d in categories if d['type'] == 'average']
result = analyze_api_request('climate', s, api_url, json_payload)
categories = result['survey']['categories']
len(categories), categories[:2]
ppt = [d['ppt'] for d in categories]
tmean = [d['tmean'] for d in categories]
# ppt is in cm, right?
sum(ppt)
import calendar
import numpy as np
calendar.mdays
# Annual tmean needs to be weighted by the number of days per month
sum(np.asarray(tmean) * np.asarray(calendar.mdays[1:]))/365
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: MMW production API endpoint base url.
Step2: The job is not completed instantly and the results are not returned directly by the API request that initiated the job. The user must first issue an API request to confirm that the job is complete, then fetch the results. The demo presented here performs automated retries (checks) until the server confirms the job is completed, then requests the JSON results and converts (deserializes) them into a Python dictionary.
Step3: 2. Construct AOI GeoJSON for job request
Step4: 3. Issue job requests, fetch job results when done, then examine results. Repeat for each request type
Step5: Issue job request
Step6: Everything below is just exploration of the results. Examine the content of the results (as JSON, and Python dictionaries)
Step7: result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items
Step8: Issue job request
Step9: result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items
Step10: Issue job request
Step11: result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items
|
14,424 | <ASSISTANT_TASK:>
Python Code:
from dionysus import Simplex, Filtration, StaticPersistence, \
vertex_cmp, data_cmp, data_dim_cmp, \
DynamicPersistenceChains
from math import sqrt
scx = [Simplex((2,), 0), # C
Simplex((0,), 1), # A
Simplex((1,), 1), # B
Simplex((0,1), 2), # AB
Simplex((1,2), 3), # BC
Simplex((0,2), 3), # AC
Simplex((0,1,2), 4), # ABC
]
f = Filtration(scx, data_cmp)
p = DynamicPersistenceChains(f)
p.pair_simplices()
smap = p.make_simplex_map(f)
print "{:>10}{:>10}{:>10}{:>10}".format("First", "Second", "Birth", "Death")
for i in (i for i in p if i.sign()):
b = smap[i]
if i.unpaired():
print "{:>10}{:>10}{:>10}{:>10}".format(b, '', b.data, "inf")
else:
d = smap[i.pair()]
print "{:>10}{:>10}{:>10}{:>10}".format(b, d, b.data, d.data)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will compute persistent homology of a 2-simplex (triangle) ABC. The filtration is as follows
Step2: Now the persistent homology is computed.
Step3: Now output the computed persistence diagram. For each critical cell that appears in the filtration the time of Birth and Death is given as well as the cell that kills it (its pair). The features that persist forever have Death value set to inf.
|
14,425 | <ASSISTANT_TASK:>
Python Code:
%pylab inline
import subprocess
import matplotlib.pyplot as plt
import random
import numpy as np
plt.style.use('ggplot')
figsize(10,5)
file = "./bwa/input2.sorted.bam"
p = subprocess.Popen(["samtools", "view", "-q10", "-F260", file],
stdout=subprocess.PIPE)
coords = []
for line in p.stdout:
flag, ref, start = line.decode('utf-8').split()[1:4]
coords.append([flag, ref, start])
coords[:3]
len(coords)
random.seed(1234)
sample = random.sample(coords, 1000000)
len(sample)
uniqueStarts = {'watson': set(), 'crick': set()}
for coord in sample:
flag, ref, start = coord
if int(flag) & 16:
uniqueStarts['crick'].add((ref, start))
else:
uniqueStarts['watson'].add((ref, start))
len(uniqueStarts['watson'])
len(uniqueStarts['crick'])
NRF_input = (len(uniqueStarts['watson']) + len(uniqueStarts['crick']))*1.0/len(sample)
print(NRF_input)
def calculateNRF(filePath, pickSample=True, sampleSize=10000000, seed=1234):
p = subprocess.Popen(['samtools', 'view', '-q10', '-F260', filePath],
stdout=subprocess.PIPE)
coordType = np.dtype({'names': ['flag', 'ref', 'start'],
'formats': ['uint16', 'U10', 'uint32']})
coordArray = np.empty(10000000, dtype=coordType)
i = 0
for line in p.stdout:
if i >= len(coordArray):
coordArray = np.append(coordArray, np.empty(1000000, dtype=coordType), axis=0)
fg, rf, st = line.decode('utf-8').split()[1:4]
coordArray[i] = np.array((fg, rf, st), dtype=coordType)
i += 1
coordArray = coordArray[:i]
sample = coordArray
if pickSample and len(coordArray) > sampleSize:
np.random.seed(seed)
sample = np.random.choice(coordArray, sampleSize, replace=False)
uniqueStarts = {'watson': set(), 'crick': set()}
for read in sample:
flag, ref, start = read
if flag & 16:
uniqueStarts['crick'].add((ref, start))
else:
uniqueStarts['watson'].add((ref, start))
NRF = (len(uniqueStarts['watson']) + len(uniqueStarts['crick']))*1.0/len(sample)
return NRF
NRF_chip = calculateNRF("./bwa/sox2_chip.sorted.bam", sampleSize=1000000)
print(NRF_chip)
plt.bar([0,2],[NRF_input, NRF_chip], width=1)
plt.xlim([-0.5,3.5]), plt.xticks([0.5, 2.5], ['Input', 'ChIP'])
plt.xlabel('Sample')
plt.ylabel('NRF')
plt.ylim([0, 1.25]), plt.yticks(np.arange(0, 1.2, 0.2))
plt.plot((-0.5,3.5), (0.8,0.8), 'red', linestyle='dashed')
plt.show()
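# Added note: the dashed line marks NRF = 0.8, the commonly used ENCODE guideline
# for an acceptable non-redundant fraction.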
countList = []
with open('./bedtools/input_coverage.bed', 'r') as covFile:
for line in covFile:
countList.append(int(line.strip('\n').split('\t')[3]))
countList[0:6]
countList[-15:]
plt.plot(range(len(countList)), countList)
plt.xlabel('Bin number')
plt.ylabel('Bin coverage')
plt.xlim([0, len(countList)])
plt.show()
countList.sort()
countList[0:6]
countSum = sum(countList)
countSum
countFraction = []
for i, count in enumerate(countList):
if i == 0:
countFraction.append(count*1.0 / countSum)
else:
countFraction.append((count*1.0 / countSum) + countFraction[i-1])
countFraction[-5:]
winNumber = len(countFraction)
winNumber
winFraction = []
for i in range(winNumber):
winFraction.append(i*1.0 / winNumber)
winFraction[-5:]
def calculateSES(filePath):
countList = []
with open(filePath, 'r') as covFile:
for line in covFile:
countList.append(int(line.strip('\n').split('\t')[3]))
plt.plot(range(len(countList)), countList)
plt.xlabel('Bin number')
plt.ylabel('Bin coverage')
plt.xlim([0, len(countList)])
plt.show()
countList.sort()
countSum = sum(countList)
countFraction = []
for i, count in enumerate(countList):
if i == 0:
countFraction.append(count*1.0 / countSum)
else:
countFraction.append((count*1.0 / countSum) + countFraction[i-1])
winNumber = len(countFraction)
winFraction = []
for i in range(winNumber):
winFraction.append(i*1.0 / winNumber)
return [winFraction, countFraction]
chipSes = calculateSES("./bedtools/sox2_chip_coverage.bed")
plt.plot(winFraction, countFraction, label='input')
plt.plot(chipSes[0], chipSes[1], label='Sox2 ChIP')
plt.ylim([0,1])
plt.xlabel('Ordered window franction')
plt.ylabel('Genome coverage fraction')
plt.legend(loc='best')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Calculate the Nonredundant Read Fraction (NRF)
Step2: Make figures prettier and biger
Step3: Parse the SAM file and extract the unique start coordinates.
Step4: Next we read the file using samtools. From each read we need to store the flag, chromosome name and start coordinate.
Step5: What is the total number of our unique reads?
Step6: Randomly sample the coordinates to get 1M for NRF calculations
Step7: How many of those coordinates are unique? (We will use the set python object which only the unique items.)
Step8: How many on the Watson strand?
Step9: And on the Crick?
Step10: Calculate the NRF
Step11: Lets create a function from what we did above and apply it to all of our files!
Step12: Calculate the NRF for the chip-seq sample
Step13: Plot the NRF!
Step14: Calculate the Signal Extraction Scaling
Step15: Lets see where do our reads align to the genome. Plot the distribution of tags along the genome.
Step16: Now sort the list- order the windows based on the tag count
Step17: Sum all the aligned tags
Step18: Calculate the summaric fraction of tags along the ordered windows.
Step19: Look at the last five items of the list
Step20: Calculate the number of windows.
Step21: Calculate what fraction of a whole is the position of each window.
Step22: Look at the last five items of our new list
Step23: Now prepare the function!
Step24: Use our function to calculate the signal extraction scaling for the Sox2 ChIP sample
Step25: Now we can plot the calculated fractions for both the input and ChIP sample
|
14,426 | <ASSISTANT_TASK:>
Python Code:
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
DATA_DIR = './traffic-signs-data/'
training_file = DATA_DIR + 'train.p'
validation_file= DATA_DIR + 'valid.p'
testing_file = DATA_DIR + 'test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
originals = X_train
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
# TODO: Number of training examples
n_train = len(X_train)
# TODO: Number of validation examples
n_validation = len(X_valid)
# TODO: Number of testing examples.
n_test = len(X_test)
# TODO: What's the shape of an traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels there are in the dataset.
n_classes = np.unique(y_train).shape[0]
print("Size of training set =", n_train)
print("Size of testing set =", n_test)
print("Size of validation set =", n_validation)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import random
import pandas as pd
sign_names = pd.read_csv('signnames.csv')
### plot random image and corresponding class
index = random.randint(0, n_train)
image = X_train[index]
label = y_train[index]
names = list(sign_names.iloc[:,1])
names = [n.strip() for n in names]
print (label, names[label])
plt.imshow(image)
plt.show()
#print ("Normalized image")
#norm = normalize_gs(image)
#plt.gray()
#plt.imshow(norm.reshape((32,32)))
#plt.show()
unique, counts = np.unique(y_train, return_counts = True)
print ('Chart shows label counts')
#print (sign_names.iloc[:,1])
fig = plt.figure(figsize=(15,10))
#print (unique, len(names))
plt.bar(unique, counts)
plt.xticks(unique, names, rotation = 90)
plt.show()
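# Added illustration: which sign classes are most under-represented in the training set?
order = np.argsort(counts)
for k in order[:3]:
    print(counts[k], names[unique[k]])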
# Visualizations will be shown in the notebook.
%matplotlib inline
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
import cv2
from sklearn.utils import shuffle
def normalize_gs(mat):
'''perform RGB -> grayscale
perform histogram normalization to improve contrast'''
mat = cv2.cvtColor(mat, cv2.COLOR_RGB2GRAY)
mat = cv2.equalizeHist(mat)
# needed to keep tensorflow sane, it wants 32x32x1 images!
mat = np.expand_dims(mat, axis=2)
return mat
### naive value normalize --> doesnt work very well
def normalize(img):
return (img - 128) / 128
## normalize all images in the training / test set
## will not convert to greyscale as I believe that color information is
## a very important and distinct feature for traffic signs
X_train = [normalize_gs(x) for x in X_train]
X_test = [normalize_gs(x) for x in X_test]
X_valid = [normalize_gs(x) for x in X_valid]
### plot random image and corresponding class
index = random.randint(0, n_train)
image = X_train[index]
label = y_train[index]
print (label, names[label])
ax = plt.subplot(121)
plt.imshow(originals[index])
ax = plt.subplot(122)
plt.gray()
plt.imshow(np.array(image).reshape((32,32)))
plt.show()
### Tensorflow import and setup
import tensorflow as tf
EPOCHS = 512
BATCH_SIZE = 128
MODEL_FILE = './traffic_sign'
DROPOUT_PROB = 0.80
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
    # SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 17x17x32.
    # changed conv kernel size to 16x16, depth of 32
conv1_W = tf.Variable(tf.truncated_normal(shape=(16, 16, 1, 32), mean = mu, stddev = sigma))
#conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(32))
# changed padding to same to cover a larger area
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
    # SOLUTION: Pooling. Input = 17x17x32. Output = 8x8x32.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    # SOLUTION: Layer 2: Convolutional. Output = 4x4x64.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 32, 64), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(64))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
    # SOLUTION: Pooling. Input = 4x4x64. Output = 2x2x64.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    # SOLUTION: Flatten. Input = 2x2x64. Output = 256.
fc0 = flatten(conv2)
    # SOLUTION: Layer 3: Fully Connected. Input = 256. Output = 128.
    # modified: input shape 256 to match the flattened 2x2x64 feature map
fc1_W = tf.Variable(tf.truncated_normal(shape=(256, 128), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(128))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
# add a dropout layer
fc1 = tf.nn.dropout(fc1, dropout_prob)
    # SOLUTION: Layer 4: Fully Connected. Input = 128. Output = 100.
fc2_W = tf.Variable(tf.truncated_normal(shape=(128, 100), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(100))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
# add a dropout layer
fc2 = tf.nn.dropout(fc2, dropout_prob)
    # SOLUTION: Layer 5: Fully Connected. Input = 100. Output = 43.
## MODIFIED: output dim now 43 (n_classes)
fc3_W = tf.Variable(tf.truncated_normal(shape=(100, n_classes), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(n_classes))
logits = tf.matmul(fc2, fc3_W) + fc3_b
#print (logits.summary())
return logits
#x = tf.placeholder(tf.float32, (None, 32, 32, 3))
x = tf.placeholder(tf.float32, (None, 32, 32,1))
y = tf.placeholder(tf.int32, (None))
dropout_prob = tf.placeholder(tf.float32)
one_hot_y = tf.one_hot(y, n_classes)
rate = 0.0001 # default is 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
## Model Evaluation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, dropout_prob: 1.0})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
### function to get logits
def predict(image_data):
sess = tf.get_default_session()
predictions = sess.run(logits, feed_dict={x:image_data, dropout_prob: 1.0})
### NOTE: Added SoftMax function here as per the assignment requirements!
### top_k now represents the top 5 softmax probabilities
softmax_predictions = sess.run(tf.nn.softmax(predictions))
    top_k = sess.run(tf.nn.top_k(softmax_predictions, k=5))
### also added softmax function to predictions
    return softmax_predictions, top_k  # sess.run already returned a NumPy array, so no .eval() needed
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, dropout_prob: DROPOUT_PROB})
validation_accuracy = evaluate(X_valid, y_valid )
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
if validation_accuracy > 0.943:
print ("High Accuracy Reached. Breaking off!")
break
saver.save(sess, MODEL_FILE)
print("Model saved")
saver = tf.train.Saver()
MODEL_FILE = './traffic_sign'
# Create Graph
print ("Evaluating Model on Testing Set")
with tf.Session() as sess:
saver.restore(sess, MODEL_FILE)
validation_accuracy = evaluate(X_test, y_test)
#print("EPOCH {} ...".format(i+1))
print("Validation Accuracy (test) = {:.3f}".format(validation_accuracy))
validation_accuracy = evaluate(X_valid, y_valid)
#print("EPOCH {} ...".format(i+1))
print("Validation Accuracy (validation) = {:.3f}".format(validation_accuracy))
validation_accuracy = evaluate(X_train, y_train)
#print("EPOCH {} ...".format(i+1))
print("Validation Accuracy (training) = {:.3f}".format(validation_accuracy))
saver = tf.train.Saver()
MODEL_FILE = './traffic_sign'
# Create Graph
print ("Evaluating Model on Testing Set")
with tf.Session() as sess:
saver.restore(sess, MODEL_FILE)
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import os
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
IMAGES_DIR = "./web_images"
image_files = os.listdir(IMAGES_DIR)
#print (image_files)
images = [mpimg.imread(os.path.join(IMAGES_DIR,f)) for f in image_files]
## preprocess
images_preprocessed = [normalize_gs(img) for img in images]
for i in images_preprocessed:
print (i.shape)
for idx,i in enumerate(images):
print (image_files[idx])
f = plt.subplot(121)
plt.imshow(i)
f = plt.subplot(122)
plt.gray()
plt.imshow(images_preprocessed[idx].reshape((32,32)))
plt.show()
## evaluate preprocessed images on the model
with tf.Session() as sess:
saver.restore(sess, MODEL_FILE)
predictions, top_k = predict(images_preprocessed)
#print (list(predictions))
    for i,p in enumerate(predictions):
amax = np.argmax(p)
print ("File name", image_files[i], "Logit argmax", amax, "equivalent to class", names[amax] )
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
N = 5.0
correct = 4
acc = correct / N
print("Accuracy is", acc)
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
print(top_k)
print(top_k[1][0])
width = 1.5
bar_x = range(5)
plt.figure(2, figsize=(12,5))
ax = plt.subplot(151)
plt.bar(bar_x, top_k[0][0])
plt.xticks(bar_x, [names[k] for k in top_k[1][0]], rotation = 90)
ax.set_title('Limit30')
ax = plt.subplot(152)
plt.bar(bar_x, top_k[0][1])
plt.xticks(bar_x, [names[k] for k in top_k[1][1]], rotation = 90)
ax.set_title('Yield')
ax = plt.subplot(153)
plt.bar(bar_x, top_k[0][2])
plt.xticks(bar_x, [names[k] for k in top_k[1][2]], rotation = 90)
ax.set_title('Lights')
ax = plt.subplot(154)
plt.bar(bar_x, top_k[0][3])
plt.xticks(bar_x, [names[k] for k in top_k[1][3]], rotation = 90)
ax.set_title('STOP')
ax = plt.subplot(155)
plt.bar(bar_x, top_k[0][4])
plt.xticks(bar_x, [names[k] for k in top_k[1][4]], rotation = 90)
ax.set_title('Roundabout')
plt.show()
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 and activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
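### Illustrative usage sketch (not part of the original notebook). `conv1` is a hypothetical
### activation tensor kept from model construction; substitute whichever tf_activation variable
### your network actually exposes.
# with tf.Session() as sess:
#     saver.restore(sess, MODEL_FILE)
#     outputFeatureMap(images_preprocessed[0].reshape(1, 32, 32, 1), conv1, plt_num=1)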
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Include an exploratory visualization of the dataset
Step3: Step 2
Step4: Model Architecture
Step5: Features and Labels
Step6: Training Pipeline
Step7: Train, Validate and Test the Model
Step8: Restore Model and Calculate Performance with the Test Set, to check for over or underfitting
Step9: Step 3
Step10: Predict the Sign Type for Each Image
Step11: Analyze Performance
Step12: Output Top 5 Softmax Probabilities For Each Image Found on the Web
Step13: Project Writeup
|
14,427 | <ASSISTANT_TASK:>
Python Code:
# Plots will be show inside the notebook
%matplotlib notebook
import matplotlib.pyplot as plt
# NumPy is a package for manipulating N-dimensional array objects
import numpy as np
# Pandas is a data analysis package
import pandas as pd
import problem_unittests as tests
# Load data and print the first n = 5 rows
# URL: http://www-bcf.usc.edu/~gareth/ISL/Income1.csv
DATA_URL = './resources/Income1.csv'
data = pd.read_csv(DATA_URL, index_col=0)
print(data.head(n=5))
# Put the second (education index) and third (income index) row in a NumPy array
X_data = data['Education'].values
y_data = data['Income'].values
plt.figure()
plt.scatter(X_data, y_data, label='Training data')
plt.title('Education vs. Income')
plt.xlabel('Education index')
plt.ylabel('Income index')
plt.grid(linestyle='dotted')
plt.legend()
plt.show()
def build_X(x_data):
    """Return design matrix given an array of N samples with d dimensions."""
# Create matrix Ax1 if d = 1
if x_data.ndim == 1:
x_data = np.expand_dims(x_data, axis=1)
# Find the number of samples and dimensions
nb_samples = x_data.shape[0]
nb_dimensions = x_data.shape[1]
# Create Nxd+1 matrix filled with ones
_X = np.ones((nb_samples, nb_dimensions + 1))
# Paste in the data we have in the new matrix
_X[:nb_samples, 1:nb_dimensions + 1] = x_data
return _X
# Test and see that the design matrix was built correctly
tests.test_build_x(build_X)
def build_y(y_data):
    """Return a column vector containing the target values y."""
# Make a copy of the argument that we can work on
_y = y_data.copy()
# Create y matrix Nx1
# Return result
return _y
### Do *not* modify the following line ###
# Test and see that the y vector was built correctly
tests.test_build_y(build_y)
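# Illustrative sketch (not the official exercise solution): one way to complete build_y above is
# to reshape the 1-D target array into an N x 1 column vector, e.g.
# _y = y_data.copy().reshape(-1, 1)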
def compute_weights(X, y):
    """Return a vector of weights found by the derived closed-form solution."""
weights = None
# Implement closed-form solution here
return weights
### Do *not* modify the following line ###
# Test and see that the weights are calculated correctly
tests.test_compute_theta(compute_weights)
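# Illustrative sketch only (not the notebook's official solution): the closed-form ordinary
# least squares estimate is W = (X^T X)^(-1) X^T y, usually computed without an explicit inverse.
def example_normal_equation_weights(X, y):
    """Possible closed-form (normal equation) implementation, shown for reference only."""
    return np.linalg.solve(X.T.dot(X), X.T.dot(y))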
# Build design matrix (TASK)
X = None
# Build y vector (TASK)
y = None
# Learn linear model (TASK)
W = None
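# Illustrative sketch (not part of the original notebook): the three TASK lines above would
# typically be completed by chaining the helpers defined earlier, e.g.
# X = build_X(X_data)
# y = build_y(y_data)
# W = compute_weights(X, y)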
# Print weights
print('The learned linear model looks like this:')
print('Y = {:.3f} x + {:.3f}'.format(W[1, 0], W[0, 0]))
# Plot hyperplane and training data
xs = np.linspace(X_data.min(), X_data.max(), num=50)
ys = np.dot(build_X(xs), W)
plt.figure()
plt.scatter(X_data, y_data, label='Training data')
plt.plot(xs, ys, color='Red', linewidth=1, label='Fit')
plt.title('Education vs. Income')
plt.xlabel('Education index')
plt.ylabel('Income index')
plt.grid(linestyle='dotted')
plt.legend()
plt.show()
import time
# A library for easily displaying progress meters
import tqdm
# Contains all built-in optimisation tools in Keras, such as stochastic gradient descent
from keras import optimizers
# An input "layer" and a densely-connected neural network layer
from keras.layers import Input, Dense
# Model is an API that wraps our linear regression model
from keras.models import Model
# There is only a *single* feature
input_X = Input(shape=(1,))
# The output of the model is a single value
output_y = Dense(units=1, use_bias=True)(input_X)
# We give the input and output to our Model API
model = Model(inputs=input_X, outputs=output_y)
# Print a summary of the model
model.summary()
#
# Start by setting some user options
#
# Learning rate (set very small so we can clearly see the training progress)
lr = 0.0001
# Number of times to apply the update rule
nb_iterations = 100
# Number of samples to include each iteration (used to compute gradients)
nb_samples = 30
# Create optimiser using Keras
sgd = optimizers.SGD(lr=lr)
# Add the optimiser to our model, make it optimise mean squared error
model.compile(optimizer=sgd, loss='mean_squared_error')
fig, ax = plt.subplots(1,1)
# Perform `nb_iterations` update rule applications
for i in tqdm.tqdm(np.arange(nb_iterations)):
# Learn by calling the `fit` method
model.fit(X_data, y_data,
batch_size=nb_samples,
epochs=1,
verbose=0)
# Make a plot of the data and the current fit
xs = np.linspace(X_data.min(), X_data.max(), num=50)
ys = model.predict(xs)
ax.clear()
ax.scatter(X_data, y_data, label='Training data')
ax.plot(xs, ys, color='Red', linewidth=1, label='Fit')
ax.set_xlabel('Education index')
ax.set_ylabel('Income index')
ax.grid(linestyle='dotted')
ax.legend()
fig.canvas.draw()
time.sleep(0.05)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: With Pandas we can load the aforementioned CSV data.
Step2: With the data loaded we can plot it as a scatter plot using matplotlib.
Step4: Modelling
Step6: Task I
Step8: Task II
Step9: Task III
Step10: <div class="alert alert-info">
Step11: Critical Analysis
Step12: The input to our model is a single scalar value (Education). The output is also a single scalar value (Income).
Step13: Notice in the print above how the fully-connected layer Dense() has two trainable parameters. One is the weight (slope), while the second is the bias (intercept). Keras adds bias units by default, but it can be turned off by setting use_bias=False.
Step14: Now that both the model definition and the optimiser is set up we can start training. Training using the Keras model API is done by calling the fit() method.
|
14,428 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-3', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
14,429 | <ASSISTANT_TASK:>
Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
def download(url, file):
    """
    Download file from <url>
    :param url: URL to file
    :param file: Local file path
    """
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
    """
    Uncompress features and labels from a zip file
    :param file: The zip file to extract the data from
    """
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
                # Get the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
# Setting the equation parameters
image_data = image_data.astype(np.float32)
a = 0.1
b = 0.9
x_min = np.min(image_data)
x_max = np.max(image_data)
for idx in range(len(image_data)):
x = image_data[idx]
image_data[idx] = a + ((x - x_min) * (b - a) / (x_max - x_min))
return image_data
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32, [None, features_count])
labels = tf.placeholder(tf.float32, [None, labels_count])
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal([features_count, labels_count]))
biases = tf.Variable(tf.zeros([labels_count]))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 30
learning_rate = 0.05
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height
Step6: Checkpoint
Step7: Problem 2
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height
Step9: Test
|
14,430 | <ASSISTANT_TASK:>
Python Code:
2 + 3
2*3
2**3
sin(pi)
from math import sin, pi
sin(pi)
a = 10
a
c =
from pruebas_1 import prueba_1_1
prueba_1_1(_, c)
A = [2, 4, 8, 10]
A
A*2
f = lambda x: x**2 + 1
f(2)
def g(x):
y = x**2 + 1
return y
g(2)
def cel_a_faren(grados_cel):
grados_faren = # Escribe el codigo para hacer el calculo aqui
return grados_faren
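# Illustrative sketch, not part of the original exercise: one possible completion,
# assuming the standard conversion formula F = C * 9/5 + 32. The helper name below
# is ours, so the graded cel_a_faren above is left untouched.
def cel_a_faren_ejemplo(grados_cel):
    grados_faren = grados_cel * 9.0 / 5.0 + 32
    return grados_faren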
cel_a_faren(10)
cel_a_faren(50)
from pruebas_1 import prueba_1_2
prueba_1_2(cel_a_faren)
for dato in A:
print dato*2
B = []
for dato in A:
B.append(dato*2)
B
C = [] # Escribe el codigo para declarar el primer arreglo adentro de los corchetes
C
D = []
# Escribe el codigo de tu ciclo for aqui
D
from pruebas_1 import prueba_1_3
prueba_1_3(C, D)
f = lambda x: x**3 + 2*x**2 + 10*x - 20
f(1.0)
f(2.0)
x_1, x_2 = 1.0, 2.0
xm1 = (x_1 + x_2)/2.0
f(xm1)
x_1, x_2 = x_1, xm1
xm2 = (x_1 + x_2)/2.0
f(xm2)
def biseccion(x1, x2):
return (x1 + x2)/2.0
x_1, x_2 = x_1, xm1
xm2 = biseccion(x_1, x_2)
f(xm2)
x_1, x_2 = 1.0, 2.0
xm1 = biseccion(x_1, x_2)
f(xm1)
if x_2*xm1 > 0:
x_2 = xm1
else:
x_1 = xm1
xm2 = biseccion(x_1, x_2)
f(xm2)
if x_2*xm2 > 0:
x_2 = xm2
else:
x_1 = xm2
xm3 = biseccion(x_1, x_2)
f(xm3)
from math import log
n = (log(1) - log(0.001))/(log(2))
n
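# The line above estimates how many bisection steps are needed: halving an interval
# of width (b - a) until it is below a tolerance eps takes n >= log2((b - a)/eps)
# steps, and for [1, 2] with eps = 0.001 that is log2(1000) ~= 9.97, i.e. about 10.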
def metodo_biseccion(funcion, x1, x2, n):
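    # Halve the bracket n times: when f(x2) and f(midpoint) share a sign, move x2
    # to the midpoint, otherwise move x1, so the sign change stays bracketed.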
xs = []
for i in range(n):
xs.append(biseccion(x1, x2))
if funcion(x2)*funcion(xs[-1]) > 0:
x2 = xs[-1]
else:
x1 = xs[-1]
return xs[-1]
metodo_biseccion(f, 1.0, 2.0, 10)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: However, there are no trigonometric functions loaded by default. For this we have to import them from the math library
Step2: Variables
Step3: Exercise
Step4: Run the test below to find out whether you wrote the correct code
Step5: Lists
Step6: But if we try to multiply this data by a number, it will not behave the way we expect.
Step7: Functions
Step8: This line of code is equivalent to defining a mathematical function in the following way
Step9: The notation we just introduced is very useful for mathematical functions, but it forces us to think about definitions in a functional way, which is not always the right solution (above all in a language with an object-oriented programming paradigm).
Step10: With the same results
Step11: Exercise
Step12: And to test it, try converting some data
Step13: Control loops
Step14: or append it to a new list
Step15: and many more things besides, but for now it is time to start with the practice.
Step16: Run the tests below
Step17: Bisection method
Step18: Once we have two points that we know bracket the interval where a root lies, we can start iterating to find the midpoint.
Step19: And from here we can see that the result we got is positive, which means the root has to lie between $x_1$ and $x_M$. So for our next iteration we will use the new interval $x_1 = 1$ and $x_2 = 2.875$, that is, we now assign the value of $x_M$ to $x_2$.
Step20: And we could keep doing this until we reach the accuracy we want, but that would not be a very smart way to do it (we have a machine that likes doing repetitive tasks and we are not taking advantage of it?).
Step21: If we run the code we had before again, substituting in this function, we get exactly the same result
Step22: And now what we have to do is add a condition so that $x_M$ is swapped with $x_1$ or $x_2$ depending on the sign.
Step23: Yes, I know it looks odd, but if you go through it calmly you will see that it works.
Step24: That is, $n = 10$.
|
14,431 | <ASSISTANT_TASK:>
Python Code::
from sklearn.svm import SVC
from sklearn.metrics import classification_report
import pandas as pd
# create a linear SVC model with balanced class weights
model = SVC(C=1, kernel='linear', class_weight='balanced')
# fit model
model.fit(X_train, y_train)
# make predictions on test data
y_pred = model.predict(X_test)
# create a dataframe of feature coefficients
coef = pd.DataFrame(model.coef_,columns=X_train.columns)
print(coef)
# print classification report
print(classification_report(y_test, y_pred))
<END_TASK>
| <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
14,432 | <ASSISTANT_TASK:>
Python Code:
!pip install --upgrade pip
!pip install -q -U tfx
import os
import tempfile
import urllib
import pandas as pd
import tensorflow_model_analysis as tfma
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import ml_metadata as mlmd
print('MLMD version: {}'.format(mlmd.__version__))
from ml_metadata.proto import metadata_store_pb2
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
# TODO
# Join various path components
_data_filepath = os.path.join(_data_root, "penguins_processed.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
# TODO
interactive_context = InteractiveContext()
# TODO
example_gen = tfx.components.CsvExampleGen(input_base=_data_root)
interactive_context.run(example_gen)
# TODO
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
interactive_context.run(statistics_gen)
# TODO
infer_schema = tfx.components.SchemaGen(
statistics=statistics_gen.outputs['statistics'], infer_feature_shape=True)
interactive_context.run(infer_schema)
# Define the module file for the Trainer component
trainer_module_file = 'penguin_trainer.py'
%%writefile {trainer_module_file}
# Define the training algorithm for the Trainer module file
import os
from typing import List, Text
import tensorflow as tf
from tensorflow import keras
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_metadata.proto.v0 import schema_pb2
# Features used for classification - culmen length and depth, flipper length,
# body mass, and species.
_LABEL_KEY = 'species'
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
def _input_fn(file_pattern: List[Text],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema, batch_size: int) -> tf.data.Dataset:
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY), schema).repeat()
def _build_keras_model():
inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
d = keras.layers.concatenate(inputs)
d = keras.layers.Dense(8, activation='relu')(d)
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
return model
def run_fn(fn_args: tfx.components.FnArgs):
schema = schema_pb2.Schema()
tfx.utils.parse_pbtxt_file(fn_args.schema_path, schema)
train_dataset = _input_fn(
fn_args.train_files, fn_args.data_accessor, schema, batch_size=10)
eval_dataset = _input_fn(
fn_args.eval_files, fn_args.data_accessor, schema, batch_size=10)
model = _build_keras_model()
model.fit(
train_dataset,
epochs=int(fn_args.train_steps / 20),
steps_per_epoch=20,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
model.save(fn_args.serving_model_dir, save_format='tf')
trainer = tfx.components.Trainer(
module_file=os.path.abspath(trainer_module_file),
examples=example_gen.outputs['examples'],
schema=infer_schema.outputs['schema'],
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=50))
interactive_context.run(trainer)
_serving_model_dir = os.path.join(tempfile.mkdtemp(),
'serving_model/penguins_classification')
eval_config = tfma.EvalConfig(
model_specs=[
tfma.ModelSpec(label_key='species', signature_name='serving_default')
],
metrics_specs=[
tfma.MetricsSpec(metrics=[
tfma.MetricConfig(
class_name='SparseCategoricalAccuracy',
threshold=tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.6})))
])
],
slicing_specs=[tfma.SlicingSpec()])
evaluator = tfx.components.Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
schema=infer_schema.outputs['schema'],
eval_config=eval_config)
interactive_context.run(evaluator)
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=_serving_model_dir)))
interactive_context.run(pusher)
connection_config = interactive_context.metadata_connection_config
store = mlmd.MetadataStore(connection_config)
# All TFX artifacts are stored in the base directory
base_dir = connection_config.sqlite.filename_uri.split('metadata.sqlite')[0]
def display_types(types):
# Helper function to render dataframes for the artifact and execution types
table = {'id': [], 'name': []}
for a_type in types:
table['id'].append(a_type.id)
table['name'].append(a_type.name)
return pd.DataFrame(data=table)
def display_artifacts(store, artifacts):
# Helper function to render dataframes for the input artifacts
table = {'artifact id': [], 'type': [], 'uri': []}
for a in artifacts:
table['artifact id'].append(a.id)
artifact_type = store.get_artifact_types_by_id([a.type_id])[0]
table['type'].append(artifact_type.name)
table['uri'].append(a.uri.replace(base_dir, './'))
return pd.DataFrame(data=table)
def display_properties(store, node):
# Helper function to render dataframes for artifact and execution properties
table = {'property': [], 'value': []}
for k, v in node.properties.items():
table['property'].append(k)
table['value'].append(
v.string_value if v.HasField('string_value') else v.int_value)
for k, v in node.custom_properties.items():
table['property'].append(k)
table['value'].append(
v.string_value if v.HasField('string_value') else v.int_value)
return pd.DataFrame(data=table)
display_types(store.get_artifact_types())
pushed_models = store.get_artifacts_by_type("PushedModel")
display_artifacts(store, pushed_models)
pushed_model = pushed_models[-1]
display_properties(store, pushed_model)
def get_one_hop_parent_artifacts(store, artifacts):
# Get a list of artifacts within a 1-hop of the artifacts of interest
artifact_ids = [artifact.id for artifact in artifacts]
executions_ids = set(
event.execution_id
for event in store.get_events_by_artifact_ids(artifact_ids)
if event.type == mlmd.proto.Event.OUTPUT)
artifacts_ids = set(
event.artifact_id
for event in store.get_events_by_execution_ids(executions_ids)
if event.type == mlmd.proto.Event.INPUT)
return [artifact for artifact in store.get_artifacts_by_id(artifacts_ids)]
# TODO
parent_artifacts = get_one_hop_parent_artifacts(store, [pushed_model])
display_artifacts(store, parent_artifacts)
exported_model = parent_artifacts[0]
display_properties(store, exported_model)
model_parents = get_one_hop_parent_artifacts(store, [exported_model])
display_artifacts(store, model_parents)
used_data = model_parents[0]
display_properties(store, used_data)
display_types(store.get_execution_types())
def find_producer_execution(store, artifact):
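    # Find the execution whose OUTPUT event produced this artifact, i.e. the
    # pipeline step that generated it.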
executions_ids = set(
event.execution_id
for event in store.get_events_by_artifact_ids([artifact.id])
if event.type == mlmd.proto.Event.OUTPUT)
return store.get_executions_by_id(executions_ids)[0]
# TODO
trainer = find_producer_execution(store, exported_model)
display_properties(store, trainer)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install and import TFX
Step2: Please ignore the incompatibility error and warnings. Make sure to re-run the cell.
Step3: Import the MLMD library.
Step4: Download the dataset
Step5: Create an InteractiveContext
Step6: Construct the TFX Pipeline
Step7: Instantiate and run the StatisticsGen Component
Step8: Instantiate and run the SchemaGen Component
Step9: Instantiate and run the Trainer Component
Step10: Run the Trainer component.
Step11: Evaluate and push the model
Step12: Running the TFX pipeline populates the MLMD Database. In the next section, you use the MLMD API to query this database for metadata information.
Step13: Create some helper functions to view the data from the MD store.
Step14: First, query the MD store for a list of all its stored ArtifactTypes.
Step15: Next, query all PushedModel artifacts.
Step16: Query the MD store for the latest pushed model. This notebook has only one pushed model.
Step17: One of the first steps in debugging a pushed model is to look at which trained model is pushed and to see which training data is used to train that model.
Step18: Query the parent artifacts for the pushed model.
Step19: Query the properties for the model.
Step20: Query the upstream artifacts for the model.
Step21: Get the training data the model trained with.
Step22: Now that you have the training data that the model trained with, query the database again to find the training step (execution). Query the MD store for a list of the registered execution types.
Step23: The training step is the ExecutionType named tfx.components.trainer.component.Trainer. Traverse the MD store to get the trainer run that corresponds to the pushed model.
|
14,433 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv('../data/titanic.csv', index_col='PassengerId')
df.head()
df_no_missing = df[['Survived', 'Pclass', 'Fare', 'Age', 'Sex']].dropna()
X_train_withStrings = df_no_missing[['Pclass', 'Fare', 'Age', 'Sex']]
y_train = df_no_missing['Survived']
def strings_to_int(df, target_column):
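    # Map each unique string value in target_column to an integer code (0, 1, ...).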
df_mod = df.copy()
targets_to_rename = df_mod[target_column].unique()
map_to_int = {name: n for n, name in enumerate(targets_to_rename)}
df_mod[target_column] = df_mod[target_column].replace(map_to_int)
return df_mod
X_train = strings_to_int(X_train_withStrings, "Sex")
X_train.head()
clf = DecisionTreeClassifier(random_state=241)
clf.fit(X_train, y_train)
importances = pd.Series(clf.feature_importances_, index = list(X_train))
importances
print(' '.join(importances.sort_values(ascending=False).head(2).index.values))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Keep four features in the sample
Step2: 6. Train a decision tree with the parameter random_state=241 and all other parameters left at their defaults.
Step3: 7. Compute the feature importances and find the two features with the highest importance. Their names are the answer to this task (as the answer, give the feature names separated by a comma, without spaces).
|
14,434 | <ASSISTANT_TASK:>
Python Code:
import os
import requests
import pandas as pd
import csv
import urllib2
import openpyxl
def xls_state():
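    # Download the Census FIPS/MSA state code spreadsheet into the current working directory.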
path_year = os.path.join(os.getcwd())
file_name = path_year + "/" + "MSA_STATE"+ ".xls"
url= "https://www.census.gov/2010census/xls/fips_codes_website.xls"
f = urllib2.urlopen(url)
data = f.read()
with open(file_name, "wb") as code:
code.write(data)
def xls_principal():
path_year = os.path.join(os.getcwd())
file_name = path_year + "/" + "MSA_principal"+ ".xls"
url= "http://www.census.gov/population/metro/files/lists/2015/List2.xls"
f = urllib2.urlopen(url)
data = f.read()
with open(file_name, "wb") as code:
code.write(data)
def main():
    """Main execution"""
xls_state()
xls_principal()
#######################
### Execution ########
#######################
if __name__ == '__main__':
main()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: INGESTION
Step2: Define a function that downloads the census xls file containing each CBSA and the corresponding MSA name and principal cities that belong to that MSA
Step4: MAIN EXECUTION
|
14,435 | <ASSISTANT_TASK:>
Python Code:
from datetime import datetime
import matplotlib.pyplot as plt
import metpy.calc as mpcalc
from metpy.io import get_upper_air_data
from metpy.io.upperair import UseSampleData
from metpy.plots import SkewT
from metpy.units import concatenate
with UseSampleData(): # Only needed to use our local sample data
# Download and parse the data
dataset = get_upper_air_data(datetime(1999, 5, 4, 0), 'OUN')
p = dataset.variables['pressure'][:]
T = dataset.variables['temperature'][:]
Td = dataset.variables['dewpoint'][:]
u = dataset.variables['u_wind'][:]
v = dataset.variables['v_wind'][:]
fig = plt.figure(figsize=(9, 9))
skew = SkewT(fig, rotation=45)
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.plot_barbs(p, u, v)
skew.ax.set_ylim(1000, 100)
skew.ax.set_xlim(-40, 60)
# Calculate LCL height and plot as black dot
l = mpcalc.lcl(p[0], T[0], Td[0])
lcl_temp = mpcalc.dry_lapse(concatenate((p[0], l)), T[0])[-1].to('degC')
skew.plot(l, lcl_temp, 'ko', markerfacecolor='black')
# Calculate full parcel profile and add to plot as black line
prof = mpcalc.parcel_profile(p, T[0], Td[0]).to('degC')
skew.plot(p, prof, 'k', linewidth=2)
# Example of coloring area between profiles
greater = T >= prof
skew.ax.fill_betweenx(p, T, prof, where=greater, facecolor='blue', alpha=0.4)
skew.ax.fill_betweenx(p, T, prof, where=~greater, facecolor='red', alpha=0.4)
# An example of a slanted line at constant T -- in this case the 0
# isotherm
l = skew.ax.axvline(0, color='c', linestyle='--', linewidth=2)
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
# Show the plot
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a new figure. The dimensions here give a good aspect ratio
|
14,436 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import pymc3 as pm
import matplotlib.pyplot as plt
import seaborn
import warnings
warnings.filterwarnings('ignore')
from collections import OrderedDict
from time import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.optimize import fmin_powell
from scipy import integrate
import theano as thno
import theano.tensor as T
def run_models(df, upper_order=5):
'''
Convenience function:
Fit a range of pymc3 models of increasing polynomial complexity.
Suggest limit to max order 5 since calculation time is exponential.
'''
models, traces = OrderedDict(), OrderedDict()
for k in range(1,upper_order+1):
nm = 'k{}'.format(k)
fml = create_poly_modelspec(k)
with pm.Model() as models[nm]:
print('\nRunning: {}'.format(nm))
pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Normal())
traces[nm] = pm.sample(2000, init=None)
return models, traces
def plot_traces(traces, retain=1000):
'''
Convenience function:
Plot traces with overlaid means and values
'''
ax = pm.traceplot(traces[-retain:], figsize=(12,len(traces.varnames)*1.5),
lines={k: v['mean'] for k, v in pm.df_summary(traces[-retain:]).iterrows()})
for i, mn in enumerate(pm.df_summary(traces[-retain:])['mean']):
ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data'
,xytext=(5,10), textcoords='offset points', rotation=90
,va='bottom', fontsize='large', color='#AA0022')
def create_poly_modelspec(k=1):
'''
Convenience function:
Create a polynomial modelspec string for patsy
'''
return ('income ~ educ + hours + age ' + ' '.join(['+ np.power(age,{})'.format(j)
for j in range(2,k+1)])).strip()
data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", header=None, names=['age', 'workclass', 'fnlwgt',
'education-categorical', 'educ',
'marital-status', 'occupation',
'relationship', 'race', 'sex',
'captial-gain', 'capital-loss',
'hours', 'native-country',
'income'])
data.head(10)
data = data[~pd.isnull(data['income'])]
data[data['native-country']==" United-States"]
income = 1 * (data['income'] == " >50K")
age2 = np.square(data['age'])
data = data[['age', 'educ', 'hours']]
data['age2'] = age2
data['income'] = income
income.value_counts()
g = seaborn.pairplot(data)
# Compute the correlation matrix
corr = data.corr()
# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = seaborn.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
seaborn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3,
linewidths=.5, cbar_kws={"shrink": .5}, ax=ax)
with pm.Model() as logistic_model:
pm.glm.GLM.from_formula('income ~ age + age2 + educ + hours', data, family=pm.glm.families.Binomial())
trace_logistic_model = pm.sample(4000)
plot_traces(trace_logistic_model, retain=1000)
plt.figure(figsize=(9,7))
trace = trace_logistic_model[1000:]
seaborn.jointplot(trace['age'], trace['educ'], kind="hex", color="#4CB391")
plt.xlabel("beta_age")
plt.ylabel("beta_educ")
plt.show()
# Linear model with hours == 50 and educ == 12
lm = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*12 +
samples['hours']*50)))
# Linear model with hours == 50 and educ == 16
lm2 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*16 +
samples['hours']*50)))
# Linear model with hours == 50 and educ == 19
lm3 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*19 +
samples['hours']*50)))
# Plot the posterior predictive distributions of P(income > $50K) vs. age
pm.plot_posterior_predictive_glm(trace, eval=np.linspace(25, 75, 1000), lm=lm, samples=100, color="blue", alpha=.15)
pm.plot_posterior_predictive_glm(trace, eval=np.linspace(25, 75, 1000), lm=lm2, samples=100, color="green", alpha=.15)
pm.plot_posterior_predictive_glm(trace, eval=np.linspace(25, 75, 1000), lm=lm3, samples=100, color="red", alpha=.15)
import matplotlib.lines as mlines
blue_line = mlines.Line2D(['lm'], [], color='b', label='High School Education')
green_line = mlines.Line2D(['lm2'], [], color='g', label='Bachelors')
red_line = mlines.Line2D(['lm3'], [], color='r', label='Grad School')
plt.legend(handles=[blue_line, green_line, red_line], loc='lower right')
plt.ylabel("P(Income > $50K)")
plt.xlabel("Age")
plt.show()
b = trace['educ']
plt.hist(np.exp(b), bins=20, normed=True)
plt.xlabel("Odds Ratio")
plt.show()
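# The 2.5th and 97.5th percentiles of the education coefficient give a 95% credible
# interval; exponentiating maps it onto the odds-ratio scale (the factor of 3 below
# presumably scales the effect to a 3-year change in education).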
lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5)
print("P(%.3f < O.R. < %.3f) = 0.95"%(np.exp(3*lb),np.exp(3*ub)))
models_lin, traces_lin = run_models(data, 4)
dfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin'])
dfdic.index.name = 'model'
for nm in dfdic.index:
dfdic.loc[nm, 'lin'] = pm.stats.dic(traces_lin[nm], models_lin[nm])
dfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='dic')
g = seaborn.factorplot(x='model', y='dic', col='poly', hue='poly', data=dfdic, kind='bar', size=6)
dfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin'])
dfdic.index.name = 'model'
for nm in dfdic.index:
dfdic.loc[nm, 'lin'] = pm.stats.waic(traces_lin[nm],models_lin[nm])[0]
dfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='waic')
g = seaborn.factorplot(x='model', y='waic', col='poly', hue='poly', data=dfdic, kind='bar', size=6)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Adult Data Set is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \$50,000 per year. The data set is almost 20 years old, and therefore, not perfect for determining the probability that I will make more than \$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart.
Step2: Scrubbing and cleaning
Step3: Exploring the data
Step4: We do not see many strong correlations here; the highest is 0.30 according to this plot. There is a weak correlation between hours and income
Step5: Some results
Step6: So how do age and education affect the probability of making more than \$50K? To answer this question, we can show how the probability of making more than \$50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models
Step7: Each curve shows how the probability of earning more than \$50K changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than \$50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values.
Step8: Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics
Step9: Model selection
Step10: There isn't a lot of difference between these models in terms of DIC. So our choice is fine in the model above, and there isn't much to be gained for going up to age^3 for example.
|
14,437 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import cv2
import sys
import os
sys.path.insert(0, os.path.abspath('..'))
import salientregions as sr
import cProfile
%pylab inline
#Load the image
path_to_image = 'images/graffiti.jpg'
img = cv2.imread(path_to_image)
sr.show_image(img)
%%timeit
#Time: creation of the detector
det = sr.SalientDetector(SE_size_factor=0.20,
lam_factor=4)
det = sr.SalientDetector(SE_size_factor=0.20,
lam_factor=4)
%%timeit
#Time: detect all regions in color image
regions = det.detect(img,
find_holes=True,
find_islands=True,
find_indentations=True,
find_protrusions=True,
visualize=False)
cProfile.run('det.detect(img, find_holes=True, find_islands=True, find_indentations=True, \
find_protrusions=True, visualize=False)')
%%timeit
#Only holes and islands
regions = det.detect(img,
find_holes=True,
find_islands=True,
find_indentations=False,
find_protrusions=False,
visualize=False)
lam_factor = 3
area_factor_large = 0.001
area_factor_verylarge = 0.1
lam = 50
connectivity = 4
weights=(0.33,0.33,0.33)
grayscale = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
%%timeit
#Creation of the binarizer
binarizer = sr.DatadrivenBinarizer(area_factor_large=area_factor_large, area_factor_verylarge=area_factor_verylarge,
lam=lam, weights=weights, connectivity=connectivity)
binarizer = sr.DatadrivenBinarizer(area_factor_large=area_factor_large, area_factor_verylarge=area_factor_verylarge,
lam=lam, weights=weights, connectivity=connectivity)
%%timeit
#The binarization
binarized = binarizer.binarize(grayscale, visualize=False)
cProfile.run('binarizer.binarize(grayscale, visualize=False)')
binarized = binarizer.binarize(grayscale, visualize=False)
regions = det.detect(img,
find_holes=True,
find_islands=True,
find_indentations=True,
find_protrusions=True,
visualize=False)
se = det.SE
area_factor=0.05
%%timeit
detector = sr.BinaryDetector(se, lam, area_factor, connectivity)
regions = detector.detect(binarized, visualize=False)
detector = sr.BinaryDetector(se, lam, area_factor, connectivity)
cProfile.run('detector.detect(binarized, visualize=False)')
#Only holes and islands
detector = sr.BinaryDetector(se, lam, area_factor, connectivity)
cProfile.run('detector.detect(binarized, find_indentations=False, \
find_protrusions=False, visualize=False)')
mser = cv2.MSER_create()
%%timeit
regions = mser.detectRegions(img, None)
cProfile.run('mser.detectRegions(img, None)')
%timeit cv2.morphologyEx(binarized, cv2.MORPH_TOPHAT, se)
%timeit cv2.morphologyEx(binarized, cv2.MORPH_OPEN, se)
%timeit cv2.erode(binarized, se)
%timeit cv2.dilate(binarized, se)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Binarization
Step2: Binary detection
Step3: MSER detection
Step4: Conclusion
|
14,438 | <ASSISTANT_TASK:>
Python Code:
from neon.initializers import Gaussian
from neon.optimizers import GradientDescentMomentum, Schedule
from neon.layers import Conv, Dropout, Activation, Pooling, GeneralizedCost
from neon.transforms import Rectlin, Softmax, CrossEntropyMulti, Misclassification
from neon.models import Model
from neon.data import CIFAR10
from neon.callbacks.callbacks import Callbacks
from neon.backends import gen_backend
be = gen_backend(batch_size=128, backend='gpu')
# hyperparameters
learning_rate = 0.05
weight_decay = 0.001
num_epochs = 25
print "Loading Data"
dataset = CIFAR10(path='data/', normalize=False,
contrast_normalize=True, whiten=True,
pad_classes=True) # CIFAR10 has 10 classes, network has 16 outputs, so we pad some extra classes.
train_set = dataset.train_iter
valid_set = dataset.valid_iter
print "Building Model"
init_uni = Gaussian(scale=0.05)
opt_gdm = GradientDescentMomentum(learning_rate=float(learning_rate), momentum_coef=0.9,
wdecay=float(weight_decay),
schedule=Schedule(step_config=[200, 250, 300], change=0.1))
relu = Rectlin()
conv = dict(init=init_uni, batch_norm=False, activation=relu)
convp1 = dict(init=init_uni, batch_norm=False, activation=relu, padding=1)
convp1s2 = dict(init=init_uni, batch_norm=False, activation=relu, padding=1, strides=2)
layers = [
Conv((3, 3, 64), **convp1),
Conv((3, 3, 64), **convp1s2),
Conv((3, 3, 128), **convp1),
Conv((3, 3, 128), **convp1s2),
Conv((3, 3, 128), **convp1),
Conv((1, 1, 128), **conv),
Conv((1, 1, 16), **conv),
Pooling(8, op="avg"),
Activation(Softmax())]
cost = GeneralizedCost(costfunc=CrossEntropyMulti())
mlp = Model(layers=layers)
# configure callbacks
callbacks = Callbacks(mlp, output_file='data.h5', eval_set=valid_set, eval_freq=1)
print "Training"
mlp.fit(train_set, optimizer=opt_gdm, num_epochs=num_epochs, cost=cost, callbacks=callbacks)
print('Misclassification error = %.1f%%' % (mlp.eval(valid_set, metric=Misclassification())*100))
from neon.visualizations.figure import cost_fig, hist_fig, deconv_summary_page
from neon.visualizations.data import h5_cost_data, h5_hist_data, h5_deconv_data
from bokeh.plotting import output_notebook, show
cost_data = h5_cost_data('data.h5', False)
output_notebook()
show(cost_fig(cost_data, 400, 800, epoch_axis=False))
layers = [
Conv((3, 3, 64), **convp1),
Conv((3, 3, 64), **convp1s2),
Dropout(keep=.5), # Added Dropout
Conv((3, 3, 128), **convp1),
Conv((3, 3, 128), **convp1s2),
Dropout(keep=.5), # Added Dropout
Conv((3, 3, 128), **convp1),
Conv((1, 1, 128), **conv),
Conv((1, 1, 16), **conv),
Pooling(8, op="avg"),
Activation(Softmax())]
cost = GeneralizedCost(costfunc=CrossEntropyMulti())
mlp = Model(layers=layers)
# configure callbacks
callbacks = Callbacks(mlp, output_file='data.h5', eval_set=valid_set, eval_freq=1)
print "Training"
mlp.fit(train_set, optimizer=opt_gdm, num_epochs=num_epochs, cost=cost, callbacks=callbacks)
print('Misclassification error = %.1f%%' % (mlp.eval(valid_set, metric=Misclassification())*100))
from neon.visualizations.figure import cost_fig, hist_fig, deconv_summary_page
from neon.visualizations.data import h5_cost_data, h5_hist_data, h5_deconv_data
from bokeh.plotting import output_notebook, show
cost_data = h5_cost_data('data.h5', False)
output_notebook()
show(cost_fig(cost_data, 400, 800, epoch_axis=False))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Overfitting
Step2: This situation illustrates the importance of plotting the validation loss (blue) in addition to the training cost (red). The training cost may mislead the user into thinking that the model is continuing to perform well, but we can see from the validation loss that the model has begun to overfit.
Step3: We then plot the results of the training run below.
|
14,439 | <ASSISTANT_TASK:>
Python Code:
# Put your code here!
# Put your code here!
from IPython.display import HTML
HTML(
"""
<iframe
src="https://goo.gl/forms/NOKKHPQ0oKn1B7e23?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 2
Step3: Assignment wrapup
|
14,440 | <ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn.tree import DecisionTreeClassifier
from sklearn import datasets
# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Create decision tree classifer object using gini
clf = DecisionTreeClassifier(criterion='gini', random_state=0)
# Train model
model = clf.fit(X, y)
# Make new observation
observation = [[ 5, 4, 3, 2]]
# Predict observation's class
model.predict(observation)
# View predicted class probabilities for the three classes
model.predict_proba(observation)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Iris Dataset
Step2: Create Decision Tree Using Gini Impurity
Step3: Train Model
Step4: Create Observation To Predict
Step5: Predict Observation
Step6: View Predicted Probabilities
|
14,441 | <ASSISTANT_TASK:>
Python Code:
import qspectra as qs
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Parameters of the electronic Hamiltonian
ham = qs.ElectronicHamiltonian(np.array([[12881., 120.], [120., 12719.]]),
bath=qs.DebyeBath(qs.CM_K * 77., 35., 106.),
dipoles=[[1., 0., 0.], [2. * np.cos(.3), 2. * np.sin(.3), 0.]])
# Bath parameters for the Redfield and HEOM models
red_dimer = qs.RedfieldModel(ham, hilbert_subspace='gef', discard_imag_corr=True, unit_convert=qs.CM_FS)
heom_dimer = qs.HEOMModel(ham, hilbert_subspace='gef', unit_convert=qs.CM_FS, level_cutoff=3, low_temp_corr=False)
# Bath parameters for the ZOFE model:
# pseudomode bath fit to the Drude spectral density for FMO for 77K of Ishizaki and Fleming
# (each PM is represented by a Lorentzian at frequency Omega, with width gamma, and of strength huang
# in the bath correlation SPECTRUM, NOT spectral density)
Omega = [-500., -200., -90., 1., 21., 60., 80., 130., 200., 300., 400., 500., 600., 800., 1100., 1500.] # frequencies of PMs
gamma = [500., 100., 50., 50., 50., 50., 80., 40., 80., 150., 200., 200., 80., 250., 200., 300.] # dampings of the PMs
huang = [-2.5133e-03, -7.5398e-03, -2.5133e-02, 5.0265e+01, 2.2619e+00, 4.5239e-02, 2.7646e-01,
9.2991e-03, 2.2619e-02, 1.5080e-02, 3.0159e-03, 3.5186e-03, 2.8274e-04, 1.7593e-03,
4.3982e-04, 4.3982e-04] # Huang-Rhys factors of PMs (couplings to PMs)
n_sites = ham.n_sites
numb_pm = len(Omega)
on = np.ones(n_sites, complex)
Omega = np.array([Omega[pm]*on for pm in range(numb_pm)])
huang = np.array([huang[pm]*on for pm in range(numb_pm)])
gamma = np.array([gamma[pm]*on for pm in range(numb_pm)])
zofe_ham = qs.ElectronicHamiltonian(ham.H('e'),
bath=qs.PseudomodeBath(numb_pm, Omega, gamma, huang),
dipoles=ham.dipoles)
zofe_dimer = qs.ZOFEModel(zofe_ham, hilbert_subspace='ge', unit_convert=qs.CM_FS)
f, X = qs.absorption_spectra(heom_dimer, time_max=10000)
f2, X2 = qs.absorption_spectra(zofe_dimer, time_max=10000)
f3, X3 = qs.absorption_spectra(red_dimer, time_max=10000)
plt.plot(f, X, label='HEOM')
plt.plot(f2, X2, label='ZOFE')
plt.plot(f3, X3, label='Redfield')
plt.xlabel('Frequency (cm$^{-1}$)')
plt.ylabel('Absorption [arb. unit]')
plt.xlim(12500, 13200)
plt.legend();
%%time
(f1, t2, f3), X = qs.two_dimensional_spectra(red_dimer, coherence_time_max=1000,
population_times=np.linspace(0, 1000, 50),
geometry='-++', polarization='xxxx',
include_signal='GSB,ESE,ESA')
plt.figure(figsize=(8, 8))
plt.contourf(f1, f3, X[:,5,:].real, 30, cmap='RdBu', vmax=6e5, vmin=-6e5)
plt.xlabel('Coherence frequency (cm$^{-1}$)')
plt.ylabel('Rephasing frequency (cm$^{-1}$)')
plt.xlim(12300, 13300)
plt.ylim(12300, 13300);
%%time
(f1, t2, f3), X = qs.two_dimensional_spectra(heom_dimer, coherence_time_max=1000,
population_times=np.linspace(0, 1000, 50),
geometry='-++', polarization='xxxx',
include_signal='GSB,ESE,ESA')
plt.figure(figsize=(8, 8))
plt.contourf(f1, f3, X[:,5,:].real, 30, cmap='RdBu', vmax=6e5, vmin=-6e5)
plt.xlabel('Coherence frequency (cm$^{-1}$)')
plt.ylabel('Rephasing frequency (cm$^{-1}$)')
plt.xlim(12300, 13300)
plt.ylim(12300, 13300);
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Absorption spectra
Step2: 2D spectra
|
14,442 | <ASSISTANT_TASK:>
Python Code:
from __future__ import division
import graphlab
products = graphlab.SFrame('amazon_baby_subset.gl/')
# The same feature processing (same as the previous assignments)
# ---------------------------------------------------------------
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
# Remove punctuation.
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
products
train_data, validation_data = products.random_split(.8, seed=2)
print 'Training set : %d data points' % len(train_data)
print 'Validation set : %d data points' % len(validation_data)
import numpy as np
def get_numpy_data(data_sframe, features, label):
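    # Prepend a constant 'intercept' column, then return the selected feature columns
    # and the label column as numpy arrays.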
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
'''
produces probablistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
## YOUR CODE HERE
scores = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
## YOUR CODE HERE
predictions = 1. / (1. + np.exp(- scores))
return predictions
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
# Compute the dot product of errors and feature
## YOUR CODE HERE
derivative = np.dot(errors, feature)
# add L2 penalty term for any feature that isn't the intercept.
if not feature_is_constant:
## YOUR CODE HERE
derivative -= 2 * l2_penalty * coefficient
return derivative
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
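    # Log likelihood of the observed labels, minus the L2 penalty applied to every
    # coefficient except the intercept (coefficients[0]).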
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
return lp
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
## YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
is_intercept = (j == 0)
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
## YOUR CODE HERE
derivative = feature_derivative_with_L2(errors, feature_matrix[:,j], coefficients[j], l2_penalty, is_intercept)
# add the step size times the derivative to the current coefficient
## YOUR CODE HERE
coefficients[j] += step_size * derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
# run with L2 = 0
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=0, max_iter=501)
# run with L2 = 4
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=4, max_iter=501)
# run with L2 = 10
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=10, max_iter=501)
# run with L2 = 1e2
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e2, max_iter=501)
# run with L2 = 1e3
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e3, max_iter=501)
# run with L2 = 1e5
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e5, max_iter=501)
table = graphlab.SFrame({'word': ['(intercept)'] + important_words})
def add_coefficients_to_table(coefficients, column_name):
table[column_name] = coefficients
return table
add_coefficients_to_table(coefficients_0_penalty, 'coefficients [L2=0]')
add_coefficients_to_table(coefficients_4_penalty, 'coefficients [L2=4]')
add_coefficients_to_table(coefficients_10_penalty, 'coefficients [L2=10]')
add_coefficients_to_table(coefficients_1e2_penalty, 'coefficients [L2=1e2]')
add_coefficients_to_table(coefficients_1e3_penalty, 'coefficients [L2=1e3]')
add_coefficients_to_table(coefficients_1e5_penalty, 'coefficients [L2=1e5]')
subtable = table[['word', 'coefficients [L2=0]']]
ptable = sorted(subtable, key=lambda x: x['coefficients [L2=0]'], reverse=True)[:5]
ntable = sorted(subtable, key=lambda x: x['coefficients [L2=0]'], reverse=False)[:5]
positive_words = [w['word'] for w in ptable]
print positive_words
negative_words = [w['word'] for w in ntable]
print negative_words
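# Iterating over a GraphLab SFrame yields one dictionary per row (column name -> value),
# which is why the built-in sorted() with a key function works here: it ranks the rows of
# the unregularized model's coefficient table and keeps the 5 largest and 5 smallest.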
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
def make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):
cmap_positive = plt.get_cmap('Reds')
cmap_negative = plt.get_cmap('Blues')
xx = l2_penalty_list
plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')
table_positive_words = table.filter_by(column_name='word', values=positive_words)
table_negative_words = table.filter_by(column_name='word', values=negative_words)
del table_positive_words['word']
del table_negative_words['word']
for i in xrange(len(positive_words)):
color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))
plt.plot(xx, table_positive_words[i:i+1].to_numpy().flatten(),
'-', label=positive_words[i], linewidth=4.0, color=color)
for i in xrange(len(negative_words)):
color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))
plt.plot(xx, table_negative_words[i:i+1].to_numpy().flatten(),
'-', label=negative_words[i], linewidth=4.0, color=color)
plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)
plt.axis([1, 1e5, -1, 2])
plt.title('Coefficient path')
plt.xlabel('L2 penalty ($\lambda$)')
plt.ylabel('Coefficient value')
plt.xscale('log')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()
make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])
def get_classification_accuracy(feature_matrix, sentiment, coefficients):
scores = np.dot(feature_matrix, coefficients)
apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)
predictions = apply_threshold(scores)
num_correct = (predictions == sentiment).sum()
accuracy = num_correct / len(feature_matrix)
return accuracy
train_accuracy = {}
train_accuracy[0] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_0_penalty)
train_accuracy[4] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_4_penalty)
train_accuracy[10] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_10_penalty)
train_accuracy[1e2] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e2_penalty)
train_accuracy[1e3] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e3_penalty)
train_accuracy[1e5] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e5_penalty)
validation_accuracy = {}
validation_accuracy[0] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_0_penalty)
validation_accuracy[4] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_4_penalty)
validation_accuracy[10] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_10_penalty)
validation_accuracy[1e2] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e2_penalty)
validation_accuracy[1e3] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e3_penalty)
validation_accuracy[1e5] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e5_penalty)
# Build a simple report
for key in sorted(validation_accuracy.keys()):
print "L2 penalty = %g" % key
print "train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key])
print "--------------------------------------------------------------------------------"
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and process review dataset
Step2: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations
Step3: Now, let us take a look at what the dataset looks like (Note
Step4: Train-Validation split
Step5: Convert SFrame to NumPy array
Step6: We convert both the training and validation sets into NumPy arrays.
Step7: Building on logistic regression with no L2 penalty assignment
Step8: Adding L2 penalty
Step9: Quiz question
Step10: Quiz question
Step11: Explore effects of L2 regularization
Step12: Compare coefficients
Step13: Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.
Step14: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Step15: Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
Step16: Run the following cell to generate the plot. Use the plot to answer the following quiz question.
Step17: Quiz Question
Step18: Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models.
|
14,443 | <ASSISTANT_TASK:>
Python Code:
# Author: Ivana Kojcic <ivana.kojcic@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Kostiantyn Maksymenko <kostiantyn.maksymenko@gmail.com>
# Samuel Deslauriers-Gauthier <sam.deslauriers@gmail.com>
# License: BSD (3-clause)
import os.path as op
import numpy as np
import mne
from mne.datasets import sample
print(__doc__)
# In this example, raw data will be simulated for the sample subject, so its
# information needs to be loaded. This step will download the data if it is not
# already on your machine. Subjects directory is also set so it doesn't need
# to be given to functions.
data_path = sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
subject = 'sample'
meg_path = op.join(data_path, 'MEG', subject)
# First, we get an info structure from the sample subject.
fname_info = op.join(meg_path, 'sample_audvis_raw.fif')
info = mne.io.read_info(fname_info)
tstep = 1 / info['sfreq']
# To simulate sources, we also need a source space. It can be obtained from the
# forward solution of the sample subject.
fwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
src = fwd['src']
# To simulate raw data, we need to define when the activity occurs using events
# matrix and specify the IDs of each event.
# Noise covariance matrix also needs to be defined.
# Here, both are loaded from the sample dataset, but they can also be specified
# by the user.
fname_event = op.join(meg_path, 'sample_audvis_raw-eve.fif')
fname_cov = op.join(meg_path, 'sample_audvis-cov.fif')
events = mne.read_events(fname_event)
noise_cov = mne.read_cov(fname_cov)
# Standard sample event IDs. These values will correspond to the third column
# in the events matrix.
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
# Take only a few events for speed
events = events[:80]
activations = {
'auditory/left':
[('G_temp_sup-G_T_transv-lh', 30), # label, activation (nAm)
('G_temp_sup-G_T_transv-rh', 60)],
'auditory/right':
[('G_temp_sup-G_T_transv-lh', 60),
('G_temp_sup-G_T_transv-rh', 30)],
'visual/left':
[('S_calcarine-lh', 30),
('S_calcarine-rh', 60)],
'visual/right':
[('S_calcarine-lh', 60),
('S_calcarine-rh', 30)],
}
annot = 'aparc.a2009s'
# Load the 4 necessary label names.
label_names = sorted(set(activation[0]
for activation_list in activations.values()
for activation in activation_list))
region_names = list(activations.keys())
def data_fun(times, latency, duration):
"""Function to generate source time courses for evoked responses,
parametrized by latency and duration."""
f = 15 # oscillating frequency, beta band [Hz]
sigma = 0.375 * duration
sinusoid = np.sin(2 * np.pi * f * (times - latency))
gf = np.exp(- (times - latency - (sigma / 4.) * rng.rand(1)) ** 2 /
(2 * (sigma ** 2)))
return 1e-9 * sinusoid * gf
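# In words: the simulated waveform is a 15 Hz (beta-band) sinusoid multiplied by a
# Gaussian envelope whose width scales with `duration` and whose centre is jittered
# slightly around `latency` using `rng` (defined below, before the function is called).
# The 1e-9 factor converts the nAm amplitudes listed in `activations` into Am.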
times = np.arange(150, dtype=np.float) / info['sfreq']
duration = 0.03
rng = np.random.RandomState(7)
source_simulator = mne.simulation.SourceSimulator(src, tstep=tstep)
for region_id, region_name in enumerate(region_names, 1):
events_tmp = events[np.where(events[:, 2] == region_id)[0], :]
for i in range(2):
label_name = activations[region_name][i][0]
label_tmp = mne.read_labels_from_annot(subject, annot,
subjects_dir=subjects_dir,
regexp=label_name,
verbose=False)
label_tmp = label_tmp[0]
amplitude_tmp = activations[region_name][i][1]
if region_name.split('/')[1][0] == label_tmp.hemi[0]:
latency_tmp = 0.115
else:
latency_tmp = 0.1
wf_tmp = data_fun(times, latency_tmp, duration)
source_simulator.add_data(label_tmp,
amplitude_tmp * wf_tmp,
events_tmp)
# To obtain a SourceEstimate object, we need to use `get_stc()` method of
# SourceSimulator class.
stc_data = source_simulator.get_stc()
raw_sim = mne.simulation.simulate_raw(info, source_simulator, forward=fwd,
cov=None)
raw_sim.set_eeg_reference(projection=True)
mne.simulation.add_noise(raw_sim, cov=noise_cov, random_state=0)
mne.simulation.add_eog(raw_sim, random_state=0)
mne.simulation.add_ecg(raw_sim, random_state=0)
# Plot original and simulated raw data.
raw_sim.plot(title='Simulated raw data')
epochs = mne.Epochs(raw_sim, events, event_id, tmin=-0.2, tmax=0.3,
baseline=(None, 0))
evoked_aud_left = epochs['auditory/left'].average()
evoked_vis_right = epochs['visual/right'].average()
# Visualize the evoked data
evoked_aud_left.plot(spatial_colors=True)
evoked_vis_right.plot(spatial_colors=True)
method, lambda2 = 'dSPM', 1. / 9.
inv = mne.minimum_norm.make_inverse_operator(epochs.info, fwd, noise_cov)
stc_aud = mne.minimum_norm.apply_inverse(
evoked_aud_left, inv, lambda2, method)
stc_vis = mne.minimum_norm.apply_inverse(
evoked_vis_right, inv, lambda2, method)
stc_diff = stc_aud - stc_vis
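# Subtracting the two dSPM estimates gives one source-space map whose sign shows which
# condition (auditory/left vs. visual/right) dominates at each vertex, so the simulated
# auditory and visual labels should appear with opposite signs in the plot below.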
brain = stc_diff.plot(subjects_dir=subjects_dir, initial_time=0.1,
hemi='split', views=['lat', 'med'])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In order to simulate source time courses, labels of desired active regions
Step3: Create simulated source activity
Step4: Here,
Step5: Simulate raw data
Step6: Extract epochs and compute evoked responses
Step7: Reconstruct simulated source time courses using dSPM inverse operator
|
14,444 | <ASSISTANT_TASK:>
Python Code:
%load_ext snakeviz
%load_ext memory_profiler
%load_ext line_profiler
%load_ext autoreload
%autoreload 2
import re
from collections import Counter
def words(text):
return re.findall(r'\w+', text.lower())
WORDS = Counter(words(open('big.txt').read()))
def P(word, N=sum(WORDS.values())):
return WORDS[word] / N
def candidates(word):
return (known([word]) or known(edits1(word)) or known(edits2(word)) or [word])
def known(words):
return set(w for w in words if w in WORDS)
def edits1(word):
letters = 'abcdefghijklmnopqrstuvwxyz'
splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
deletes = [L + R[1:] for L, R in splits if R]
transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R)>1]
replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
inserts = [L + c + R for L, R in splits for c in letters]
return set(deletes + transposes + replaces + inserts)
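# For a word of length n, edits1 builds n deletions, n-1 transpositions, 26n replacements
# and 26(n+1) insertions (54n + 25 strings before set() removes duplicates), so edits2,
# which applies edits1 twice, can easily generate tens of thousands of candidates.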
def edits2(word):
return (e2 for e1 in edits1(word) for e2 in edits1(e1))
def word_correction(word):
return max(candidates(word), key=P)
def sentence_correction(sentence):
return " ".join(word_correction(word) for word in sentence.split(" "))
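# candidates() prefers, in order: the word itself if it is known, then known words one
# edit away, then known words two edits away, and finally the unmodified word.
# word_correction() then returns the candidate with the highest unigram probability P,
# estimated from word counts in big.txt (so, for instance, 'rocet' would be corrected to
# 'rocket' provided 'rocket' occurs in big.txt).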
sentence_correction('grofilingg is not rocet Sgience')
! time python script.py 'grofilingg is not rocet Sgience'
%time sentence_correction('grofilingg is not rocet Sgience')
! python -m timeit -s "..."
%timeit sentence_correction('grofilingg is not rocet Sgience')
%memit sentence_correction('grofilingg is not rocet Sgience')
! python -m cProfile -o script.prof script.py 'grofilingg is not rocet Sgience'
%prun sentence_correction('grofilingg is not rocet Sgience')
! snakeviz script.prof
%snakeviz sentence_correction('grofilingg is not rocet Sgience')
! python -m memory_profiler script.py 'grofilingg is not rocet Sgience'
from script import sentence_correction, edits1
%mprun -f edits1 sentence_correction('grofilingg is not rocet Sgience')
! pyinstrument script.py 'grofilingg is not rocet Sgience'
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Amdahl's law, focus on one part at a time
Step2: Casual Profiling 👕👖
Step3: timeit ⌛⌛⌛
Step4: each run does thousands or millions of repetitions, compensating for very fast operations
Step5: Casual Profiling Landscape ⛰️
Step6: Snakeviz 🐍
Step7: Memory profiler 💾
Step8: Offline Profiling Landscape ⛰️
|
14,445 | <ASSISTANT_TASK:>
Python Code:
# Authors: Eric Larson <larson.eric.d@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Mark Wronkiewicz <wronk.mark@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.preprocessing import maxwell_filter
print(__doc__)
data_path = mne.datasets.sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
ctc_fname = data_path + '/SSS/ct_sparse_mgh.fif'
fine_cal_fname = data_path + '/SSS/sss_cal_mgh.dat'
# Preprocess with Maxwell filtering
raw = mne.io.Raw(raw_fname)
raw.info['bads'] = ['MEG 2443', 'EEG 053', 'MEG 1032', 'MEG 2313'] # set bads
# Here we don't use tSSS (set st_duration) because MGH data is very clean
raw_sss = maxwell_filter(raw, cross_talk=ctc_fname, calibration=fine_cal_fname)
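# maxwell_filter performs Signal Space Separation (SSS): the MEG signal is decomposed
# into components originating inside versus outside the sensor array, and the external
# (environmental interference) part is discarded. The cross-talk and fine-calibration
# files loaded above refine that decomposition for this recording system.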
# Select events to extract epochs from, pick M/EEG channels, and plot evoked
tmin, tmax = -0.2, 0.5
event_id = {'Auditory/Left': 1}
events = mne.find_events(raw, 'STI 014')
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
include=[], exclude='bads')
for r, kind in zip((raw, raw_sss), ('Raw data', 'Maxwell filtered data')):
epochs = mne.Epochs(r, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(eog=150e-6),
preload=False)
evoked = epochs.average()
evoked.plot(window_title=kind)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
|
14,446 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inpe', 'besm-2-7', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
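# Illustrative example only (hypothetical values, not taken from BESM-2-7 documentation):
# an ENUM cell below would typically be completed by pairing the property id with one of
# its listed choices, e.g.
# DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# DOC.set_value("TEOS-10")
# whereas STRING cells accept a free-text description.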
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
14,447 | <ASSISTANT_TASK:>
Python Code:
import recordlinkage
from recordlinkage.datasets import load_febrl1
dfA = load_febrl1()
dfA
indexer = recordlinkage.Index()
indexer.full()
candidate_links = indexer.index(dfA)
print (len(dfA), len(candidate_links))
# (1000*1000-1000)/2 = 499500
indexer = recordlinkage.Index()
indexer.block("given_name")
candidate_links = indexer.index(dfA)
len(candidate_links)
compare_cl = recordlinkage.Compare()
compare_cl.exact("given_name", "given_name", label="given_name")
compare_cl.string("surname", "surname", method="jarowinkler", threshold=0.85, label="surname")
compare_cl.exact("date_of_birth", "date_of_birth", label="date_of_birth")
compare_cl.exact("suburb", "suburb", label="suburb")
compare_cl.exact("state", "state", label="state")
compare_cl.string("address_1", "address_1", threshold=0.85, label="address_1")
features = compare_cl.compute(candidate_links, dfA)
features.head(10)
features.describe()
features.sum(axis=1).value_counts().sort_index(ascending=False)
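# Each row of `features` holds six 0/1 comparison outcomes, so the row sum counts how many
# attributes agree for that candidate pair. The value_counts above shows how many pairs
# reach each agreement level; the threshold below keeps pairs agreeing on more than 3 of
# the 6 compared attributes.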
matches = features[features.sum(axis=1) > 3]
matches
import recordlinkage
from recordlinkage.datasets import load_febrl1
dfA = load_febrl1()
# Indexation step
indexer = recordlinkage.Index()
indexer.block(left_on="given_name")
candidate_links = indexer.index(dfA)
# Comparison step
compare_cl = recordlinkage.Compare()
compare_cl.exact("given_name", "given_name", label="given_name")
compare_cl.string("surname", "surname", method="jarowinkler", threshold=0.85, label="surname")
compare_cl.exact("date_of_birth", "date_of_birth", label="date_of_birth")
compare_cl.exact("suburb", "suburb", label="suburb")
compare_cl.exact("state", "state", label="state")
compare_cl.string("address_1", "address_1", threshold=0.85, label="address_1")
features = compare_cl.compute(candidate_links, dfA)
# Classification step
matches = features[features.sum(axis=1) > 3]
print(len(matches))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The dataset is loaded with the following code. The returned datasets are
Step2: Make record pairs
Step3: With the method index, all possible (and unique) record pairs are
Step4: Many of these record pairs do not belong to the same person. The
Step5: The argument "given_name" is the blocking variable. This variable has
Step6: The comparing of record pairs starts when the compute method is
Step7: The last step is to decide which records belong to the same person. In
Step8: Full code
|
14,448 | <ASSISTANT_TASK:>
Python Code:
# import common packages
import numpy as np
from collections import OrderedDict
# lib from Qiskit Aqua Chemistry
from qiskit_aqua_chemistry import FermionicOperator
# lib from Qiskit Aqua
from qiskit_aqua import Operator
from qiskit_aqua import (get_algorithm_instance, get_optimizer_instance,
get_variational_form_instance, get_initial_state_instance)
# lib for driver
from qiskit_aqua_chemistry.drivers import ConfigurationManager
# using driver to get fermionic Hamiltonian
# PySCF example
cfg_mgr = ConfigurationManager()
pyscf_cfg = OrderedDict([('atom', 'Li .0 .0 .0; H .0 .0 1.6'),
('unit', 'Angstrom'),
('charge', 0),
('spin', 0),
('basis', 'sto3g')])
section = {}
section['properties'] = pyscf_cfg
driver = cfg_mgr.get_driver_instance('PYSCF')
molecule = driver.run(section)
# please be aware that the idx here with respective to original idx
freeze_list = [0]
remove_list = [-3, -2] # negative number denotes the reverse order
map_type = 'parity'
h1 = molecule._one_body_integrals
h2 = molecule._two_body_integrals
nuclear_repulsion_energy = molecule._nuclear_repulsion_energy
num_particles = molecule._num_alpha + molecule._num_beta
num_spin_orbitals = molecule._num_orbitals * 2
print("HF energy: {}".format(molecule._hf_energy - molecule._nuclear_repulsion_energy))
print("# of electrons: {}".format(num_particles))
print("# of spin orbitals: {}".format(num_spin_orbitals))
# prepare full idx of freeze_list and remove_list
# convert all negative idx to positive
remove_list = [x % molecule._num_orbitals for x in remove_list]
freeze_list = [x % molecule._num_orbitals for x in freeze_list]
# update the idx in remove_list of the idx after frozen, since the idx of orbitals are changed after freezing
remove_list = [x - len(freeze_list) for x in remove_list]
remove_list += [x + molecule._num_orbitals - len(freeze_list) for x in remove_list]
freeze_list += [x + molecule._num_orbitals for x in freeze_list]
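# The index manipulation above reflects the block-spin ordering used by FermionicOperator:
# all alpha (spin-up) orbitals come first, followed by all beta (spin-down) orbitals, so
# every spatial-orbital index is listed twice (once as-is and once offset by the size of
# one spin block). The remove_list entries are also shifted down by len(freeze_list)
# because freezing removes orbitals and renumbers the ones that remain.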
# prepare fermionic hamiltonian with orbital freezing and eliminating, and then map to qubit hamiltonian
# and if PARITY mapping is selected, reduction qubits
energy_shift = 0.0
qubit_reduction = True if map_type == 'parity' else False
ferOp = FermionicOperator(h1=h1, h2=h2)
if len(freeze_list) > 0:
ferOp, energy_shift = ferOp.fermion_mode_freezing(freeze_list)
num_spin_orbitals -= len(freeze_list)
num_particles -= len(freeze_list)
if len(remove_list) > 0:
ferOp = ferOp.fermion_mode_elimination(remove_list)
num_spin_orbitals -= len(remove_list)
qubitOp = ferOp.mapping(map_type=map_type, threshold=0.00000001)
qubitOp = qubitOp.two_qubit_reduced_operator(num_particles) if qubit_reduction else qubitOp
qubitOp.chop(10**-10)
# Using exact eigensolver to get the smallest eigenvalue
exact_eigensolver = get_algorithm_instance('ExactEigensolver')
exact_eigensolver.init_args(qubitOp, k=1)
ret = exact_eigensolver.run()
print('The computed energy is: {:.12f}'.format(ret['eigvals'][0].real))
print('The total ground state energy is: {:.12f}'.format(ret['eigvals'][0].real + energy_shift + nuclear_repulsion_energy))
from qiskit import IBMQ
IBMQ.load_accounts()
# setup COBYLA optimizer
max_eval = 200
cobyla = get_optimizer_instance('COBYLA')
cobyla.set_options(maxiter=max_eval)
# setup HartreeFock state
HF_state = get_initial_state_instance('HartreeFock')
HF_state.init_args(qubitOp.num_qubits, num_spin_orbitals, map_type,
qubit_reduction, num_particles)
# setup UCCSD variational form
var_form = get_variational_form_instance('UCCSD')
var_form.init_args(qubitOp.num_qubits, depth=1,
num_orbitals=num_spin_orbitals, num_particles=num_particles,
active_occupied=[0], active_unoccupied=[0, 1],
initial_state=HF_state, qubit_mapping=map_type,
two_qubit_reduction=qubit_reduction, num_time_slices=1)
# setup VQE
vqe_algorithm = get_algorithm_instance('VQE')
vqe_algorithm.setup_quantum_backend(backend='statevector_simulator')
vqe_algorithm.init_args(qubitOp, 'matrix', var_form, cobyla)
results = vqe_algorithm.run()
print('The computed ground state energy is: {:.12f}'.format(results['eigvals'][0]))
print('The total ground state energy is: {:.12f}'.format(results['eigvals'][0] + energy_shift + nuclear_repulsion_energy))
print("Parameters: {}".format(results['opt_params']))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Step 2
Step3: We use the classical eigen decomposition to get the smallest eigenvalue as a reference.
Step4: Step 3
Step5: Step 4
|
14,449 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import time
import helper
source_path = 'data/letters_source.txt'
target_path = 'data/letters_target.txt'
source_sentences = helper.load_data(source_path)
target_sentences = helper.load_data(target_path)
source_sentences[:50].split('\n')
target_sentences[:50].split('\n')
def extract_character_vocab(data):
special_words = ['<PAD>', '<UNK>', '<GO>', '<EOS>']
set_words = set([character for line in data.split('\n') for character in line])
int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}
vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}
return int_to_vocab, vocab_to_int
# Build int2letter and letter2int dicts
source_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)
target_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)
# Convert characters to ids
source_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<UNK>']) for letter in line] for line in source_sentences.split('\n')]
target_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<UNK>']) for letter in line] + [target_letter_to_int['<EOS>']] for line in target_sentences.split('\n')]
print("Example source sequence")
print(source_letter_ids[:3])
print("\n")
print("Example target sequence")
print(target_letter_ids[:3])
from distutils.version import LooseVersion
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Number of Epochs
epochs = 60
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 50
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 15
decoding_embedding_size = 15
# Learning Rate
learning_rate = 0.001
def get_model_inputs():
input_data = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
lr = tf.placeholder(tf.float32, name='learning_rate')
target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')
max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')
source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')
return input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length
def encoding_layer(input_data, rnn_size, num_layers,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)
# RNN cell
def make_cell(rnn_size):
enc_cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
return enc_cell
enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32)
return enc_output, enc_state
# Process the input we'll feed to the decoder
def process_decoder_input(target_data, vocab_to_int, batch_size):
'''Remove the last word id from each batch and concat the <GO> to the begining of each batch'''
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)
return dec_input
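# Example of the transformation (ids shown symbolically): a target row
# [h, e, l, l, o, <EOS>] becomes the decoder input [<GO>, h, e, l, l, o]; the last token
# is dropped and <GO> is prepended, so during training the decoder receives the previous
# ground-truth character at every time step (teacher forcing).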
def decoding_layer(target_letter_to_int, decoding_embedding_size, num_layers, rnn_size,
target_sequence_length, max_target_sequence_length, enc_state, dec_input):
# 1. Decoder Embedding
target_vocab_size = len(target_letter_to_int)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# 2. Construct the decoder cell
def make_cell(rnn_size):
dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
return dec_cell
dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
# 3. Dense layer to translate the decoder's output at each time
# step into a choice from the target vocabulary
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
# 4. Set up a training decoder and an inference decoder
# Training Decoder
with tf.variable_scope("decode"):
# Helper for the training process. Used by BasicDecoder to read inputs.
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
# Basic decoder
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
training_helper,
enc_state,
output_layer)
# Perform dynamic decoding using the decoder
training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)
# 5. Inference Decoder
# Reuses the same parameters trained by the training process
with tf.variable_scope("decode", reuse=True):
start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')
# Helper for the inference process.
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,
start_tokens,
target_letter_to_int['<EOS>'])
# Basic decoder
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
inference_helper,
enc_state,
output_layer)
# Perform dynamic decoding using the decoder
inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)
return training_decoder_output, inference_decoder_output
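# The two decoders share weights (the second variable_scope reuses the first) but differ
# in how they are fed: TrainingHelper supplies the ground-truth embeddings from
# dec_embed_input, while GreedyEmbeddingHelper feeds back the embedding of the decoder's
# own argmax prediction at each step, starting from <GO> and stopping at <EOS> or at
# max_target_sequence_length.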
def seq2seq_model(input_data, targets, lr, target_sequence_length,
max_target_sequence_length, source_sequence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers):
# Pass the input data through the encoder. We'll ignore the encoder output, but use the state
_, enc_state = encoding_layer(input_data,
rnn_size,
num_layers,
source_sequence_length,
source_vocab_size,
encoding_embedding_size)
# Prepare the target sequences we'll feed to the decoder in training mode
dec_input = process_decoder_input(targets, target_letter_to_int, batch_size)
# Pass encoder state and decoder inputs to the decoders
training_decoder_output, inference_decoder_output = decoding_layer(target_letter_to_int,
decoding_embedding_size,
num_layers,
rnn_size,
target_sequence_length,
max_target_sequence_length,
enc_state,
dec_input)
return training_decoder_output, inference_decoder_output
# Build the graph
train_graph = tf.Graph()
# Set the graph to default to ensure that it is ready for training
with train_graph.as_default():
# Load the model inputs
input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length = get_model_inputs()
# Create the training and inference logits
training_decoder_output, inference_decoder_output = seq2seq_model(input_data,
targets,
lr,
target_sequence_length,
max_target_sequence_length,
source_sequence_length,
len(source_letter_to_int),
len(target_letter_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers)
# Create tensors for the training logits and inference logits
training_logits = tf.identity(training_decoder_output.rnn_output, 'logits')
inference_logits = tf.identity(inference_decoder_output.sample_id, name='predictions')
# Create the weights for sequence_loss
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(targets, sources, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_targets_batch, pad_sources_batch, pad_targets_lengths, pad_source_lengths
# Split data to training and validation sets
train_source = source_letter_ids[batch_size:]
train_target = target_letter_ids[batch_size:]
valid_source = source_letter_ids[:batch_size]
valid_target = target_letter_ids[:batch_size]
(valid_targets_batch, valid_sources_batch, valid_targets_lengths, valid_sources_lengths) = next(get_batches(valid_target, valid_source, batch_size,
source_letter_to_int['<PAD>'],
target_letter_to_int['<PAD>']))
display_step = 20 # Check training loss after every 20 batches
checkpoint = "best_model.ckpt"
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(1, epochs+1):
for batch_i, (targets_batch, sources_batch, targets_lengths, sources_lengths) in enumerate(
get_batches(train_target, train_source, batch_size,
source_letter_to_int['<PAD>'],
target_letter_to_int['<PAD>'])):
# Training step
_, loss = sess.run(
[train_op, cost],
{input_data: sources_batch,
targets: targets_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths})
# Debug message updating us on the status of the training
if batch_i % display_step == 0 and batch_i > 0:
# Calculate validation cost
validation_loss = sess.run(
[cost],
{input_data: valid_sources_batch,
targets: valid_targets_batch,
lr: learning_rate,
target_sequence_length: valid_targets_lengths,
source_sequence_length: valid_sources_lengths})
print('Epoch {:>3}/{} Batch {:>4}/{} - Loss: {:>6.3f} - Validation loss: {:>6.3f}'
.format(epoch_i,
epochs,
batch_i,
len(train_source) // batch_size,
loss,
validation_loss[0]))
# Save Model
saver = tf.train.Saver()
saver.save(sess, checkpoint)
print('Model Trained and Saved')
def source_to_seq(text):
'''Prepare the text for the model'''
sequence_length = 7
return [source_letter_to_int.get(word, source_letter_to_int['<UNK>']) for word in text]+ [source_letter_to_int['<PAD>']]*(sequence_length-len(text))
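# Illustrative sketch only (the actual ids depend on source_letter_to_int built earlier):
# source_to_seq('hi') -> [id('h'), id('i'), PAD, PAD, PAD, PAD, PAD], always length 7.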
input_sentence = 'hello'
text = source_to_seq(input_sentence)
checkpoint = "./best_model.ckpt"
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(checkpoint + '.meta')
loader.restore(sess, checkpoint)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
#Multiply by batch_size to match the model's input parameters
answer_logits = sess.run(logits, {input_data: [text]*batch_size,
target_sequence_length: [len(text)]*batch_size,
source_sequence_length: [len(text)]*batch_size})[0]
pad = source_letter_to_int["<PAD>"]
print('Original Text:', input_sentence)
print('\nSource')
print(' Word Ids: {}'.format([i for i in text]))
print(' Input Words: {}'.format(" ".join([source_int_to_letter[i] for i in text])))
print('\nTarget')
print(' Word Ids: {}'.format([i for i in answer_logits if i != pad]))
print(' Response Words: {}'.format(" ".join([target_int_to_letter[i] for i in answer_logits if i != pad])))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.
Step2: target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the matching line in source_sentences and contains that line's characters in sorted order.
Step3: Preprocess
Step4: This is the final shape we need them to be in. We can now proceed to building the model.
Step5: Hyperparameters
Step6: Input
Step7: Sequence to Sequence Model
Step8: 2.2 Decoder
Step9: Set up the decoder components
Step10: 2.3 Seq2seq model
Step11: Model outputs training_decoder_output and inference_decoder_output both contain a 'rnn_output' logits tensor that looks like this
Step14: Get Batches
Step15: Train
Step16: Prediction
|
14,450 | <ASSISTANT_TASK:>
Python Code:
import GPy, safeopt
from SafeRLBench.algo import SafeOptSwarm
from SafeRLBench.envs import Quadrocopter, LinearCar
from SafeRLBench.policy import NonLinearQuadrocopterController, LinearPolicy
from SafeRLBench.measure import BestPerformance, SafetyMeasure
from SafeRLBench import Bench
# set up logging
from SafeRLBench import config
config.logger_set_level(config.INFO)
config.logger_add_stream_handler()
config.monitor_set_verbosity(2)
noise_var = 0.05 ** 2
bounds = [(-1., 0.), (-1., 0.), (0., 1.)]
algos = [(SafeOptSwarm, [{
'policy': LinearPolicy(2, 1, par=[-1, 0, 1]),
'kernel': GPy.kern.RBF(input_dim=len(bounds), variance=std**2, lengthscale=.4, ARD=True),
'likelihood': GPy.likelihoods.gaussian.Gaussian(variance=noise_var),
'max_it': 20,
'avg_reward': -20,
'window': 3,
'fmin': -100,
'bounds': bounds,
'info': std
} for std in [30, 35, 40, 45, 50]])]
envs = [(LinearCar, {})]
bench = Bench.make_bench(algos, envs, [BestPerformance(), SafetyMeasure(-100)])
bench()
print([(t[0].alg_conf['info'], t[1], t[2]) for t in bench.measures[1].result])
noise_var = 0.05 ** 2
# Set fixed Gaussian measurement noise
likelihood = GPy.likelihoods.gaussian.Gaussian(variance=noise_var)
# Bounds on the inputs variable
bounds = [(0., 1.), (0., 1.), (0., 1.), (0., 1.), (0., 1.)]
# Define Kernel
kernel = GPy.kern.RBF(input_dim=len(bounds), variance=1000.*2, lengthscale=1.0, ARD=True)
noise_var = 0.05 ** 2
fmin = -2400
# Bounds on the inputs variable
# bounds = [(1e-2, .9), (1e-2, .9), (1e-1, .9), (.2, .7), (1e-2, .9)]
bounds = [(1e-2, 1.), (1e-2, 1.), (1e-2, 1.), (1e-2, 1.), (1e-2, 1.)]
algos = [(SafeOptSwarm, [{
'policy': NonLinearQuadrocopterController(),
'kernel': GPy.kern.RBF(input_dim=len(bounds), variance=std**2, lengthscale=0.2, ARD=True),
'likelihood': GPy.likelihoods.gaussian.Gaussian(variance=noise_var),
'max_it': 20,
'avg_reward': -1500,
'window': 3,
'fmin': fmin,
'bounds': bounds,
'swarm_size': 1000,
'info': std
} for std in [1000, 1250, 1500, 1750, 2000]])]
envs = [(Quadrocopter, {})]
bench = Bench.make_bench(algos, envs, [BestPerformance(), SafetyMeasure(fmin)])
bench()
print([(t[0].alg_conf['info'], t[1], t[2]) for t in bench.measures[1].result])
print([(t[0].alg_conf['info'], int(t[1])) for t in bench.measures[0].result])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear Car
Step2: Below we output the results of the safety measure. List comprehension is used to get a more readable format for the
Step3: Quadrocopter
Step4: Below we output the results of the safety measure and performance. List comprehension is used to get a more readable format for the tuples.
|
14,451 | <ASSISTANT_TASK:>
Python Code:
import numpy
import numpy as np
import sys
import math
import matplotlib.pyplot as plt
def periodic(i,limit,add):
Choose correct matrix index with periodic boundary conditions
Input:
- i: Base index
- limit: Highest \"legal\" index
- add: Number to add or subtract from i
return (i + limit + add) % limit
size = 256 # L_x
temp = 10. # temperature T
spin_matrix = np.zeros( (size,size), np.int8) + 1
spin_matrix
E = M = 0
E_av = E2_av = M_av = M2_av = Mabs_av = 0
w = np.zeros(17, np.float64)
for de in xrange(-8,9,4):
print de
w[de+8] = math.exp(-de/temp)
print w
M = spin_matrix.sum()
print M
# for i in xrange(16): print i
# range creates a list, so if you do range(1, 10000000) it creates a list in memory with 9999999 elements.
# xrange is a sequence object that evaluates lazily.
for j in xrange(size):
for i in xrange(size):
E -= spin_matrix.item(i,j) * (spin_matrix.item(periodic(i,size,-1),j) + spin_matrix.item(i,periodic(j,size,1)))
x = int(np.random.random()*size)
print(x)
y = int(np.random.random()*size)
print(y)
deltaE = 2*spin_matrix.item(i,j) * \
(spin_matrix.item(periodic(x,size,-1),y) + spin_matrix.item(periodic(x,size,1),y) + \
spin_matrix.item(x,periodic(y,size,-1))+spin_matrix.item(x,periodic(y,size,1)))
print(deltaE)
print( w[deltaE + 8] )
np.random.random()
print( np.random.random() <= w[deltaE+8])
print( spin_matrix[x,y] )
print( spin_matrix.item(x,y) )
spin_matrix[x,y] *= -1
M += 2*spin_matrix[x,y]
E += deltaE
print(spin_matrix.item(x,y))
print(M)
print(E)
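# Summary of the Metropolis rule walked through above (a sketch, not new logic):
# the trial flip at (x, y) is accepted when np.random.random() <= w[deltaE + 8],
# i.e. always for deltaE <= 0 and with probability exp(-deltaE/temp) otherwise;
# on acceptance the spin, the magnetisation M and the energy E are updated as shown.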
import pygame
Lx=256; Ly=256
spin_matrix = np.zeros((Lx,Ly),np.int8)
print(spin_matrix.shape)
spin_matrix.fill(1)
spin_matrix
def initialize_allup( spin_matrix, J=1.0 ):
Lx,Ly = spin_matrix.shape
spin_matrix.fill(1)
M = spin_matrix.sum()
# Calculate initial energy
E=0
for j in xrange(Ly):
for i in xrange(Lx):
E += (-J)*spin_matrix.item(i,j) * \
(spin_matrix.item(periodic(i,Lx,+1),j) + spin_matrix.item(i,periodic(j,Ly,1)) )
print "M: ",M," E: ", E
return E,M
E,M = initialize_allup( spin_matrix)
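# Sanity check for the all-up ground state (J = 1): each site contributes -2J
# (right and down neighbours counted once), so E = -2*Lx*Ly = -131072 and
# M = Lx*Ly = 65536 for the 256x256 lattice above.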
def initialize_allup1( spin_matrix, J=1.0 ):
Lx,Ly = spin_matrix.shape
spin_matrix.fill(1)
M = spin_matrix.sum()
# Calculate initial energy
E=0
for j in xrange(Ly):
for i in xrange(Lx):
E -= J*spin_matrix.item(i,j) * \
(spin_matrix.item(periodic(i,Lx,-1),j) + spin_matrix.item(i,periodic(j,Ly,1)) )
print "M: ",M," E: ", E
return E,M
E,M = initialize_allup( spin_matrix)
Lx=512; Ly=512
spin_matrix = np.zeros((Lx,Ly),np.int8)
E,M = initialize_allup1( spin_matrix)
E,M = initialize_allup( spin_matrix)
Lx=1024; Ly=1024
print(Lx*Ly)
spin_matrix = np.zeros((Lx,Ly),np.int8)
E,M = initialize_allup1( spin_matrix)
E,M = initialize_allup( spin_matrix)
math.pow(2,31)
temp = 1.0
w = np.zeros(17,np.float32)
for de in xrange(-8,9,4): # include +8
w[de+8] = math.exp(-de/temp)
print(w)
import os
print(os.getcwd())
print(os.listdir( os.getcwd() ))
sys.path.append('./')
avgsresults_GPU = np.fromfile("./IsingGPU/data/IsingMetroGPU.bin",dtype=np.float32)
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
avgsresults_GPU = avgsresults_GPU.reshape(201,7) # 7 different averages
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
avgsresults_GPU
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
T = avgsresults_GPU[:,0]
E_avg = avgsresults_GPU[:,1]
ax.scatter( T, E_avg)
plt.show()
Evar_avg = avgsresults_GPU[:,2]
plt.scatter( T, Evar_avg)
plt.show()
M_avg = avgsresults_GPU[:,3]
Mvar_avg = avgsresults_GPU[:,4]
absM_avg = avgsresults_GPU[:,5]
M4_avg = avgsresults_GPU[:,6]
#fig = plt.figure()
#ax = fig.add_subplot(4,1,1)
plt.scatter( T, M_avg)
#fig.add_subplot(4,1,2)
#plt.scatter(T,Mvar_avg)
#fig.add_subplot(4,1,3)
#plt.scatter(T,absM_avg)
#fig.add_subplot(4,1,4)
#plt.scatter(T,M4_avg)
plt.show()
plt.scatter(T,Mvar_avg)
plt.show()
plt.scatter(T,absM_avg)
plt.show()
plt.scatter(T,M4_avg)
plt.show()
avgsresults_GPU = np.fromfile("./IsingGPU/data/IsingMetroGPU.bin",dtype=np.float32)
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
avgsresults_GPU = avgsresults_GPU.reshape( avgsresults_GPU.size/7 ,7) # 7 different averages
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
T = avgsresults_GPU[:,0]
E_avg = avgsresults_GPU[:,1]
Evar_avg = avgsresults_GPU[:,2]
M_avg = avgsresults_GPU[:,3]
Mvar_avg = avgsresults_GPU[:,4]
absM_avg = avgsresults_GPU[:,5]
M4_avg = avgsresults_GPU[:,6]
ax.scatter( T, E_avg)
plt.show()
plt.scatter( T, Evar_avg)
plt.show()
plt.scatter( T, M_avg)
plt.show()
plt.scatter(T,Mvar_avg)
plt.show()
plt.scatter(T,absM_avg)
plt.show()
plt.scatter(T,M4_avg)
plt.show()
avgsresults_GPU = np.fromfile("./IsingGPU/drafts/data/IsingMetroGPU.bin",dtype=np.float32)
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
avgsresults_GPU = avgsresults_GPU.reshape( avgsresults_GPU.size/7 ,7) # 7 different averages
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
T = avgsresults_GPU[:,0]
E_avg = avgsresults_GPU[:,1]
Evar_avg = avgsresults_GPU[:,2]
M_avg = avgsresults_GPU[:,3]
Mvar_avg = avgsresults_GPU[:,4]
absM_avg = avgsresults_GPU[:,5]
M4_avg = avgsresults_GPU[:,6]
ax.scatter( T, E_avg)
plt.show()
avgsresults_GPU = np.fromfile("./IsingGPU/drafts/IsingGPU/data/IsingMetroGPU_runs10.bin",dtype=np.float32)
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
avgsresults_GPU = avgsresults_GPU.reshape( avgsresults_GPU.size/7 ,7) # 7 different averages
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
T = avgsresults_GPU[:,0]
E_avg = avgsresults_GPU[:,1]
Evar_avg = avgsresults_GPU[:,2]
M_avg = avgsresults_GPU[:,3]
Mvar_avg = avgsresults_GPU[:,4]
absM_avg = avgsresults_GPU[:,5]
M4_avg = avgsresults_GPU[:,6]
ax.scatter( T, E_avg)
plt.show()
plt.scatter( T, Evar_avg)
plt.show()
plt.scatter( T, M_avg)
plt.show()
plt.scatter(T,Mvar_avg)
plt.show()
plt.scatter(T,absM_avg)
plt.show()
plt.scatter(T,M4_avg)
plt.show()
avgsresults_GPU = np.fromfile("./IsingGPU/drafts/IsingGPU/data/IsingMetroGPU.bin",dtype=np.float32)
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
avgsresults_GPU = avgsresults_GPU.reshape( avgsresults_GPU.size/7 ,7) # 7 different averages
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
T = avgsresults_GPU[:,0]
E_avg = avgsresults_GPU[:,1]
Evar_avg = avgsresults_GPU[:,2]
M_avg = avgsresults_GPU[:,3]
Mvar_avg = avgsresults_GPU[:,4]
absM_avg = avgsresults_GPU[:,5]
M4_avg = avgsresults_GPU[:,6]
ax.scatter( T, E_avg)
plt.show()
plt.scatter( T, Evar_avg)
plt.show()
plt.scatter( T, M_avg)
plt.show()
plt.scatter(T,Mvar_avg)
plt.show()
plt.scatter(T,absM_avg)
plt.show()
plt.scatter(T,M4_avg)
plt.show()
avgsresults_GPU = []
for temp in range(10,31,2):
avgsresults_GPU.append( np.fromfile("./data/ising2d_CLaigit" + str(temp) + ".bin",dtype=np.float64) )
avgsresults_GPU = np.array( avgsresults_GPU)
print( avgsresults_GPU.shape, avgsresults_GPU.size)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
T = avgsresults_GPU[:,0]
E_avg = avgsresults_GPU[:,1]
M_avg = avgsresults_GPU[:,2]
heat_cap_avg = avgsresults_GPU[:,3]
mag_sus_avg = avgsresults_GPU[:,4]
ax.scatter( T, E_avg)
plt.show()
plt.scatter( T, M_avg)
plt.show()
plt.scatter( T, heat_cap_avg)
plt.show()
plt.scatter( T, mag_sus_avg)
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Periodic boundary conditions
Step3: Set up spin matrix, initialize to ground state
Step4: Create and initialize variables
Step5: Setup array for possible energy changes
Step6: Calculate initial magnetization
Step7: Calculate initial energy
Step8: Metropolis Monte Carlo computation, a single step or iteration, done explicitly
Step9: Accept (if True)!
Step10: Initialize (all spins up), explicitly shown
Step11: Setup array for possible energy changes
Step12: Importing from the script ising2dim.py
Step13: Reading out data from ./IsingGPU/FileIO/output.h
Step14: For
Step15: From drafts
Step16: From CLaigit
|
14,452 | <ASSISTANT_TASK:>
Python Code:
class Character(object):
def __init__(self):
self.life = 1000
def attacked(self):
self.life -= 10
print(u"공격받음! 생명력 =", self.life)
a = Character()
b = Character()
c = Character()
a.life, b.life, c.life
a.attacked()
b.attacked()
a.attacked()
a.attacked()
a.attacked()
a.attacked()
a.attacked()
a.life, b.life, c.life
class Warrior(Character):
def __init__(self):
super(Warrior, self).__init__()
self.strength = 15
self.intelligence = 5
class Wizard(Character):
def __init__(self):
super(Wizard, self).__init__()
self.strength = 5
self.intelligence = 15
a = Warrior()
b = Wizard()
a.life, b.life
a.strength, b.strength
a.intelligence, b.intelligence
a.attacked()
b.attacked()
class Character(object):
def __init__(self):
self.life = 1000
self.strength = 10
self.intelligence = 10
def attacked(self):
self.life -= 10
print(u"공격받음! 생명력 =", self.life)
def attack(self):
print(u"공격!")
class Warrior(Character):
def __init__(self):
super(Warrior, self).__init__()
self.strength = 15
self.intelligence = 5
def attack(self):
print(u"육탄 공격!")
class Wizard(Character):
def __init__(self):
super(Wizard, self).__init__()
self.strength = 5
self.intelligence = 15
def attack(self):
print(u"마법 공격!")
a = Character()
b = Warrior()
c = Wizard()
a.attack()
b.attack()
c.attack()
a.attacked()
b.attacked()
def Dummy():
pass
d = Dummy()
class Complex(object):
def __init__(self, realpart, imagpart):
self.r = realpart
self.i = imagpart
c = Complex(1, 2)
c
str(c)
class Complex2(Complex):
def __repr__(self):
return "Complex: real = %f imag = %f" % (self.r, self.i)
def __str__(self):
return "[for str] " + self.__repr__()
c2 = Complex2(1, 1)
c2
str(c2)
class Complex3(Complex2):
def __getitem__(self, key):
if key == "r":
return self.r
if key == "i":
return self.i
c3 = Complex3(1, 2)
c3
c3["i"]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create three character objects a, b and c from this class.
Step2: Every object starts with a life attribute value of 1000.
Step3: However, the life of a character that has been attacked decreases.
Step4: Class inheritance
Step5: If we create an object of this class, it has the life attribute and the attacked method even though we did not define them explicitly.
Step6: Method overriding
Step7: References
Step8: Special methods
Step9: If __repr__ is not defined, the default __repr__ method of the object class is used. It returns the class name and the memory address of the instance inside angle brackets; the default __str__ method behaves the same way (a small illustration follows this list).
Step10: This time we override the __repr__ and __str__ methods by redefining them as follows.
Step11: Defining the __getitem__ method makes indexing with the [] operator possible, just like a list or a dictionary.
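A minimal sketch of the default behaviour mentioned in Step9 (the memory address shown is illustrative only):
c = Complex(1, 2)
repr(c) # e.g. '<__main__.Complex object at 0x7f3a2c1d06a0>' from the inherited object.__repr__
str(c) # same text, because the default __str__ falls back to __repr__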
|
14,453 | <ASSISTANT_TASK:>
Python Code:
poetry_output = !htid2rsync --f data/poetry.txt | rsync -azv --files-from=- data.sharc.hathitrust.org::features/ data/poetry/
scifi_output = !htid2rsync --f data/scifi.txt | rsync -azv --files-from=- data.sharc.hathitrust.org::features/ data/scifi/
outputs = list([poetry_output, scifi_output])
subjects = ['poetry', 'scifi']
paths = {}
suffix = '.json.bz2'
for subject, output in zip(subjects, outputs):
folder = subject
filePaths = [path for path in output if path.endswith(suffix)]
paths[subject] = [os.path.join(folder, path) for path in filePaths]
fn = 'data/' + subject + '_paths.txt'
with open(fn, 'w') as f:
for path in paths[subject]:
p = str(path) + '\n'
f.write(p)
paths = {}
subjects = ['poetry', 'scifi']
for subject in subjects:
with open('data/' + subject + '_paths.txt', 'r') as f:
paths[subject] = ['data/' + line[:len(line)-1] for line in f.readlines()]
poetry = FeatureReader(paths['poetry'])
scifi = FeatureReader(paths['scifi'])
def createWordDict(HTRC_FeatureReader_List):
wordDict = {}
i = 0
volumes = []
for f in HTRC_FeatureReader_List:
for vol in f.volumes():
volumes.append(vol)
tok_list = vol.tokenlist(pages=False)
tokens = tok_list.index.get_level_values('token')
for token in tokens:
if token not in wordDict.keys():
wordDict[token] = i
i += 1
return wordDict, volumes
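# Shape of what createWordDict returns (token strings here are purely illustrative):
# wordDict maps each token to its column index in the document-term matrix,
# e.g. {'the': 0, 'moon': 1, 'rocket': 2, ...}, and volumes keeps the parsed
# HTRC volumes in the order they were read.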
wordDict, volumes = createWordDict([scifi, poetry])
dtm = np.zeros((200, len(wordDict.keys())))
for i, vol in enumerate(volumes):
tok_list = vol.tokenlist(pages=False)
counts = list(tok_list['count'])
tokens = tok_list.index.get_level_values('token')
for token, count in zip(tokens, counts):
try:
index = wordDict[token]
dtm[i, index] = count
except:
pass
X = dtm
y = np.zeros((200))
y[100:200] = 1
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.svm import LinearSVC
from sklearn import cross_validation
tfidf = TfidfTransformer()
out = tfidf.fit_transform(X, y)
model = LinearSVC()
score = cross_validation.cross_val_score(model, X, y, cv=10)
print(np.mean(score))
model.fit(X, y)
feats = np.argsort(model.coef_[0])[:50]
top_scifi = [(list(feats).index(wordDict[w]) + 1, w) for w in wordDict.keys() if wordDict[w] in feats]
sorted(top_scifi)
feats = np.argsort(model.coef_[0])[-50:]
top_poetry = [(list(feats).index(wordDict[w]) + 1, w) for w in wordDict.keys() if wordDict[w] in feats]
sorted(top_poetry, key=lambda tup: tup[0])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As in the previous notebooks, we'll construct FeatureReader objects for each corpus. The line below reads in path files we created to the downloaded data
Step2: To create our bag of words matrix, we need to keep a global dictionary of all words seen in each of our texts. We initialize "wordDict", which tracks all the words seen and records its index in the bag of words matrix. We also keep a list of volumes so that we can parse them later.
Step3: Once we construct the global dictionary, we can fill the bag of words matrix with the word counts for each volume. Once we have this, we will use it to format the training data for our model.
Step4: We can then use the TfidfTransformer to format the bag of words matrix, so that we can fit it to our LinearSVC model. Let's see how our model does.
Step5: We can also get the most helpful features, or words, for each class. First we'll fit the model
|
14,454 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import random as rnd
import seaborn as sns
import matplotlib.pyplot as plt
train_df = pd.read_csv('train.csv')
test_df = pd.read_csv('test.csv')
print(train_df.columns.values)
train_df.isnull().sum()
print (train_df.info())
train_df.describe()
train_df, test_df = train_df.drop(['Cabin', 'Ticket'], axis=1), test_df.drop(['Cabin', 'Ticket'], axis=1)
# Get the description of the columns that are strings (object)
train_df.describe(include=['O'])
plt.title('Survival count between sex', size=20, y=1.1)
sns.countplot(x = 'Survived', hue='Sex', data=train_df)
#There is a strong correlation between sex and survival
# Convert the sex from a string to an int: 1 for male and 0 for female
for df in [train_df, test_df]:
df['Sex'] = df['Sex'].apply(lambda x : 1 if x == 'male' else 0)
# There is a direct relationship between class and survival
plt.figure(figsize=(12, 12))
plt.subplot(2,2,1)
plt.title('Survival rate / Pclass', size=15, y=1.1)
sns.barplot(x='Pclass', y = 'Survived', data=train_df, palette='muted')
sns.countplot(x = 'Survived', hue='Embarked', data=train_df)
# There is also a slight correlation with the port of embarkation
train_df['Embarked'] = train_df['Embarked'].fillna('S')
for dt in [train_df, test_df]:
dt['Embarked'] = dt['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
#Fill in the single missing Fare value
test_df['Fare'] = test_df['Fare'].fillna(test_df['Fare'].median())
# Convert the continuous Fare values into discrete ones, grouping the range into 4 bins, from 0 to 3
for df in [train_df, test_df]:
df['Fare'] = pd.qcut(df['Fare'], 4, labels=[0, 1, 2, 3])
train_df.head(5)
for df in [train_df, test_df]:
df['FamilySize'] = df['Parch'] + df['SibSp'] + 1
sns.barplot(x='FamilySize', y='Survived' , data=train_df)
def filter_family_size(x):
if x == 1:
return 0
elif x < 5:
return 1
else:
return 0
for df in [train_df, test_df]:
df['FamilySize'] = df['FamilySize'].apply(filter_family_size)
train_df = train_df.drop(['Parch', 'SibSp'], axis=1)
test_df = test_df.drop(['Parch', 'SibSp'], axis=1)
corrmat = train_df.corr()
sns.heatmap(corrmat, square=True)
print ("El numero de datos Age sin rellenar: ",train_df['Age'].isnull().sum())
plt.title('Original age distribution', size=20, y=1.1)
sns.distplot(train_df['Age'].dropna())
#Fill in the empty age fields
guess_ages = np.zeros((2,3))
for dataset in [train_df, test_df]:
for i in range(0, 2):
for j in range(0, 3):
guess_df = dataset[(dataset['Sex'] == i) & \
(dataset['Pclass'] == j+1)]['Age'].dropna()
# age_mean = guess_df.mean()
# age_std = guess_df.std()
# age_guess = rnd.uniform(age_mean - age_std, age_mean + age_std)
age_guess = guess_df.median()
# Convert random age float to nearest .5 age
guess_ages[i,j] = int( age_guess/0.5 + 0.5 ) * 0.5
for i in range(0, 2):
for j in range(0, 3):
dataset.loc[ (dataset.Age.isnull()) & (dataset.Sex == i) & (dataset.Pclass == j+1),\
'Age'] = guess_ages[i,j]
dataset['Age'] = dataset['Age'].astype(int)
print ("El numero de datos Age sin rellenar: ",train_df['Age'].isnull().sum())
plt.title('Age distribution after filling', size=20, y=1.1)
sns.distplot(train_df['Age'])
#Create the new feature and display it
train_df['AgeBand'] = pd.cut(train_df['Age'], 8)
train_df[['AgeBand', 'Survived']].groupby(['AgeBand'], as_index=False).mean().sort_values(by='AgeBand', ascending=True)
sns.countplot(x='Survived', hue='AgeBand' , data=train_df)
for dataset in [train_df, test_df]:
dataset.loc[ dataset['Age'] <= 10, 'Age'] = 0
dataset.loc[(dataset['Age'] > 10) & (dataset['Age'] <= 20), 'Age'] = 1
dataset.loc[(dataset['Age'] > 20) & (dataset['Age'] <= 30), 'Age'] = 2
dataset.loc[(dataset['Age'] > 30) & (dataset['Age'] <= 40), 'Age'] = 3
dataset.loc[(dataset['Age'] > 40) & (dataset['Age'] <= 50), 'Age'] = 4
dataset.loc[(dataset['Age'] > 50) & (dataset['Age'] <= 60), 'Age'] = 5
dataset.loc[(dataset['Age'] > 60) & (dataset['Age'] <= 70), 'Age'] = 6
dataset.loc[ dataset['Age'] > 70, 'Age'] = 7
train_df.head()
train_df = train_df.drop(['AgeBand'], axis=1)
# Filter the name
def get_title(x):
y = x[x.find(',')+1:].replace('.', '').replace(',', '').strip().split(' ')
if y[0] == 'the': # Search for the countess
title = y[1]
else:
title = y[0]
return title
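# Hedged examples of what get_title extracts (these names are invented for illustration):
# get_title('Braund, Mr. Owen Harris') -> 'Mr'
# get_title('Rothes, the Countess. of Grantham') -> 'Countess' (handled by the 'the' branch above)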
def filter_title(title, sex):
if title in ['Countess', 'Dona', 'Lady', 'Jonkheer', 'Mme', 'Mlle', 'Ms', 'Capt', 'Col', 'Don', 'Sir', 'Major', 'Rev', 'Dr']:
if sex:
return 'Rare_male'
else:
return 'Rare_female'
else:
return title
for df in [train_df, test_df]:
df['NameLength'] = df['Name'].apply(lambda x : len(x))
df['Title'] = df['Name'].apply(get_title)
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
for dataset in [train_df, test_df]:
dataset['Title'] = dataset['Title'].map(title_mapping)
dataset['Title'] = dataset['Title'].fillna(0)
for df in [train_df, test_df]:
df['Title'] = df.apply(lambda x: filter_title(x['Title'], x['Sex']), axis=1)
sns.countplot(y=train_df['Title'])
train_df.groupby('Title')['PassengerId'].count().sort_values(ascending=False)
# Drop the Name column
train_df = train_df.drop(['Name', 'PassengerId'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
train_df.head()
X_train = train_df.drop(["Survived"], axis=1).copy()
Y_train = train_df["Survived"]
X_test = test_df.drop("PassengerId", axis=1).copy()
X_train.shape, Y_train.shape, X_test.shape
from sklearn.ensemble import RandomForestClassifier
random_forest = RandomForestClassifier(n_estimators=101)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
#acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
#acc_random_forest
from sklearn import tree
clf = tree.DecisionTreeClassifier()
clf.fit(X_train, Y_train)
Y_pred = clf.predict(X_test)
clf.score(X_train, Y_train)
from sklearn.svm import SVC
svc = SVC(C=10000.0)
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
svc.score(X_train, Y_train)
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
knn.score(X_train, Y_train)
submission = pd.DataFrame({
"PassengerId": test_df["PassengerId"],
"Survived": Y_pred
})
submission.to_csv('submission.csv', index=False)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the data for training and testing
Step2: Look at the data to see whether there are nulls or fields to fill in, such as age and cabin in this case
Step3: Many age and cabin values still need to be filled in, as well as 2 embarkation values
Step4: Since more than half of the cabin values are missing and they contain no useful information, this feature can be discarded.
Step5: First-glance analysis of the data
Step6: Since 2 embarkation values from 2 passengers are missing and we will use the feature, we fill them with S because it is the most common value
Step7: Since Parch is the number of parents/children aboard and SibSp the number of siblings/spouses aboard, these 2 features can be combined into a single one representing the size of that person's family, including the person themselves.
Step8: From this plot we can see that people with a family size of 2, 3 or 4 had a better chance of survival
Step9: Filling in the age
Step10: Having inserted the new values around the mean, the distribution stays the same as before inserting them, but with a spike of data around the median
Step11: We convert the age field into values from 0 to 7 following the age-band feature created earlier; with this change the age-band feature is no longer needed
Step12: We classify the name according to a person's title
Step13: We strip out the special titles and group them into more specific categories
Step14: Model selection
Step15: Random Forest
Step16: Decision Tree
Step17: Support Vector Machines
Step18: KNN
Step19: We create the submission file to upload to Kaggle
Step20: We save it in csv format
|
14,455 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
import climlab
# Get the water vapor data
#datapath = "http://ramadda.atmos.albany.edu:8080/repository/opendap/latest/Top/Users/BrianRose/CESM_runs/"
datapath = "http://thredds.atmos.albany.edu:8080/thredds/dodsC/cesm/"
#endstr = "/entry.das"
atm_control = xr.open_dataset( datapath + 'som_1850_f19/som_1850_f19.cam.h0.clim.nc', decode_times=False)
Qglobal = ((atm_control.Q * atm_control.gw)/atm_control.gw.mean(dim='lat')).mean(dim=('lat','lon','time'))
# Make a model on same vertical domain as the GCM
state = climlab.column_state(lev=Qglobal.lev, water_depth=2.5)
steps_per_year = 90
deltat = climlab.constants.seconds_per_year/steps_per_year
rad = climlab.radiation.RRTMG(name='Radiation',
state=state,
specific_humidity=Qglobal.values,
timestep = deltat,
albedo = 0.25, # tuned to give reasonable ASR for reference cloud-free model
)
conv = climlab.convection.ConvectiveAdjustment(name='Convection',
state=state,
adj_lapse_rate=6.5,
timestep=rad.timestep,)
rcm_control = rad + conv
rcm_control.name = 'Radiative-Convective Model'
rcm_control.integrate_years(5)
rcm_control.ASR - rcm_control.OLR
slab_control = []
slab_control.append(rcm_control)
slab_control.append(climlab.process_like(rcm_control))
slab_2x = []
for n in range(len(slab_control)):
rcm_2xCO2 = climlab.process_like(rcm_control)
rcm_2xCO2.subprocess['Radiation'].absorber_vmr['CO2'] *= 2.
if n == 0:
rcm_2xCO2.name = 'High-sensitivity RCM'
elif n == 1:
rcm_2xCO2.name = 'Low-sensitivity RCM'
slab_2x.append(rcm_2xCO2)
# actual specific humidity
q = rcm_control.subprocess['Radiation'].specific_humidity
# saturation specific humidity (a function of temperature and pressure)
qsat = climlab.utils.thermo.qsat(rcm_control.Tatm, rcm_control.lev)
# Relative humidity
rh = q/qsat
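# Sketch of the fixed-relative-humidity (water vapour feedback) idea used in the
# integration loop below: rh computed here is held constant, and at every timestep
# the radiation model's specific humidity is reset to rh * qsat(T, p), so the
# absolute amount of water vapour grows as the column warms.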
lapse_change_factor = [+0.3, -0.3]
for n in range(len(slab_2x)):
rcm_2xCO2 = slab_2x[n]
print('Integrating ' + rcm_2xCO2.name)
for m in range(5 * steps_per_year):
# At every timestep
# we calculate the new saturation specific humidity for the new temperature
# and change the water vapor in the radiation model
# so that relative humidity is always the same
qsat = climlab.utils.thermo.qsat(rcm_2xCO2.Tatm, rcm_2xCO2.lev)
rcm_2xCO2.subprocess['Radiation'].specific_humidity[:] = rh * qsat
# We also adjust the critical lapse rate in our convection model
DeltaTs = rcm_2xCO2.Ts - rcm_control.Ts
rcm_2xCO2.subprocess['Convection'].adj_lapse_rate = 6.5 + lapse_change_factor[n]*DeltaTs
rcm_2xCO2.step_forward()
print('The TOA imbalance is %0.5f W/m2' %(rcm_2xCO2.ASR-rcm_2xCO2.OLR))
print('The ECS is %0.3f K' %(rcm_2xCO2.Ts - rcm_control.Ts))
print('')
slab_control[0].depth_bounds
# Create the domains
ocean_bounds = np.arange(0., 2010., 100.)
depthax = climlab.Axis(axis_type='depth', bounds=ocean_bounds)
ocean = climlab.domain.domain.Ocean(axes=depthax)
atm = slab_control[0].Tatm.domain
# Model 0 has a higher ocean heat diffusion coefficient --
# a more efficent deep ocean heat sink
ocean_diff = [5.E-4, 3.5E-4]
# List of deep ocean models
deep = []
for n in range(len(slab_control)):
rcm_control = slab_control[n]
# Create the state variables
Tinitial_ocean = rcm_control.Ts * np.ones(ocean.shape)
Tocean = climlab.Field(Tinitial_ocean.copy(), domain=ocean)
Tatm = climlab.Field(rcm_control.Tatm.copy(), domain=atm)
# Surface temperature Ts is the upper-most grid box of the ocean
Ts = Tocean[0:1]
atm_state = {'Tatm': Tatm, 'Ts': Ts}
rad = climlab.radiation.RRTMG(name='Radiation',
state=atm_state,
specific_humidity=Qglobal.values,
timestep = deltat,
albedo = 0.25,
)
conv = climlab.convection.ConvectiveAdjustment(name='Convection',
state=atm_state,
adj_lapse_rate=6.5,
timestep=rad.timestep,)
model = rad + conv
if n == 0:
model.name = 'RCM with high sensitivity and efficient heat uptake'
elif n == 1:
model.name = 'RCM with low sensitivity and inefficient heat uptake'
model.set_state('Tocean', Tocean)
diff = climlab.dynamics.Diffusion(state={'Tocean': model.Tocean},
K=ocean_diff[n],
diffusion_axis='depth',
timestep=deltat * 10,)
model.add_subprocess('Ocean Heat Uptake', diff)
print('')
print(model)
print('')
deep.append(model)
num_years = 400
years = np.arange(num_years+1)
Tsarray = []
Tocean = []
netrad = []
for n in range(len(deep)):
thisTs = np.nan * np.zeros(num_years+1)
thisnetrad = np.nan * np.zeros(num_years+1)
thisTocean = np.nan * np.zeros((deep[n].Tocean.size, num_years+1))
thisTs[0] = deep[n].Ts
thisnetrad[0] = deep[n].ASR - deep[n].OLR
thisTocean[:, 0] = deep[n].Tocean
Tsarray.append(thisTs)
Tocean.append(thisTocean)
netrad.append(thisnetrad)
CO2initial = deep[0].subprocess['Radiation'].absorber_vmr['CO2']
CO2array = np.nan * np.zeros(num_years+1)
CO2array[0] = CO2initial * 1E6
# Increase CO2 by 1% / year for 70 years (until doubled), and then hold constant
for y in range(num_years):
if deep[0].subprocess['Radiation'].absorber_vmr['CO2'] < 2 * CO2initial:
for model in deep:
model.subprocess['Radiation'].absorber_vmr['CO2'] *= 1.01
CO2array[y+1] = deep[0].subprocess['Radiation'].absorber_vmr['CO2'] * 1E6
print('Year ', y+1, ', CO2 mixing ratio is ', CO2array[y+1],' ppm.')
for n, model in enumerate(deep):
for m in range(steps_per_year):
qsat = climlab.utils.thermo.qsat(model.Tatm, model.lev)
model.subprocess['Radiation'].specific_humidity[:] = rh * qsat
DeltaTs = model.Ts - slab_control[n].Ts
model.subprocess['Convection'].adj_lapse_rate = 6.5 + lapse_change_factor[n]*DeltaTs
model.step_forward()
Tsarray[n][y+1] = model.Ts
Tocean[n][:, y+1] = model.Tocean
netrad[n][y+1] = model.ASR - model.OLR
colorlist = ['b', 'r']
co2color = 'k'
num_axes = len(deep) + 1
fig, ax = plt.subplots(num_axes, figsize=(12,14))
# Twin the x-axis twice to make independent y-axes.
topaxes = [ax[0], ax[0].twinx(), ax[0].twinx()]
# Make some space on the right side for the extra y-axis.
fig.subplots_adjust(right=0.85)
# Move the last y-axis spine over to the right by 10% of the width of the axes
topaxes[-1].spines['right'].set_position(('axes', 1.1))
# To make the border of the right-most axis visible, we need to turn the frame
# on. This hides the other plots, however, so we need to turn its fill off.
topaxes[-1].set_frame_on(True)
topaxes[-1].patch.set_visible(False)
for n, model in enumerate(slab_2x):
topaxes[0].plot(model.Ts*np.ones_like(Tsarray[n]), '--', color=colorlist[n])
topaxes[0].set_ylabel('Surface temperature (K)')
topaxes[0].set_xlabel('Years')
topaxes[0].set_title('Transient warming scenario: 1%/year CO2 increase to doubling, followed by CO2 stabilization', fontsize=14)
topaxes[0].legend(['Model 0', 'Model 1'], loc='lower right')
topaxes[1].plot(CO2array, color=co2color)
topaxes[1].set_ylabel('CO2 (ppm)', color=co2color)
for tl in topaxes[1].get_yticklabels():
tl.set_color(co2color)
topaxes[1].set_ylim(300., 1000.)
topaxes[2].set_ylabel('TOA imbalance (W/m2)', color='b')
for tl in topaxes[2].get_yticklabels():
tl.set_color('b')
topaxes[2].set_ylim(0, 3)
contour_levels = np.arange(-0.25, 3.25, 0.25)
for n in range(len(deep)):
cax = ax[n+1].contourf(years, deep[n].depth, Tocean[n] - Tsarray[n][0], levels=contour_levels)
ax[n+1].invert_yaxis()
ax[n+1].set_ylabel('Depth (m)')
ax[n+1].set_xlabel('Years')
for n, model in enumerate(deep):
topaxes[0].plot(Tsarray[n], color=colorlist[n])
topaxes[2].plot(netrad[n], ':', color=colorlist[n])
for n in range(len(deep)):
cax = ax[n+1].contourf(years, deep[n].depth, Tocean[n] - Tsarray[n][0], levels=contour_levels)
topaxes[1].plot(CO2array, color=co2color)
fig.subplots_adjust(bottom=0.12)
cbar_ax = fig.add_axes([0.25, 0.02, 0.5, 0.03])
fig.colorbar(cax, cax=cbar_ax, orientation='horizontal');
%load_ext version_information
%version_information numpy, matplotlib, xarray, climlab
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Integrate the control model out to equilibrium.
Step2: Now let's make two copies of this model and keep them in a list
Step3: We are going to double CO2 in both models and label them as high and low sensitivity. We will build in different feedbacks into our two columns.
Step4: We will implement a water vapor feedback as we have done before
Step5: Now here is where our two models will differ
Step6: So Model 0 (in which the lapse rates have gotten larger) is more sensitive than Model 1 (smaller lapse rates). It has a larger system gain, or a more positive overall climate feedback.
Step7: The "ocean" in these models is just a "slab" of water 2.5 meter deep.
Step8: An idealized transient global warming scenario
Step9: Transient vs. equilibrium warming
|
14,456 | <ASSISTANT_TASK:>
Python Code:
import datetime
import os
import time
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import aiplatform, storage
from google.cloud.aiplatform import gapic as aip
from sklearn.preprocessing import StandardScaler
# Check the TensorFlow version installed
tf.__version__
# Enter your project, region, and a bucket name. Then run the cell to make sure the
# Cloud SDK uses the right project for all the commands in this notebook.
PROJECT = 'your-project-name' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'your-regional-bucket' # REPLACE WITH A UNIQUE REGIONAL BUCKET NAME e.g. your PROJECT NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
BUCKET_URI = 'gs://' + BUCKET
#Don't change the following command - this is to check if you have changed the project name above.
assert PROJECT != 'your-project-name', 'Don''t forget to change the project variables!'
# Initialize the Vertex SDK for Python
aiplatform.init(project=PROJECT, location=REGION, staging_bucket=BUCKET)
# Dataset parameters
target_col = 'total_rides' # The variable you are predicting
ts_col = 'service_date' # The name of the column with the date field
# Model parameters
freq = 'D' # Daily frequency
n_input_steps = 30 # Lookback window
n_output_steps = 7 # How many steps to predict forward
n_seasons = 7 # Monthly periodicity
train_split = 0.8 # % Split between train/test data
epochs = 1000 # How many passes through the data (early-stopping will cause training to stop before this)
patience = 5 # Terminate training after the validation loss does not decrease after this many epochs
lstm_units = 64
input_layer_name = 'lstm_input'
# Training parameters
MODEL_NAME = 'cta_ridership'
storage_client = storage.Client()
try:
bucket = storage_client.get_bucket(BUCKET)
print('Bucket exists, let''s not recreate it.')
except:
bucket = storage_client.create_bucket(BUCKET)
print('Created bucket: ' + BUCKET)
processed_file = 'cta_ridership.csv' # Which file to save the results to
if os.path.exists(processed_file):
input_file = processed_file # File created in previous lab
else:
input_file = f'data/{processed_file}'
df = pd.read_csv(input_file, index_col=ts_col, parse_dates=True)
# Plot 30 days of ridership
_ = df[target_col][:30].plot()
# Define some characteristics of the data that will be used later
n_features = len(df.columns)
# Index of target column. Used later when creating dataframes.
target_col_num = df.columns.get_loc(target_col)
# Split data
size = int(len(df) * train_split)
df_train, df_test = df[0:size].copy(deep=True), df[size:len(df)].copy(deep=True)
df_train.head()
_ = df_train.plot()
# Review original values
df_train.head()
# For neural networks to converge quicker, it is helpful to scale the values.
# For example, each feature might be transformed to have a mean of 0 and std. dev. of 1.
#
# You are working with a mix of features, input timesteps, output horizon, etc.
# which don't work out-of-the-box with common scaling utilities.
# So, here are a couple wrappers to handle scaling and inverting the scaling.
feature_scaler = StandardScaler()
target_scaler = StandardScaler()
def scale(df,
fit=True,
target_col=target_col,
feature_scaler=feature_scaler,
target_scaler=target_scaler):
"""
Scale the input features, using a separate scaler for the target.
Parameters:
df (pd.DataFrame): Input dataframe
fit (bool): Whether to fit the scaler to the data (only apply to training data)
target_col (pd.Series): The column that is being predicted
feature_scaler (StandardScaler): Scaler used for features
target_scaler (StandardScaler): Scaler used for target
Returns:
df_scaled (pd.DataFrame): Scaled dataframe
"""
target = df[target_col].values.reshape(-1, 1)
if fit:
target_scaler.fit(target)
target_scaled = target_scaler.transform(target)
# Select all columns other than target to be features
features = df.loc[:, df.columns != target_col].values
if features.shape[1]: # If there are any features
if fit:
feature_scaler.fit(features)
features_scaled = feature_scaler.transform(features)
# Combine target and features into one data frame
df_scaled = pd.DataFrame(features_scaled)
target_col_num = df.columns.get_loc(target_col)
df_scaled.insert(target_col_num, target_col, target_scaled)
df_scaled.columns = df.columns
else: # If only target column (no additional features)
df_scaled = pd.DataFrame(target_scaled, columns=df.columns)
return df_scaled
def inverse_scale(data, target_scaler=target_scaler):
"""
Transform the scaled values of the target back into their original form.
The features are left alone, as we're assuming that the output of the model only includes the target.
Parameters:
data (np.array): Input array
target_scaler (StandardScaler): Scaler used for target
Returns:
data_scaled (np.array): Scaled array
"""
df = pd.DataFrame()
data_scaled = np.empty([data.shape[1], data.shape[0]])
for i in range(data.shape[1]):
data_scaled[i] = target_scaler.inverse_transform([data[:,i]])
return data_scaled.transpose()
df_train_scaled=scale(df_train)
df_test_scaled=scale(df_test, False)
# Review scaled values
df_train_scaled.head()
def reframe(data, n_input_steps = n_input_steps, n_output_steps = n_output_steps, target_col = target_col):
target_col_num = data.columns.get_loc(target_col)
# Iterate through data and create sequences of features and outputs
df = pd.DataFrame(data)
cols=list()
for i in range(n_input_steps, 0, -1):
cols.append(df.shift(i))
for i in range(0, n_output_steps):
cols.append(df.shift(-i))
# Concatenate values and remove any missing values
df = pd.concat(cols, axis=1)
df.dropna(inplace=True)
# Split the data into feature and target variables
n_feature_cols = n_input_steps * n_features
features = df.iloc[:,0:n_feature_cols]
target_cols = [i for i in range(n_feature_cols + target_col_num, n_feature_cols + n_output_steps * n_features, n_features)]
targets = df.iloc[:,target_cols]
return (features, targets)
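# Windowing sketch (shapes follow the parameters defined above): for every valid row,
# `features` holds the previous n_input_steps = 30 timesteps of all columns and
# `targets` holds the next n_output_steps = 7 values of the target column, i.e.
# a classic sliding-window reframing of the series for supervised learning.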
X_train_reframed, y_train_reframed = reframe(df_train_scaled)
X_test_reframed, y_test_reframed = reframe(df_test_scaled)
# Reshape test data to match model inputs and outputs
X_train = X_train_reframed.values.reshape(-1, n_input_steps, n_features)
X_test = X_test_reframed.values.reshape(-1, n_input_steps, n_features)
y_train = y_train_reframed.values.reshape(-1, n_output_steps)
y_test = y_test_reframed.values.reshape(-1, n_output_steps)
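# Resulting shapes fed to the LSTM (derived from the parameters above):
# X_train: (num_train_windows, 30, n_features), y_train: (num_train_windows, 7),
# and likewise for X_test / y_test.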
# Specify directories to be used later
TRAINER_DIR = 'trainer'
EXPORT_DIR = 'tf_export'
# Create trainer directory if it doesn't already exist
!mkdir $TRAINER_DIR
# Copy numpy arrays to npy files
np.save(TRAINER_DIR + '/x_train.npy', X_train)
np.save(TRAINER_DIR + '/x_test.npy', X_test)
np.save(TRAINER_DIR + '/y_train.npy', y_train)
np.save(TRAINER_DIR + '/y_test.npy', y_test)
# Write training code out to a file that will be submitted to the training job
# Note: f-strings are supported in Python 3.6 and above
model_template = f"""import argparse
import numpy as np
import os
import tempfile
from google.cloud import storage
from tensorflow import keras
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, LSTM
from tensorflow.keras.callbacks import EarlyStopping
n_features = {n_features} # Two features: y (previous values) and whether the date is a holiday
n_input_steps = {n_input_steps} # Lookback window
n_output_steps = {n_output_steps} # How many steps to predict forward
epochs = {epochs} # How many passes through the data (early-stopping will cause training to stop before this)
patience = {patience} # Terminate training after the validation loss does not decrease after this many epochs
def download_blob(bucket_name, source_blob_name, destination_file_name):
'''Downloads a blob from the bucket.'''
# bucket_name = "your-bucket-name"
# source_blob_name = "storage-object-name"
# destination_file_name = "local/path/to/file"
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
# Construct a client side representation of a blob.
# Note `Bucket.blob` differs from `Bucket.get_blob` as it doesn't retrieve
# any content from Google Cloud Storage. As we don't need additional data,
# using `Bucket.blob` is preferred here.
blob = bucket.blob(source_blob_name)
blob.download_to_filename(destination_file_name)
print("Blob " + source_blob_name + " downloaded to " + destination_file_name + ".")
def extract_bucket_and_prefix_from_gcs_path(gcs_path: str):
'''Given a complete GCS path, return the bucket name and prefix as a tuple.
Example Usage:
bucket, prefix = extract_bucket_and_prefix_from_gcs_path(
"gs://example-bucket/path/to/folder"
)
# bucket = "example-bucket"
# prefix = "path/to/folder"
Args:
gcs_path (str):
Required. A full path to a Google Cloud Storage folder or resource.
Can optionally include "gs://" prefix or end in a trailing slash "/".
Returns:
Tuple[str, Optional[str]]
A (bucket, prefix) pair from provided GCS path. If a prefix is not
present, a None will be returned in its place.
'''
if gcs_path.startswith("gs://"):
gcs_path = gcs_path[5:]
if gcs_path.endswith("/"):
gcs_path = gcs_path[:-1]
gcs_parts = gcs_path.split("/", 1)
gcs_bucket = gcs_parts[0]
gcs_blob_prefix = None if len(gcs_parts) == 1 else gcs_parts[1]
return (gcs_bucket, gcs_blob_prefix)
def get_args():
parser = argparse.ArgumentParser()
parser.add_argument(
'--data-uri',
default=None,
help='URL where the training files are located')
args = parser.parse_args()
print(args)
return args
def main():
args = get_args()
bucket_name, blob_prefix = extract_bucket_and_prefix_from_gcs_path(args.data_uri)
# Get the training data and convert back to np arrays
local_data_dir = os.path.join(os.getcwd(), tempfile.gettempdir())
files = ['x_train.npy', 'y_train.npy', 'x_test.npy', 'y_test.npy']
for file in files:
download_blob(bucket_name, os.path.join(blob_prefix,file), os.path.join(local_data_dir,file))
X_train = np.load(local_data_dir + '/x_train.npy')
y_train = np.load(local_data_dir + '/y_train.npy')
X_test = np.load(local_data_dir + '/x_test.npy')
y_test = np.load(local_data_dir + '/y_test.npy')
# Build and train the model
model = Sequential([
LSTM({lstm_units}, input_shape=[n_input_steps, n_features], recurrent_activation=None),
Dense(n_output_steps)])
model.compile(optimizer='adam', loss='mae')
early_stopping = EarlyStopping(monitor='val_loss', patience=patience)
_ = model.fit(x=X_train, y=y_train, validation_data=(X_test, y_test), epochs=epochs, callbacks=[early_stopping])
# Export the model
model.save(os.environ["AIP_MODEL_DIR"])
if __name__ == '__main__':
main()
"""
with open(os.path.join(TRAINER_DIR, 'task.py'), 'w') as f:
f.write(model_template.format(**globals()))
# Copy the data files to a GCS bucket
!gsutil -m cp -r trainer/*.npy $BUCKET_URI/$TRAINER_DIR
# List the contents of the bucket to ensure they were copied properly
!gsutil ls $BUCKET_URI/$TRAINER_DIR
# Set training job parameters
CMDARGS = [
f"--data-uri={BUCKET_URI}/{TRAINER_DIR}"
]
TRAIN_VERSION = "tf-cpu.2-6"
DEPLOY_VERSION = "tf2-cpu.2-6"
TRAIN_IMAGE = "us-docker.pkg.dev/vertex-ai/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "us-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(DEPLOY_VERSION)
# Re-run these additional parameters if you need to create a new training job
TIMESTAMP = str(datetime.datetime.now().time())
JOB_NAME = 'vertex_ai_training_' + TIMESTAMP
MODEL_DISPLAY_NAME = MODEL_NAME + TIMESTAMP
# Create and run the training job
job = aiplatform.CustomTrainingJob(
display_name=JOB_NAME,
script_path=f"{TRAINER_DIR}/task.py",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
)
model = job.run(
model_display_name=MODEL_DISPLAY_NAME,
args=CMDARGS,
)
DEPLOYED_NAME = f"{MODEL_NAME}_deployed-" + TIMESTAMP
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
machine_type="n1-standard-4",
min_replica_count=1,
max_replica_count=1,
traffic_split={"0": 100},
)
# Get predictions for the first test instance
raw_predictions = endpoint.predict(instances=X_test.tolist()).predictions[0]
predicted_values = inverse_scale(np.array([raw_predictions])).round()
actual_values = inverse_scale(np.array([y_test[0]]))
# Print prediction and compare to actual value
print('Predicted riders:', predicted_values)
print('Actual riders: ', actual_values)
delete_training_job = True
delete_model = True
delete_endpoint = True
# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False
# Delete the training job
job.delete()
# Delete the endpoint
endpoint.delete(force=True)
# Delete the model
model.delete()
# Warning: uncomment this section only if you want to delete the entire bucket
# if delete_bucket and "BUCKET" in globals():
# ! gsutil -m rm -r $BUCKET
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a Cloud Storage bucket
Step2: Load and preview the data
Step3: Process data
Step6: Scale values
Step7: Create sequences of time series data
Step8: Build a model and submit your training job to AI Platform
Step10: Prepare model code
Step11: Submit training job
Step12: Deploy the model
Step13: Get predictions on deployed model
Step14: Cleanup
|
14,457 | <ASSISTANT_TASK:>
Python Code:
import numpy
import toyplot
y = numpy.linspace(0, 1, 20) ** 2
toyplot.scatterplot(y, width=300);
canvas = toyplot.Canvas(600, 300)
canvas.axes(grid=(1, 2, 0)).plot(y)
canvas.axes(grid=(1, 2, 1)).plot(y, marker="o");
canvas = toyplot.Canvas(600, 300)
canvas.axes(grid=(1, 2, 0)).plot(y, marker="o", size=40)
canvas.axes(grid=(1, 2, 1)).plot(y, marker="o", size=100);
canvas = toyplot.Canvas(600, 300)
canvas.axes(grid=(1, 2, 0)).scatterplot(y, marker="x", size=100)
canvas.axes(grid=(1, 2, 1)).scatterplot(y, marker="^", size=100, mstyle={"stroke":toyplot.color.near_black});
canvas = toyplot.Canvas(600, 300)
canvas.axes(grid=(1, 2, 0)).plot(y, marker="o", size=50, style={"stroke":"darkgreen"})
canvas.axes(grid=(1, 2, 1)).plot(y, marker="o", size=50, mstyle={"stroke":"darkgreen"});
markers = [None, "","|","-","+","x","*","^",">","v","<","s","d","o","oo","o|","o-","o+","ox","o*"]
labels = [repr(marker) for marker in markers]
mstyle = {"stroke":toyplot.color.near_black, "fill":"#feb"}
canvas = toyplot.Canvas(800, 150)
axes = canvas.axes(xmin=-1, show=False)
axes.scatterplot(numpy.repeat(0, len(markers)), marker=markers, mstyle=mstyle, size=200)
axes.text(numpy.arange(len(markers)), numpy.repeat(-0.5, len(markers)), text=labels, fill=toyplot.color.near_black, style={"font-size":"16px"});
canvas = toyplot.Canvas(600, 300)
canvas.axes(grid=(1, 2, 0)).scatterplot(y, marker={"shape":"|", "angle":45}, size=100)
canvas.axes(grid=(1, 2, 1)).scatterplot(y, marker={"shape":"o", "label":"A"}, size=200, mlstyle={"fill":"white"});
custom_marker = {"shape":"path", "path":"m -165 45 c 8.483 6.576 17.276 11.581 26.38 15.013 c 9.101 3.431 18.562 5.146 28.381 5.146 c 9.723 0 17.801 -1.595 24.235 -4.789 c 6.434 -3.192 11.271 -7.887 14.513 -14.084 c 3.239 -6.194 4.861 -13.868 4.861 -23.02 h 6.72 c 0.19 2.384 0.286 5.054 0.286 8.007 c 0 15.92 -6.244 27.405 -18.73 34.458 c 12.01 -2.002 22.852 -4.74 32.528 -8.222 c 9.673 -3.478 20.231 -8.458 31.669 -14.94 l 4.003 7.148 c -13.346 8.389 -28.359 15.109 -45.038 20.16 c -16.682 5.054 -32.123 7.578 -46.325 7.578 c -13.44 0 -26.451 -2.12 -39.033 -6.363 c -12.583 -4.24 -22.877 -9.937 -30.884 -17.086 c -2.766 -2.667 -4.146 -5.098 -4.146 -7.291 c 0 -1.238 0.476 -2.335 1.43 -3.289 c 0.952 -0.951 2.048 -1.43 3.289 -1.43 c 1.236 0.000995874 3.191 1.002 5.861 3.004 Z m 140.262 -81.355 c 8.579 0 12.868 4.195 12.868 12.582 c 0 7.055 -2.288 13.654 -6.863 19.802 c -4.575 6.148 -10.701 11.059 -18.373 14.727 c -7.674 3.671 -15.942 5.505 -24.807 5.505 c -9.343 0 -17.943 -1.881 -25.808 -5.647 c -7.864 -3.765 -14.083 -8.815 -18.659 -15.156 c -4.575 -6.338 -6.863 -13.176 -6.863 -20.517 c 0 -3.908 1.048 -6.767 3.146 -8.579 c 2.096 -1.81 5.48 -2.716 10.151 -2.716 h 75.208 Z m -30.741 -8.00601 c -2.766 0 -4.839 -0.451 -6.219 -1.358 c -1.383 -0.905 -2.359 -2.549 -2.931 -4.933 c -0.572 -2.381 -0.858 -5.623 -0.858 -9.723 c 0 -6.863 1.477 -14.963 4.432 -24.306 c 0.19 -1.238 0.286 -2.049 0.286 -2.431 c 0 -0.952 -0.382 -1.43 -1.144 -1.43 c -0.857 0 -1.955 0.954 -3.288 2.859 c -5.815 8.579 -9.629 19.351 -11.438 32.313 c -0.478 3.528 -1.383 5.911 -2.717 7.149 c -1.336 1.24 -3.624 1.859 -6.863 1.859 h -11.724 c -4.29 0 -7.27 -0.69 -8.936 -2.073 c -1.669 -1.381 -2.502 -3.979 -2.502 -7.792 c 0 -5.147 1.573 -9.959 4.718 -14.441 c 2.096 -3.239 4.455 -6.005 7.078 -8.292 c 2.621 -2.288 7.696 -5.956 15.227 -11.009 c 4.669 -3.146 8.172 -5.885 10.509 -8.221 c 2.335 -2.335 4.169 -5.076 5.505 -8.221 c 0.858 -2.288 2.145 -3.432 3.86 -3.432 c 1.43 0 2.764 1.336 4.003 4.003 c 1.62 3.242 3.192 5.647 4.718 7.22 c 1.524 1.573 4.669 4.075 9.437 7.506 c 18.301 12.393 27.452 23.402 27.452 33.028 c 0 7.817 -4.576 11.724 -13.726 11.724 h -24.879 Z m 156.705 -2.57399 c 5.812 -8.007 12.152 -12.01 19.016 -12.01 c 2.953 0 5.719 0.764 8.293 2.288 c 2.574 1.526 4.598 3.503 6.076 5.934 c 1.477 2.431 2.217 4.933 2.217 7.506 c 0 4.576 1.381 6.863 4.146 6.863 c 1.43 0 3.479 -0.809 6.146 -2.431 c 3.336 -1.716 6.482 -2.574 9.438 -2.574 c 4.766 0 8.625 1.669 11.582 5.004 c 2.953 3.337 4.432 7.435 4.432 12.296 c 0 5.147 -1.383 8.985 -4.146 11.51 c -2.766 2.527 -7.148 4.028 -13.154 4.504 c -2.859 0.192 -4.695 0.525 -5.504 1.001 c -0.811 0.478 -1.215 1.526 -1.215 3.146 c 0 0.286 0.547 2.194 1.643 5.719 c 1.096 3.527 1.645 6.387 1.645 8.578 c 0 4.004 -1.5 7.245 -4.504 9.723 c -3.002 2.48 -6.887 3.718 -11.652 3.718 c -3.051 0 -8.006 -1.048 -14.869 -3.146 c -2.289 -0.762 -3.861 -1.144 -4.719 -1.144 c -1.43 0 -2.574 0.429 -3.432 1.286 c -0.857 0.858 -1.287 2.051 -1.287 3.575 c 0 1.336 0.715 3.527 2.145 6.576 c 1.145 2.288 1.717 4.433 1.717 6.435 c 0 3.623 -1.525 6.673 -4.576 9.15 c -3.051 2.479 -6.816 3.718 -11.295 3.718 c -3.051 0 -6.959 -1.001 -11.725 -3.003 c -3.812 -1.523 -6.244 -2.287 -7.291 -2.287 c -3.719 0 -5.576 2.812 -5.576 8.436 c 0 14.107 -9.057 21.16 -27.166 21.16 c -10.105 0 -19.588 -2.381 -28.453 -7.148 c -8.865 -4.766 -16.062 -11.39 -21.589 -19.874 c 10.867 -4.955 25.783 -13.916 44.751 -26.88 c 13.248 -8.673 24.043 -15.084 32.385 -19.23 c 8.34 -4.146 17.562 -7.601 27.666 -10.366 c 8.102 -2.381 19.396 -4.526 33.887 -6.434 c 1.047 0 1.572 
-0.286 1.572 -0.858 c 0 -1.144 -3.527 -1.716 -10.58 -1.716 c -12.393 0 -25.164 1.908 -38.318 5.719 c -14.68 4.481 -30.883 12.203 -48.613 23.163 c -14.488 8.579 -24.258 14.347 -29.311 17.301 c -5.053 2.955 -8.244 4.789 -9.578 5.504 c -1.335 0.715 -4.099 2.026 -8.293 3.933 c -3.146 -7.625 -4.718 -15.632 -4.718 -24.021 c 0 -6.099 0.809 -11.914 2.431 -17.443 c 1.62 -5.527 3.812 -10.317 6.577 -14.37 c 2.763 -4.05 5.955 -7.196 9.58 -9.437 c 3.621 -2.238 7.48 -3.36 11.58 -3.36 c 4.098 0 8.008 1.669 11.725 5.004 c 2.953 2.766 5.193 4.146 6.721 4.146 c 3.621 0 5.67 -2.953 6.146 -8.864 c 1.811 -16.394 8.959 -24.592 21.447 -24.592 c 3.812 0 7.006 0.979 9.58 2.931 c 2.574 1.955 5.004 5.219 7.291 9.794 c 2.002 3.431 4.291 5.147 6.863 5.147 c 3.61802 -0.00100613 7.909 -3.19301 12.866 -9.58 Z"}
canvas, axes, mark = toyplot.scatterplot(0, 0, size=0.1, marker=custom_marker, color="#004712", width=400);
axes.hlines(0, style={"stroke-width":0.1})
axes.vlines(0, style={"stroke-width":0.1});
x = numpy.linspace(0, 100, 10)
y = (0.1 * x) ** 2
canvas, axes, mark = toyplot.scatterplot(x, y, size=.015, color="#004712", marker=custom_marker, xlabel="Years", ylabel="Oak Tree Population", padding=25, width=600);
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Markers can also be added to regular plots to highlight the datums (they are turned-off by default)
Step2: You can use the size argument to control the size of the markers (note that the size argument is treated as an approximation of the area of the marker)
Step3: By default, the markers are small circles, but there are many alternatives
Step4: Note the use of the mstyle argument to override the appearance of the marker in the second example. For line plots, this allows you to style the lines and the markers separately
Step5: So far, we've been using string codes to specify different marker shapes. Here is every builtin marker shape in Toyplot, with their string codes
Step6: There are several items worth noting - first, you can pass a sequence of marker codes to the marker argument, to specify markers on a per-series or per-datum basis. Second, you can pass an empty string or None to produce an invisible marker, if you need to hide a datum or declutter the display. Third, note that several of the marker shapes contain internal details that require a contrasting stroke and fill to be visible.
Step7: Using the full marker specification allows you to control additional parameters such as the marker angle and label. Also note the mlstyle argument which controls the style of the marker label, independently of the marker itself.
Step8: Note that the SVG path must contain only relative coordinates, or the marker will not render correctly. In this example the marker was exported as SVG from a drawing application, the path was run through an online conversion process to convert absolute coordinates to relative coordinates, and the initial "move" (m) command was adjusted to center the graphic. For custom markers, the size argument currently acts as a simple scaling factor on the marker (this may change in the future). Here is an (admittedly silly) example of a custom marker at work
|
14,458 | <ASSISTANT_TASK:>
Python Code:
# Panda will be usefull for quick data parsing
import pandas as pd
import numpy as np
# Small trick to get a larger display
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
import matplotlib.pyplot as pl
%matplotlib inline
import pylab as pl
%pylab inline
pylab.rcParams['figure.figsize'] = (20,7)
pl.rcParams['figure.figsize'] = 20, 7
pl.rcParams['font.family'] = 'sans-serif'
pl.rcParams['font.sans-serif'] = ['DejaVu Sans']
pl.style.available
pl.style.use('ggplot')
# Create random datasets with numpy random module
x = np.arange(50)
y = np.random.rand(50)
#Plot y using default line style and color x is automatically inferred
pl.plot(y)
# Plot x and y without a line, using purple diamond markers
pl.plot(x, y+1, marker ='d', linewidth=0, color="purple")
# Plot x and y using a dashed dodgerblue line
pl.plot(x, y+2, color = 'dodgerblue', linestyle='--')
# Plot x and y using green triangle markers and a dash-dot line
pl.plot(x, y+3, color='green', linewidth=2, marker='>', linestyle="-.")
# Plot x and y using green circle markers and a solid line
pl.plot(x, y+4, color='green', linewidth=4, marker='o', linestyle="-")
pl.scatter (np.random.randn(200),np.random.randn(200), color="coral")
pl.scatter (np.random.randn(100)+2,np.random.randn(100)+3, color="lightgreen")
pl.scatter (np.random.randn(100)-2,np.random.randn(100)*4, color="dodgerblue")
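# Extra sketch (not in the original notebook): scatter can also map a third variable to colour
# through the c= and cmap= arguments; a colorbar then acts as a legend for that variable.
# ("viridis" ships with matplotlib >= 1.5; any named colormap works.)
vals = np.random.randn(300)
pl.figure()
pl.scatter(np.random.randn(300), np.random.randn(300), c=vals, cmap="viridis")
pl.colorbar()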
# Create random datasets with numpy random module
x = np.arange(10)
# If the x coordinates are the same, the bars are drawn at the same position
h1 = np.random.rand(10)
pl.bar(x, h1, width=0.2, color="dodgerblue")
# To create a stacked graph, the bottom of a series needs to correspond to the previous series
h2 = np.random.rand(10)
pl.bar(x, h2, bottom=h1, width=0.2, color="lightblue")
# Offset the x coordinate to add a new series and customize color and aspect
h3 = np.random.rand(10)
pl.bar(x+0.2, h3, width=0.2, color='salmon', linewidth=2, edgecolor="red")
# Add error bars with yerr
h4 = np.random.rand(10)
pl.bar(x+0.4, h4, width=0.2, color='green', yerr=np.random.randn(10)/10, ecolor="black")
# Generate 1000 values following a normal distribution and plot their histogram
x = np.random.randn(1000)
n, bins, patches = pl.hist(x=x, bins=30, histtype='bar')
print (n)
print (bins)
# Generate 2 series of 1000 values following a normal distribution
# Contrary to the first plot, this time the series are stacked
x = np.random.randn(1000, 2)
n, bins, patches = pl.hist(x=x, bins=30, histtype='barstacked')
# Generate a list of 1000 values following a normal distribution
# The plot is cumulative and drawn in step style
x = np.random.randn(1000)
n, bins, patches = pl.hist(x=x, bins=30, histtype='step', cumulative=True)
# Generate a list of 1000 values following a normal distribution
# The plot is rotated to horizontal orientation and represented in stepfilled style
x = np.random.randn(1000)
n, bins, patches = pl.hist(x=x, bins=30, histtype='stepfilled', orientation="horizontal")
# Size of the ploting area
pl.figure(figsize=(15,10))
# Customize X and Y limits
pl.xlim(-1,10)
pl.ylim(-0.5,1.5)
# Add X label, y label and a title
pl.xlabel("this is my x label", fontsize=15)
pl.ylabel("this is my Y label", fontsize=15)
pl.title("this is my title", fontsize=20)
# Add a grid
pl.grid(True, color="grey", linewidth=0.5, linestyle="--")
# finally plot the graphs
pl.plot(np.arange(10), np.random.rand(10), color="coral", marker=">", label = "series1")
pl.plot(np.arange(10), np.random.rand(10), color="dodgerblue", marker="<", label = "series2")
#Add the legend outside of the plotting area
pl.legend(bbox_to_anchor=(1, 1), loc=2, frameon=False, fontsize=15)
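# Extra sketch: adding the line below at the end of the plotting cell above writes the figure
# to disk; bbox_inches="tight" keeps the legend placed outside the axes from being cropped.
# pl.savefig("my_plot.png", dpi=150, bbox_inches="tight")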
pl.figure()
# First plot in the left half
pl.subplot(121)
pl.plot(np.arange(10), np.random.rand(10), label="1")
pl.plot(np.arange(10), np.random.rand(10), label="2")
pl.title("Series1")
pl.legend()
# Second plot in the right half
pl.subplot(122)
pl.plot(np.arange(10), np.random.rand(10), label="3")
pl.plot(np.arange(10), np.random.rand(10), label="4")
pl.title("Series2")
pl.legend()
pl.figure(figsize=(15,15))
# First plot in the top left corner
pl.subplot(221)
pl.plot(np.arange(10), np.random.rand(10))
# Second plot in the top right corner (commented out here, leaving the panel empty)
#pl.subplot(222)
#pl.plot(np.arange(10), np.random.rand(10))
# Third plot in the bottom left corner
pl.subplot(223)
pl.plot(np.arange(10), np.random.rand(10))
# Fourth plot in the bottom right corner
pl.subplot(224)
pl.plot(np.arange(10), np.random.rand(10))
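# Extra sketch (same idea, alternative API): pl.subplots builds the whole grid in one call and
# returns the figure plus an array of axes objects that can be iterated over.
fig, axes = pl.subplots(2, 2, figsize=(15, 15))
for ax in axes.flatten():
    ax.plot(np.arange(10), np.random.rand(10))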
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pyplot is the Matplotlib plotting interface; the inline magic embeds the generated graphs directly in the notebook
Step2: Alternatively, you can use pylab, which pulls matplotlib and numpy into a single namespace and simplifies the calls a little
Step3: We can define a default size for all plots that will be generated by matplotlib
Step4: Introduction to plotting with matplotlib
Step5: The stylesheet can also be defined by default
Step6: Let's use ggplot style (R style) for this notebook
Step7: Line plot
Step8: Scatter plot
Step9: Bar plot
Step10: Histogram
Step11: Customize the plotting area
Step12: The figure area can also be divided to plot several graphs side by side with the subplot command
|
14,459 | <ASSISTANT_TASK:>
Python Code:
#load packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
pleiades = pd.read_csv('pleiades.csv')
pleiades
pleiades.columns
pleiades.dtypes
pleiades_L = pleiades["Lbol"]
pleiades_T = pleiades["Teff"]
pleiades_L = pleiades_L - 2
pleiades_L_new = pd.to_numeric(pleiades_L, errors='coerce')
pleiades_T_new = pd.to_numeric(pleiades_T, errors='coerce')
# With "coerce", we are telling the to_numeric function to change any invalid entries to NaNs.
pleiades_L[0:10], pleiades_L_new[0:10], pleiades_T[0:10], pleiades_T_new[0:10]
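# Extra check (not part of the original lab): count how many entries were turned into NaNs by
# errors='coerce', to see how much of each column was unusable in the raw file.
print(pleiades_L_new.isnull().sum(), pleiades_T_new.isnull().sum())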
fig,ax = plt.subplots(figsize=(7,7))
ax.plot(pleiades_T_new, pleiades_L_new)
fig,ax = plt.subplots(figsize=(7,7))
ax.plot(pleiades_T_new, pleiades_L_new, 'o')
fig,ax = plt.subplots(figsize=(7,7))
ax.plot(pleiades_T_new, pleiades_L_new, 'go')
fig,ax = plt.subplots(figsize=(7,7))
ax.plot(pleiades_T_new, pleiades_L_new, 'go')
ax.set_yscale('log')
fig,ax = plt.subplots(figsize=(7,7))
ax.plot(pleiades_T_new, pleiades_L_new, 'go')
ax.set_yscale('log')
ax.set_xlim(11000,1000)
fig,ax = plt.subplots(figsize=(7,7))
ax.plot(pleiades_T_new, pleiades_L_new, 'go')
ax.set_yscale('log')
ax.set_xlim(11000,1000)
plt.title('H-R Diagram for the Pleiades')
plt.xlabel('Temperature (in K)')
plt.ylabel('log(Luminosity (in L$_{\odot}$))')
## Add code here to read in the other three files in the Lab 1 directory, and give them descriptive variable names.
## Add code here to identify the column labels for luminosity and temperature. Be careful - columns may not have
## the same names, and be sure to check the units of the quantities.
## Convert to pandas series and data types, if necessary.
## Plot the data in this cell and the following cells for each sample.
#some fake data
data_x = np.arange(0,100)
data_y = 3*data_x
data_y2 = data_x**2
data_y3 = data_x + 20
data_y4 = np.sqrt(data_x)
# multipanel plot example
fig,((ax1,ax2),(ax3,ax4)) = plt.subplots(2, 2, figsize=(10,10))
fig.suptitle('This is a title for my multipanel plot')
ax1.plot(data_x, data_y, 'go')
ax1.set_title('Figure 1 Title')
ax1.set_xlabel('x label')
ax1.set_ylabel('y label')
ax2.plot(data_x, data_y2, 'bo')
ax2.set_title('Figure 2 Title')
ax2.set_xlabel('x label')
ax2.set_ylabel('y label')
ax3.plot(data_x, data_y3, 'ro')
ax3.set_title('Figure 3 Title')
ax3.set_xlabel('x label')
ax3.set_ylabel('y label')
ax4.plot(data_x, data_y4, 'mo')
ax4.set_title('Figure 4 Title')
ax4.set_xlabel('x label')
ax4.set_ylabel('y label')
#overlay plot example
fig,ax = plt.subplots(figsize=(10,10))
plt.title('This is a title for my multipanel plot')
ax.plot(data_x, data_y, 'go', label='legend entry 1', alpha=0.5)
ax.plot(data_x, data_y2, 'bo', label='legend entry 2', alpha=0.5)
ax.plot(data_x, data_y3, 'ro', label='legend entry 3', alpha=0.5)
ax.plot(data_x, data_y4, 'mo', label='legend entry 4', alpha=0.5)
ax.set_title('Figure Title')
ax.set_xlabel('x label')
ax.set_ylabel('y label')
plt.legend(numpoints=1)
#TRY EXECUTING WITH AND WITHOUT THE FOLLOWING LINE. HERE AND IN THE DATA YOU'LL BE PLOTTING,
#A SUBJECTIVE DECISION MUST BE MADE ABOUT AXIS RANGES
ax.set_ylim(0,200)
## In this cell, create your own overlay plot showing the different populations. Hint: You may want to plot the
## sample with the most data points first.
## In this cell, create a multi-panel plot for the different populations.
## Your answers to each of the four questions here.
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First off, we'll need to read in the data with the pandas function read_csv. A basic example is given below
Step2: "pleiades" is now a pandas dataframe object, which is essentially a form of python table. To see what's stored in pleiades, execute the cell below. Anywhere you see ..., that means that there are a number of additional columns or rows that have been hidden.
Step3: Perhaps more useful are the pandas .columns and .dtypes methods. Execute the cells below and then edit the descriptions of what each does in the cell below (double click on this text to get into the markdown cell, where you can type regular text)
Step4: The two columns that we care about for this lab are the Temperature (Teff) and the Luminosity (Lbol). The units for these two columns are Kelvin and Solar luminosities, respectively. As you will label these later in your plots, I'll note here that there's a special trick for getting the sun symbol in a Markdown cell using the typesetting system LaTeX.
Step5: Note though from your .dtypes output above that both of these columns have dtype "object", which is not a data type that will allow us to manipulate them. For example, try executing the cell below, where we attempt to subtract the value 2 from each entry in the "pleiades_L" column. You should get an error...
Step6: So we want to convert the type of these pandas series to be numeric, which we do with pandas.to_numeric, as below
Step7: Now let's print the first ten elements of each array to verify that nothing weird happened during this conversion.
Step8: Now let's plot these two quantities against one another. There are many ways to plot in pyplot, but I'll use the one that I find to work most consistently and intuitively below.
Step9: Yikes, that's ugly, right? That's because the default plot symbol is a line connecting all the points. In this case what we really want is a so-called scatterplot, which we can do easily by specifying the plotting marker right after the y variable, as below. Below I use 'o', which stands for the circle symbol. For the full list of matplotlib symbols, see this link.
Step10: Note the default plotting color is blue, but you can change this easily by adding a color shorthand before the marker shorthand. Below I use 'g' for green, but here again, there are lots of options, as outlined at this link.
Step11: Colors notwithstanding, this plot is still very ugly and should not look much like an H-R diagram to you. For one thing, H-R diagrams usually have log(Luminosity) on the y-axis, which you do as follows
Step12: OK that's much nicer, but should still look backwards to you, because we always draw H-R Diagrams with the Temperature axis running from high to low temperature. This is a pretty easy fix too.
Step13: OK! This is starting to look like a good H-R diagram, but without a plot title or axis labels, it's still not very good, so let's add those.
Step14: Exercise 1
Step15: Exercise 2 (Multi-Panel Plots)
Step16: Exercise 3 (Comprehension Questions)
|
14,460 | <ASSISTANT_TASK:>
Python Code:
def topo_sort(T, D):
Parents = { t: set() for t in T } # dictionary of parents
Children = { t: set() for t in T } # dictionary of children
for s, t in D:
Children[s].add(t)
Parents [t].add(s)
Orphans = { t for (t, P) in Parents.items() if len(P) == 0 }
Sorted = []
count = 0
Order = {}
while len(T) > 0:
assert Orphans != set(), 'The graph is cyclic!'
t = Orphans.pop()
Order[t] = count
count += 1
Orphans -= { t }
T -= { t }
Sorted.append(t)
for s in Children[t]:
Parents[s] -= { t }
if Parents[s] == set():
Orphans.add(s)
return Sorted
def topo_sort(T, D):
print('_' * 100)
display(toDot(D))
Parents = { t: set() for t in T } # dictionary of parents
Children = { t: set() for t in T } # dictionary of children
for s, t in D:
Children[s].add(t)
Parents [t].add(s)
Orphans = { t for (t, P) in Parents.items() if len(P) == 0 }
Sorted = []
count = 0
Order = {}
while len(T) > 0:
assert Orphans != set(), 'The graph is cyclic!'
t = Orphans.pop()
Order[t] = count
count += 1
Orphans -= { t }
T -= { t }
Sorted.append(t)
for s in Children[t]:
Parents[s] -= { t }
if Parents[s] == set():
Orphans.add(s)
print('_' * 80)
display(toDot(D, Order))
return Sorted
import graphviz as gv
def toDot(Edges, Order={}):
V = set()
for x, y in Edges:
V.add(x)
V.add(y)
dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})
dot.attr(rankdir='LR', size='8,5')
for x in V:
o = Order.get(x, None)
if o != None:
dot.node(str(x), label='{' + str(x) + '|' + str(o) + '}')
else:
dot.node(str(x))
for u, v in Edges:
dot.edge(str(u), str(v))
return dot
def demo():
T = { 5, 7, 3, 11, 8, 2, 9, 10 }
D = { (5, 11), (7, 11), (7, 8), (3, 8), (3, 10), (11, 2), (11, 9), (11, 10), (8, 9) }
S = topo_sort(T, D)
print(S)
demo()
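# Extra check (not part of the original demo): verify that a computed order really is
# topological, i.e. every dependency (s, t) has s placed before t in the returned list.
def is_topological(Sorted, Dependencies):
    Position = { t: i for (i, t) in enumerate(Sorted) }
    return all(Position[s] < Position[t] for (s, t) in Dependencies)

T = { 5, 7, 3, 11, 8, 2, 9, 10 }
D = { (5, 11), (7, 11), (7, 8), (3, 8), (3, 10), (11, 2), (11, 9), (11, 10), (8, 9) }
print(is_topological(topo_sort(set(T), set(D)), D))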
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Graphical Representation
Step2: The function toDot(Edges, Order) takes two arguments
Step3: Testing
|
14,461 | <ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_transform as tft
print('TF version: {}'.format(tf.__version__))
print('TFT version: {}'.format(tft.__version__))
print('TFDV version: {}'.format(tfdv.__version__))
PROJECT = 'cloud-training-demos' # Replace with your PROJECT
BUCKET = 'cloud-training-demos-ml' # Replace with your BUCKET
REGION = 'us-central1' # Choose an available region for Cloud MLE
import os
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
## ensure we predict locally with our current Python environment
gcloud config set ml_engine/local_python `which python`
DATA_DIR='gs://cloud-samples-data/ml-engine/census/data'
import os
TRAIN_DATA_FILE = os.path.join(DATA_DIR, 'adult.data.csv')
EVAL_DATA_FILE = os.path.join(DATA_DIR, 'adult.test.csv')
!gsutil ls -l $TRAIN_DATA_FILE
!gsutil ls -l $EVAL_DATA_FILE
HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',
'marital_status', 'occupation', 'relationship', 'race', 'gender',
'capital_gain', 'capital_loss', 'hours_per_week',
'native_country', 'income_bracket']
TARGET_FEATURE_NAME = 'income_bracket'
TARGET_LABELS = [' <=50K', ' >50K']
WEIGHT_COLUMN_NAME = 'fnlwgt_scaled' # note that the column name was changed in the tft preprocessing
RAW_SCHEMA_LOCATION = 'raw_schema.pbtxt'
PREPROC_OUTPUT_DIR = 'gs://{}/census/tfx'.format(BUCKET) # from 02_transform.ipynb
TRANSFORM_ARTIFACTS_DIR = os.path.join(PREPROC_OUTPUT_DIR,'transform')
TRANSFORMED_DATA_DIR = os.path.join(PREPROC_OUTPUT_DIR,'transformed')
!gsutil ls $TRANSFORM_ARTIFACTS_DIR
!gsutil ls $TRANSFORMED_DATA_DIR
transform_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
def make_input_fn(tfrecords_files,
batch_size, num_epochs=1, shuffle=False):
def input_fn():
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=tfrecords_files,
batch_size=batch_size,
features=transform_output.transformed_feature_spec(),
label_key=TARGET_FEATURE_NAME,
reader=tf.data.TFRecordDataset,
num_epochs=num_epochs,
shuffle=shuffle
)
return dataset
return input_fn
make_input_fn(TRANSFORMED_DATA_DIR+'/train*.tfrecords', 2, shuffle=False)()
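# A quick hedged sanity check (not in the original notebook, assumes the TF 1.x graph-mode
# runtime used throughout): pull one small batch out of the dataset and look at the transformed
# feature keys and the label tensor values.
sample_dataset = make_input_fn(TRANSFORMED_DATA_DIR+'/train*.tfrecords', 2, shuffle=False)()
sample_features, sample_target = sample_dataset.make_one_shot_iterator().get_next()
with tf.Session() as sess:
    batch_features, batch_target = sess.run([sample_features, sample_target])
    print(sorted(batch_features.keys()))
    print(batch_target)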
import math
def create_feature_columns():
feature_columns = []
transformed_features = transform_output.transformed_metadata.schema._schema_proto.feature
for feature in transformed_features:
if feature.name in [TARGET_FEATURE_NAME, WEIGHT_COLUMN_NAME]:
continue
if hasattr(feature, 'int_domain') and feature.int_domain.is_categorical:
vocab_size = feature.int_domain.max + 1
feature_columns.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
feature.name, num_buckets=vocab_size),
dimension = int(math.sqrt(vocab_size))))
else:
feature_columns.append(
tf.feature_column.numeric_column(feature.name))
return feature_columns
create_feature_columns()
def create_estimator(params, run_config):
feature_columns = create_feature_columns()
estimator = tf.estimator.DNNClassifier(
weight_column=WEIGHT_COLUMN_NAME,
label_vocabulary=TARGET_LABELS,
feature_columns=feature_columns,
hidden_units=params.hidden_units,
config=run_config
)
return estimator
from datetime import datetime
def run_experiment(estimator, params, run_config, resume=False):
tf.logging.set_verbosity(tf.logging.INFO)
if not resume:
if tf.gfile.Exists(run_config.model_dir):
print("Removing previous artifacts...")
tf.gfile.DeleteRecursively(run_config.model_dir)
else:
print("Resuming training...")
train_spec = tf.estimator.TrainSpec(
input_fn = make_input_fn(
TRANSFORMED_DATA_DIR+'/train*.tfrecords',
batch_size=params.batch_size,
num_epochs=None,
shuffle=True
),
max_steps=params.max_steps
)
eval_spec = tf.estimator.EvalSpec(
input_fn = make_input_fn(
TRANSFORMED_DATA_DIR+'/eval*.tfrecords',
batch_size=params.batch_size,
),
start_delay_secs=0,
throttle_secs=0,
steps=None
)
time_start = datetime.utcnow()
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")
tf.estimator.train_and_evaluate(
estimator=estimator,
train_spec=train_spec,
eval_spec=eval_spec)
time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
MODELS_LOCATION = 'models/census'
MODEL_NAME = 'dnn_classifier'
model_dir = os.path.join(MODELS_LOCATION, MODEL_NAME)
os.environ['MODEL_DIR'] = model_dir
params = tf.contrib.training.HParams()
params.hidden_units = [128, 64]
params.dropout = 0.15
params.batch_size = 128
params.max_steps = 1000
run_config = tf.estimator.RunConfig(
tf_random_seed=19831006,
save_checkpoints_steps=200,
keep_checkpoint_max=3,
model_dir=model_dir,
log_step_count_steps=10
)
estimator = create_estimator(params, run_config)
run_experiment(estimator, params, run_config)
tf.logging.set_verbosity(tf.logging.ERROR)
def make_serving_input_receiver_fn():
from tensorflow_transform.tf_metadata import schema_utils
source_raw_schema = tfdv.load_schema_text(RAW_SCHEMA_LOCATION)
raw_feature_spec = schema_utils.schema_as_feature_spec(source_raw_schema).feature_spec
raw_feature_spec.pop(TARGET_FEATURE_NAME)
if WEIGHT_COLUMN_NAME in raw_feature_spec:
raw_feature_spec.pop(WEIGHT_COLUMN_NAME)
# Create the interface for the serving function with the raw features
raw_features = tf.estimator.export.build_parsing_serving_input_receiver_fn(raw_feature_spec)().features
receiver_tensors = {feature: tf.placeholder(shape=[None], dtype=raw_features[feature].dtype)
for feature in raw_features
}
receiver_tensors_expanded = {tensor: tf.reshape(receiver_tensors[tensor], (-1, 1))
for tensor in receiver_tensors
}
# Apply the transform function
transformed_features = transform_output.transform_raw_features(receiver_tensors_expanded)
return tf.estimator.export.ServingInputReceiver(
transformed_features, receiver_tensors)
export_dir = os.path.join(model_dir, 'export')
if tf.gfile.Exists(export_dir):
tf.gfile.DeleteRecursively(export_dir)
estimator.export_savedmodel(
export_dir_base=export_dir,
serving_input_receiver_fn=make_serving_input_receiver_fn
)
%%bash
saved_models_base=${MODEL_DIR}/export/
saved_model_dir=${MODEL_DIR}/export/$(ls ${saved_models_base} | tail -n 1)
echo ${saved_model_dir}
saved_model_cli show --dir=${saved_model_dir} --all
export_dir = os.path.join(model_dir, 'export')
tf.gfile.ListDirectory(export_dir)[-1]
saved_model_dir = os.path.join(export_dir, tf.gfile.ListDirectory(export_dir)[-1])
print(saved_model_dir)
print()
predictor_fn = tf.contrib.predictor.from_saved_model(
export_dir = saved_model_dir,
signature_def_key="predict"
)
input = {
'age': [34.0],
'workclass': ['Private'],
'education': ['Doctorate'],
'education_num': [10.0],
'marital_status': ['Married-civ-spouse'],
'occupation': ['Prof-specialty'],
'relationship': ['Husband'],
'race': ['White'],
'gender': ['Male'],
'capital_gain': [0.0],
'capital_loss': [0.0],
'hours_per_week': [40.0],
'native_country':['Mexico']
}
print(input)
print()
output = predictor_fn(input)
print(output)
#%%bash
#MODEL_NAME="census"
#MODEL_VERSION="v1"
#MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/census/dnn_classifier/export/exporter | tail -1)
#gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
#gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version 1.13
HEADER_DEFAULTS = [[0], [''], [0], [''], [0], [''], [''], [''], [''], [''],
[0], [0], [0], [''], ['']]
def make_eval_input_receiver_fn():
receiver_tensors = {'examples': tf.placeholder(dtype=tf.string, shape=[None])}
columns = tf.decode_csv(receiver_tensors['examples'], record_defaults=HEADER_DEFAULTS)
features = dict(zip(HEADER, columns))
print(features)
for feature_name in features:
if features[feature_name].dtype == tf.int32:
features[feature_name] = tf.cast(features[feature_name], tf.int64)
features[feature_name] = tf.reshape(features[feature_name], (-1, 1))
transformed_features = transform_output.transform_raw_features(features)
features.update(transformed_features)
return tfma.export.EvalInputReceiver(
features=features,
receiver_tensors=receiver_tensors,
labels=features[TARGET_FEATURE_NAME]
)
import tensorflow_model_analysis as tfma
eval_model_dir = os.path.join(model_dir, "export/evaluate")
if tf.gfile.Exists(eval_model_dir):
tf.gfile.DeleteRecursively(eval_model_dir)
tfma.export.export_eval_savedmodel(
estimator=estimator,
export_dir_base=eval_model_dir,
eval_input_receiver_fn=make_eval_input_receiver_fn
)
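# Hedged sketch only -- the exact TFMA call signatures below are assumed for the TFMA version
# paired with this notebook and may need adjusting: analyze the exported eval saved model over
# the raw CSV eval split and render sliced metrics inline.
eval_saved_model_path = os.path.join(eval_model_dir, tf.gfile.ListDirectory(eval_model_dir)[-1])
eval_shared_model = tfma.default_eval_shared_model(eval_saved_model_path=eval_saved_model_path)
eval_result = tfma.run_model_analysis(
    eval_shared_model=eval_shared_model,
    data_location=EVAL_DATA_FILE,
    file_format='text')
tfma.view.render_slicing_metrics(eval_result)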
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <img valign="middle" src="images/tfx.jpeg">
Step2: 3. Model Training
Step3: 3.2 TFRecords Input Function
Step4: 3.3 Create feature columns
Step5: 3.4 Instantiate and Estimator
Step6: 3.5 Implement train and evaluate experiment
Step7: 3.5 Run experiment
Step8: 3.6 Export the model for serving
Step9: 3.7 Try out saved model
Step10: 3.8 Deploy model to Cloud ML Engine
Step11: 3.9 Export evaluation saved model
|
14,462 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
import colab
!pip install --upgrade pip
except:
pass
!pip install -U tfx
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import os
# We will create two pipelines. One for schema generation and one for training.
SCHEMA_PIPELINE_NAME = "penguin-tfdv-schema"
PIPELINE_NAME = "penguin-tfdv"
# Output directory to store artifacts generated from the pipeline.
SCHEMA_PIPELINE_ROOT = os.path.join('pipelines', SCHEMA_PIPELINE_NAME)
PIPELINE_ROOT = os.path.join('pipelines', PIPELINE_NAME)
# Path to a SQLite DB file to use as an MLMD storage.
SCHEMA_METADATA_PATH = os.path.join('metadata', SCHEMA_PIPELINE_NAME,
'metadata.db')
METADATA_PATH = os.path.join('metadata', PIPELINE_NAME, 'metadata.db')
# Output directory where created models from the pipeline will be exported.
SERVING_MODEL_DIR = os.path.join('serving_model', PIPELINE_NAME)
from absl import logging
logging.set_verbosity(logging.INFO) # Set default logging level.
import urllib.request
import tempfile
DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data') # Create a temporary directory.
_data_url = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'
_data_filepath = os.path.join(DATA_ROOT, "data.csv")
urllib.request.urlretrieve(_data_url, _data_filepath)
!head {_data_filepath}
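# Optional sketch (not part of the pipeline itself): TFDV can profile the CSV directly, which
# previews the statistics that the StatisticsGen component will compute inside the pipeline.
import tensorflow_data_validation as tfdv
stats = tfdv.generate_statistics_from_csv(_data_filepath)
tfdv.visualize_statistics(stats)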
def _create_schema_pipeline(pipeline_name: str,
pipeline_root: str,
data_root: str,
metadata_path: str) -> tfx.dsl.Pipeline:
  """Creates a pipeline for schema generation."""
# Brings data into the pipeline.
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
# NEW: Computes statistics over data for visualization and schema generation.
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
# NEW: Generates schema based on the generated statistics.
schema_gen = tfx.components.SchemaGen(
statistics=statistics_gen.outputs['statistics'], infer_feature_shape=True)
components = [
example_gen,
statistics_gen,
schema_gen,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
metadata_connection_config=tfx.orchestration.metadata
.sqlite_metadata_connection_config(metadata_path),
components=components)
tfx.orchestration.LocalDagRunner().run(
_create_schema_pipeline(
pipeline_name=SCHEMA_PIPELINE_NAME,
pipeline_root=SCHEMA_PIPELINE_ROOT,
data_root=DATA_ROOT,
metadata_path=SCHEMA_METADATA_PATH))
from ml_metadata.proto import metadata_store_pb2
# Non-public APIs, just for showcase.
from tfx.orchestration.portable.mlmd import execution_lib
# TODO(b/171447278): Move these functions into the TFX library.
def get_latest_artifacts(metadata, pipeline_name, component_id):
  """Output artifacts of the latest run of the component."""
context = metadata.store.get_context_by_type_and_name(
'node', f'{pipeline_name}.{component_id}')
executions = metadata.store.get_executions_by_context(context.id)
latest_execution = max(executions,
key=lambda e:e.last_update_time_since_epoch)
return execution_lib.get_artifacts_dict(metadata, latest_execution.id,
[metadata_store_pb2.Event.OUTPUT])
# Non-public APIs, just for showcase.
from tfx.orchestration.experimental.interactive import visualizations
def visualize_artifacts(artifacts):
  """Visualizes artifacts using standard visualization modules."""
for artifact in artifacts:
visualization = visualizations.get_registry().get_visualization(
artifact.type_name)
if visualization:
visualization.display(artifact)
from tfx.orchestration.experimental.interactive import standard_visualizations
standard_visualizations.register_standard_visualizations()
# Non-public APIs, just for showcase.
from tfx.orchestration.metadata import Metadata
from tfx.types import standard_component_specs
metadata_connection_config = tfx.orchestration.metadata.sqlite_metadata_connection_config(
SCHEMA_METADATA_PATH)
with Metadata(metadata_connection_config) as metadata_handler:
# Find output artifacts from MLMD.
stat_gen_output = get_latest_artifacts(metadata_handler, SCHEMA_PIPELINE_NAME,
'StatisticsGen')
stats_artifacts = stat_gen_output[standard_component_specs.STATISTICS_KEY]
schema_gen_output = get_latest_artifacts(metadata_handler,
SCHEMA_PIPELINE_NAME, 'SchemaGen')
schema_artifacts = schema_gen_output[standard_component_specs.SCHEMA_KEY]
# docs-infra: no-execute
visualize_artifacts(stats_artifacts)
visualize_artifacts(schema_artifacts)
import shutil
_schema_filename = 'schema.pbtxt'
SCHEMA_PATH = 'schema'
os.makedirs(SCHEMA_PATH, exist_ok=True)
_generated_path = os.path.join(schema_artifacts[0].uri, _schema_filename)
# Copy the 'schema.pbtxt' file from the artifact uri to a predefined path.
shutil.copy(_generated_path, SCHEMA_PATH)
print(f'Schema at {SCHEMA_PATH}-----')
!cat {SCHEMA_PATH}/*
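# Hedged sketch: the copied schema file can be read back with TFDV and displayed as a table,
# which is convenient when reviewing or hand-editing the curated schema.
import tensorflow_data_validation as tfdv
schema = tfdv.load_schema_text(os.path.join(SCHEMA_PATH, _schema_filename))
tfdv.display_schema(schema)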
_trainer_module_file = 'penguin_trainer.py'
%%writefile {_trainer_module_file}
from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_metadata.proto.v0 import schema_pb2
# We don't need to specify _FEATURE_KEYS and _FEATURE_SPEC any more.
# Those information can be read from the given schema file.
_LABEL_KEY = 'species'
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
def _input_fn(file_pattern: List[str],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema,
batch_size: int = 200) -> tf.data.Dataset:
  """Generates features and label for training.

  Args:
    file_pattern: List of paths or patterns of input tfrecord files.
    data_accessor: DataAccessor for converting input to RecordBatch.
    schema: schema of the input data.
    batch_size: representing the number of consecutive elements of returned
      dataset to combine in a single batch
  Returns:
    A dataset that contains (features, indices) tuple where features is a
    dictionary of Tensors, and indices is a single Tensor of label indices.
  """
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
schema=schema).repeat()
def _build_keras_model(schema: schema_pb2.Schema) -> tf.keras.Model:
  """Creates a DNN Keras model for classifying penguin data.

  Returns:
    A Keras Model.
  """
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
# ++ Changed code: Uses all features in the schema except the label.
feature_keys = [f.name for f in schema.feature if f.name != _LABEL_KEY]
inputs = [keras.layers.Input(shape=(1,), name=f) for f in feature_keys]
# ++ End of the changed code.
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.summary(print_fn=logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
  """Train the model based on given args.

  Args:
    fn_args: Holds args used to train the model as name/value pairs.
  """
# ++ Changed code: Reads in schema file passed to the Trainer component.
schema = tfx.utils.parse_pbtxt_file(fn_args.schema_path, schema_pb2.Schema())
# ++ End of the changed code.
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
schema,
batch_size=_TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
schema,
batch_size=_EVAL_BATCH_SIZE)
model = _build_keras_model(schema)
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
# The result of the training should be saved in `fn_args.serving_model_dir`
# directory.
model.save(fn_args.serving_model_dir, save_format='tf')
def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
schema_path: str, module_file: str, serving_model_dir: str,
metadata_path: str) -> tfx.dsl.Pipeline:
  """Creates a pipeline using predefined schema with TFX."""
# Brings data into the pipeline.
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
# Computes statistics over data for visualization and example validation.
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
# NEW: Import the schema.
schema_importer = tfx.dsl.Importer(
source_uri=schema_path,
artifact_type=tfx.types.standard_artifacts.Schema).with_id(
'schema_importer')
# NEW: Performs anomaly detection based on statistics and data schema.
example_validator = tfx.components.ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_importer.outputs['result'])
# Uses user-provided Python function that trains a model.
trainer = tfx.components.Trainer(
module_file=module_file,
examples=example_gen.outputs['examples'],
schema=schema_importer.outputs['result'], # Pass the imported schema.
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=5))
# Pushes the model to a filesystem destination.
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=serving_model_dir)))
components = [
example_gen,
# NEW: Following three components were added to the pipeline.
statistics_gen,
schema_importer,
example_validator,
trainer,
pusher,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
metadata_connection_config=tfx.orchestration.metadata
.sqlite_metadata_connection_config(metadata_path),
components=components)
tfx.orchestration.LocalDagRunner().run(
_create_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
data_root=DATA_ROOT,
schema_path=SCHEMA_PATH,
module_file=_trainer_module_file,
serving_model_dir=SERVING_MODEL_DIR,
metadata_path=METADATA_PATH))
metadata_connection_config = tfx.orchestration.metadata.sqlite_metadata_connection_config(
METADATA_PATH)
with Metadata(metadata_connection_config) as metadata_handler:
ev_output = get_latest_artifacts(metadata_handler, PIPELINE_NAME,
'ExampleValidator')
anomalies_artifacts = ev_output[standard_component_specs.ANOMALIES_KEY]
visualize_artifacts(anomalies_artifacts)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data validation using TFX Pipeline and TensorFlow Data Validation
Step2: Install TFX
Step3: Did you restart the runtime?
Step4: Set up variables
Step5: Prepare example data
Step6: Take a quick look at the CSV file.
Step8: You should be able to see five feature columns. species is one of 0, 1 or 2,
Step9: Run the pipeline
Step12: You should see "INFO
Step13: Now we can examine the outputs from the pipeline execution.
Step14: It is time to examine the outputs from each component. As described above,
Step15: (image shown only on the documentation site)
Step16: This schema is automatically inferred from the output of StatisticsGen. You
Step17: The schema file uses
Step21: You should be sure to review and possibly edit the schema definition as
Step23: Now you have completed all preparation steps to build a TFX pipeline for
Step24: Run the pipeline
Step25: You should see "INFO
Step26: ExampleAnomalies from the ExampleValidator can be visualized as well.
|
14,463 | <ASSISTANT_TASK:>
Python Code:
# Useful starting lines
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
%load_ext autoreload
%autoreload 2
1
x = [2,3,4]
def my_function(l):
l.append(12)
my_function(x)
x
# Matplotlib is used for plotting, plots are directly embedded in the
# notebook thanks to the '%matplotlib inline' command at the beginning
plt.hist(np.random.randn(10000), bins=40)
plt.xlabel('X label')
plt.ylabel('Y label')
np.multiply
np.zeros(4)
np.eye(3)
np.array([[1,3,4],[2,5,6]])
np.arange(10) # NB : np.array(range(10)) is a slightly more complicated equivalent
np.random.randn(3, 4) # normal distributed values
# 3-D tensor
tensor_3 = np.ones((2, 4, 2))
tensor_3
tensor_3.shape, tensor_3.dtype
a = np.array([[1.0, 2.0], [5.0, 4.0]])
b = np.array([[4, 3], [2, 1]])
(b.dtype, a.dtype) # each array has a data type (casting rules apply for int -> float)
np.array(["Mickey", "Mouse"]) # can hold more than just numbers
a = np.array([[1.0, 2.0], [5.0, 4.0]])
b = a # Copying the reference only
b[0,0] = 3
a
a = np.array([[1.0, 2.0], [5.0, 4.0]])
b = a.copy() # Deep-copy of the data
b[0,0] = 3
a
np.ones((2, 4)) * np.random.randn(2, 4)
np.eye(3) - np.ones((3,3))
print(a)
print(a.shape) # Get shape
print(a.shape[0]) # Get size of first dimension
print(a[0]) # Get first line (slice for the first dimension)
print(a[:, 1]) # Get second column (slice for the second dimension)
print(a[0, 1]) # Get first line second column element
a = np.array([[1.0, 2.0], [5.0, 4.0]])
b = np.array([[4, 3], [2, 1]])
v = np.array([0.5, 2.0])
print(a)
print(a.T) # Equivalent : a.transpose(), np.transpose(a)
print(a.ravel())
c = np.random.randn(4,5)
print(c.shape)
print(c[np.newaxis].shape) # Adding a dimension
print(c.T.shape)
print(c.reshape([10,2]).shape)
print(c)
print(c.reshape([10,2]))
a.reshape((-1, 1)) # the -1 means 'whatever size is needed to fit the remaining elements'
np.sum(a), np.sum(a, axis=0), np.sum(a, axis=1) # reduce-operations reduce the whole array if no axis is specified
np.dot(a, b) # matrix multiplication
# Other ways of writing matrix multiplication, the '@' operator for matrix multiplication
# was introduced in Python 3.5
np.allclose(a.dot(b), a @ b)
# For other linear algebra operations, use the np.linalg module
np.linalg.eig(a) # Eigen-decomposition
print(np.linalg.inv(a)) # Inverse
np.allclose(np.linalg.inv(a) @ a, np.identity(a.shape[1])) # a^-1 * a = Id
np.linalg.solve(a, v) # solves ax = v
np.hstack([a, b])
np.vstack([a, b])
np.vstack([a, b]) + v # broadcasting
np.hstack([a, b]) + v # does not work
np.hstack([a, b]) + v.T # transposing a 1-D array achieves nothing
np.hstack([a, b]) + v.reshape((-1, 1)) # reshaping to convert v from a (2,) vector to a (2,1) matrix
np.hstack([a, b]) + v[:, np.newaxis] # equivalently, we can add an axis
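# Extra example: broadcasting a column vector against a row vector builds a full "outer" grid
# in one go, e.g. every pairwise sum of two small ranges
col = np.arange(3).reshape((-1, 1)) # shape (3, 1)
row = np.arange(4)                  # shape (4,)
col + row                           # shape (3, 4)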
r = np.random.randint(0, 10, size=(3, 4)) # random integers in [0, 9]
r
r[0], r[1]
r[0:2]
r[1][2] # regular python
r[1, 2] # numpy
r[:, 1:3]
r > 5 # Binary element-wise result
r[r > 5] # Use the binary mask as filter
r[r > 5] = 999 # Modify the corresponding values with a constant
r
# Get the indices where the condition is true, gives a tuple whose length
# is the number of dimensions of the input array
np.where(r == 999)
print(np.where(np.arange(10) < 5)) # Is a 1-tuple
np.where(np.arange(10) < 5)[0] # Accessing the first element gives the indices array
np.where(r == 999, -10, r+1000) # Ternary condition, if True take element from first array, otherwise from second
r[(np.array([1,2]), np.array([2,2]))] # Gets the elements corresponding to the indices (a copy, not a view). NB : tuple of arrays as indexing
numbers = np.random.randn(1000, 1000)
%%timeit # Naive version
my_sum = 0
for n in numbers.ravel():
if n>0:
my_sum += n
%timeit np.sum(numbers[numbers > 0])
X = np.random.randn(10000)
%%timeit # Naive version
my_result = np.zeros(len(X))
for i, x in enumerate(X.ravel()):
my_result[i] = 1 + x + x**2 + x**3 + x**4
%timeit 1 + X + X**2 + X**3 + X**4
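# numpy also ships a helper for this: np.polyval takes the coefficients from the highest degree
# down, so the call below evaluates the same polynomial 1 + X + X**2 + X**3 + X**4
%timeit np.polyval([1, 1, 1, 1, 1], X)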
X = np.random.randn(1000)
from scipy.fftpack import fft
plt.plot(fft(X).real)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notebook Basics
Step2: Numpy Basics
Step3: Creation of arrays
Step4: ndarray basics
Step5: Basic operators work element-wise (+, -, *, /)
Step6: Accessing elements and slicing
Step7: Changing the shape of arrays
Step8: Reduction operations
Step9: Linear-algebra operations
Step10: Grouping operations
Step11: Working on subset of the elements
Step12: Binary masks
Step13: Working with indices
Step14: Working with arrays, examples
Step15: Compute polynomial for a lot of values
Step16: Scipy
|
14,464 | <ASSISTANT_TASK:>
Python Code:
# Authors: Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import numpy as np
from scipy import stats
from functools import partial
import matplotlib.pyplot as plt
# this changes hidden MPL vars:
from mpl_toolkits.mplot3d import Axes3D # noqa
from mne.stats import (spatio_temporal_cluster_1samp_test,
bonferroni_correction, ttest_1samp_no_p)
try:
from sklearn.feature_extraction.image import grid_to_graph
except ImportError:
from scikits.learn.feature_extraction.image import grid_to_graph
print(__doc__)
width = 40
n_subjects = 10
signal_mean = 100
signal_sd = 100
noise_sd = 0.01
gaussian_sd = 5
sigma = 1e-3 # sigma for the "hat" method
threshold = -stats.distributions.t.ppf(0.05, n_subjects - 1)
threshold_tfce = dict(start=0, step=0.2)
n_permutations = 1024 # number of clustering permutations (1024 for exact)
n_src = width * width
connectivity = grid_to_graph(width, width)
# For each "subject", make a smoothed noisy signal with a centered peak
rng = np.random.RandomState(42)
X = noise_sd * rng.randn(n_subjects, width, width)
# Add a signal at the dead center
X[:, width // 2, width // 2] = signal_mean + rng.randn(n_subjects) * signal_sd
# Spatially smooth with a 2D Gaussian kernel
size = width // 2 - 1
gaussian = np.exp(-(np.arange(-size, size + 1) ** 2 / float(gaussian_sd ** 2)))
for si in range(X.shape[0]):
for ri in range(X.shape[1]):
X[si, ri, :] = np.convolve(X[si, ri, :], gaussian, 'same')
for ci in range(X.shape[2]):
X[si, :, ci] = np.convolve(X[si, :, ci], gaussian, 'same')
X = X.reshape((n_subjects, 1, n_src))
T_obs, clusters, p_values, H0 = \
spatio_temporal_cluster_1samp_test(X, n_jobs=1, threshold=threshold,
connectivity=connectivity,
tail=1, n_permutations=n_permutations)
# Let's put the cluster data in a readable format
ps = np.zeros(width * width)
for cl, p in zip(clusters, p_values):
ps[cl[1]] = -np.log10(p)
ps = ps.reshape((width, width))
T_obs = T_obs.reshape((width, width))
# To do a Bonferroni correction on these data is simple:
p = stats.distributions.t.sf(T_obs, n_subjects - 1)
p_bon = -np.log10(bonferroni_correction(p)[1])
# Now let's do some clustering using the standard method with "hat":
stat_fun = partial(ttest_1samp_no_p, sigma=sigma)
T_obs_hat, clusters, p_values, H0 = \
spatio_temporal_cluster_1samp_test(X, n_jobs=1, threshold=threshold,
connectivity=connectivity,
tail=1, n_permutations=n_permutations,
stat_fun=stat_fun, buffer_size=None)
# Let's put the cluster data in a readable format
ps_hat = np.zeros(width * width)
for cl, p in zip(clusters, p_values):
ps_hat[cl[1]] = -np.log10(p)
ps_hat = ps_hat.reshape((width, width))
T_obs_hat = T_obs_hat.reshape((width, width))
# Now the threshold-free cluster enhancement method (TFCE):
T_obs_tfce, clusters, p_values, H0 = \
spatio_temporal_cluster_1samp_test(X, n_jobs=1, threshold=threshold_tfce,
connectivity=connectivity,
tail=1, n_permutations=n_permutations)
T_obs_tfce = T_obs_tfce.reshape((width, width))
ps_tfce = -np.log10(p_values.reshape((width, width)))
# Now the TFCE with "hat" variance correction:
T_obs_tfce_hat, clusters, p_values, H0 = \
spatio_temporal_cluster_1samp_test(X, n_jobs=1, threshold=threshold_tfce,
connectivity=connectivity,
tail=1, n_permutations=n_permutations,
stat_fun=stat_fun, buffer_size=None)
T_obs_tfce_hat = T_obs_tfce_hat.reshape((width, width))
ps_tfce_hat = -np.log10(p_values.reshape((width, width)))
fig = plt.figure(facecolor='w')
x, y = np.mgrid[0:width, 0:width]
kwargs = dict(rstride=1, cstride=1, linewidth=0, cmap='Greens')
Ts = [T_obs, T_obs_hat, T_obs_tfce, T_obs_tfce_hat]
titles = ['T statistic', 'T with "hat"', 'TFCE statistic', 'TFCE w/"hat" stat']
for ii, (t, title) in enumerate(zip(Ts, titles)):
ax = fig.add_subplot(2, 4, ii + 1, projection='3d')
ax.plot_surface(x, y, t, **kwargs)
ax.set_xticks([])
ax.set_yticks([])
ax.set_title(title)
p_lims = [1.3, -np.log10(1.0 / n_permutations)]
pvals = [ps, ps_hat, ps_tfce, ps_tfce_hat]
titles = ['Standard clustering', 'Clust. w/"hat"',
'Clust. w/TFCE', 'Clust. w/TFCE+"hat"']
axs = []
for ii, (p, title) in enumerate(zip(pvals, titles)):
ax = fig.add_subplot(2, 4, 5 + ii)
plt.imshow(p, cmap='Purples', vmin=p_lims[0], vmax=p_lims[1])
ax.set_xticks([])
ax.set_yticks([])
ax.set_title(title)
axs.append(ax)
plt.tight_layout()
for ax in axs:
cbar = plt.colorbar(ax=ax, shrink=0.75, orientation='horizontal',
fraction=0.1, pad=0.025)
cbar.set_label('-log10(p)')
cbar.set_ticks(p_lims)
cbar.set_ticklabels(['%0.1f' % p for p in p_lims])
plt.show()
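# Extra summary (not part of the original example): report the strongest significance level
# reached by each correction method, i.e. the maximum of each -log10(p) map plotted above.
for method_name, p_map in zip(titles, pvals):
    print('%s: max -log10(p) = %0.2f' % (method_name, p_map.max()))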
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Construct simulated data
Step3: Do some statistics
Step4: Now let's do some clustering using the standard method.
Step5: Visualize results
|
14,465 | <ASSISTANT_TASK:>
Python Code:
help('learning_lab.03_management_interface')
from importlib import import_module
script = import_module('learning_lab.03_management_interface')
from inspect import getsource
print(getsource(script.main))
print(getsource(script.demonstrate))
run ../learning_lab/03_management_interface.py
from basics.odl_http import http_history
from basics.http import http_history_to_html
from IPython.core.display import HTML
HTML(http_history_to_html(http_history()))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementation
Step2: Execution
Step3: HTTP
|
14,466 | <ASSISTANT_TASK:>
Python Code:
# For using the same code in either Python 2 or 3
from __future__ import print_function
## Note: Python 2 users, use raw_input() to get player input. Python 3 users, use input()
from IPython.display import clear_output
def display_board(board):
clear_output()
print(' | |')
print(' ' + board[7] + ' | ' + board[8] + ' | ' + board[9])
print(' | |')
print('-----------')
print(' | |')
print(' ' + board[4] + ' | ' + board[5] + ' | ' + board[6])
print(' | |')
print('-----------')
print(' | |')
print(' ' + board[1] + ' | ' + board[2] + ' | ' + board[3])
print(' | |')
def player_input():
marker = ''
while not (marker == 'X' or marker == 'O'):
marker = raw_input('Player 1: Do you want to be X or O?').upper()
if marker == 'X':
return ('X', 'O')
else:
return ('O', 'X')
def place_marker(board, marker, position):
board[position] = marker
def win_check(board,mark):
return ((board[7] == mark and board[8] == mark and board[9] == mark) or # across the top
(board[4] == mark and board[5] == mark and board[6] == mark) or # across the middle
(board[1] == mark and board[2] == mark and board[3] == mark) or # across the bottom
            (board[7] == mark and board[4] == mark and board[1] == mark) or # down the left side
(board[8] == mark and board[5] == mark and board[2] == mark) or # down the middle
(board[9] == mark and board[6] == mark and board[3] == mark) or # down the right side
(board[7] == mark and board[5] == mark and board[3] == mark) or # diagonal
(board[9] == mark and board[5] == mark and board[1] == mark)) # diagonal
import random
def choose_first():
if random.randint(0, 1) == 0:
return 'Player 2'
else:
return 'Player 1'
def space_check(board, position):
return board[position] == ' '
def full_board_check(board):
for i in range(1,10):
if space_check(board, i):
return False
return True
def player_choice(board):
# Using strings because of raw_input
position = ' '
while position not in '1 2 3 4 5 6 7 8 9'.split() or not space_check(board, int(position)):
position = raw_input('Choose your next position: (1-9) ')
return int(position)
def replay():
return raw_input('Do you want to play again? Enter Yes or No: ').lower().startswith('y')
print('Welcome to Tic Tac Toe!')
while True:
# Reset the board
theBoard = [' '] * 10
player1_marker, player2_marker = player_input()
turn = choose_first()
print(turn + ' will go first.')
game_on = True
while game_on:
if turn == 'Player 1':
# Player1's turn.
display_board(theBoard)
position = player_choice(theBoard)
place_marker(theBoard, player1_marker, position)
if win_check(theBoard, player1_marker):
display_board(theBoard)
                print('Congratulations! You have won the game!')
game_on = False
else:
if full_board_check(theBoard):
display_board(theBoard)
print('The game is a draw!')
break
else:
turn = 'Player 2'
else:
# Player2's turn.
display_board(theBoard)
position = player_choice(theBoard)
place_marker(theBoard, player2_marker, position)
if win_check(theBoard, player2_marker):
display_board(theBoard)
print('Player 2 has won!')
game_on = False
else:
if full_board_check(theBoard):
display_board(theBoard)
print('The game is a tie!')
break
else:
turn = 'Player 1'
if not replay():
break
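# Optional quick checks of the helper functions (not part of the game loop above)
test_board = ['#', 'X', 'X', 'X', ' ', ' ', ' ', ' ', ' ', ' ']
print(win_check(test_board, 'X'))   # True: X holds positions 1-3 (the bottom row)
print(full_board_check(test_board)) # False: several squares are still open
print(space_check(test_board, 5))   # True: position 5 is free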
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Step 4
Step5: Step 5
Step6: Step 6
Step7: Step 7
Step8: Step 8
Step9: Step 9
Step10: Step 10
|
14,467 | <ASSISTANT_TASK:>
Python Code:
import sys
python_version = sys.version_info[0]
print("Python Version: ", python_version)
!pip3 install witwidget
import numpy as np
import pandas as pd
import witwidget
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget
# Download our Pandas dataframe and our test features and labels
!gsutil cp gs://mortgage_dataset_files/data.pkl .
!gsutil cp gs://mortgage_dataset_files/x_test.npy .
!gsutil cp gs://mortgage_dataset_files/y_test.npy .
features = pd.read_pickle("data.pkl")
features.head()
features.info()
x_test = np.load("x_test.npy")
y_test = np.load("y_test.npy")
print(x_test)
test_examples = np.hstack((x_test, y_test.reshape(-1, 1)))
# ******** DO NOT RUN THIS CELL ********
# TODO 1
PROJECT_ID = "YOUR_PROJECT_ID"
MODEL_NAME = "YOUR_MODEL_NAME"
VERSION_NAME = "YOUR_VERSION_NAME"
TARGET_FEATURE = "mortgage_status"
LABEL_VOCAB = ["denied", "approved"]
# TODO 1a
config_builder = (
WitConfigBuilder(
test_examples.tolist(), features.columns.tolist() + ["mortgage_status"]
)
.set_ai_platform_model(
PROJECT_ID,
MODEL_NAME,
VERSION_NAME,
adjust_prediction=adjust_prediction,
)
.set_target_feature(TARGET_FEATURE)
.set_label_vocab(LABEL_VOCAB)
)
# TODO 1b
def adjust_prediction(pred):
return [1 - pred, pred]
config_builder = (
WitConfigBuilder(
test_examples.tolist(), features.columns.tolist() + ["mortgage_status"]
)
.set_ai_platform_model(
"wit-caip-demos",
"xgb_mortgage",
"v1",
adjust_prediction=adjust_prediction,
)
.set_target_feature("mortgage_status")
.set_label_vocab(["denied", "approved"])
)
WitWidget(config_builder, height=800)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the mortgage test dataset
Step2: Preview the Features
Step3: Load the test features and labels into numpy arrays
Step4: Let's take a look at the contents of the 'x_test.npy' file. You can see the "array" structure.
Step5: Combine the features and labels into one array for the What-if Tool
Step6: Using the What-if Tool to interpret our model
Step7: Run this cell to load the WIT config builder. NOTE
|
14,468 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'giss-e2-1g', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
14,469 | <ASSISTANT_TASK:>
Python Code:
from theano.sandbox import cuda
%matplotlib inline
from imp import reload
import utils; reload(utils)
from utils import *
from __future__ import division, print_function
#path = "data/dogscats/sample/"
path = "data/dogscats/"
model_path = path + 'models/'
if not os.path.exists(model_path): os.mkdir(model_path)
import keras.backend as K
K.set_image_dim_ordering('th')
batch_size=64
model = vgg_ft(2)
model.load_weights(model_path+'finetune3.h5')
layers = model.layers
last_conv_idx = [index for index,layer in enumerate(layers)
if type(layer) is Convolution2D][-1]
last_conv_idx
layers[last_conv_idx]
conv_layers = layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
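# Optional sanity check (illustrative): the truncated model should end at the last
# convolutional layer, e.g. an output shape of (None, 512, 14, 14) for a VGG16-style
# network fed with 3x224x224 images.
# conv_model.output_shape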
# Dense layers - also known as fully connected or 'FC' layers
fc_layers = layers[last_conv_idx+1:]
batches = get_batches(path+'train', shuffle=False, batch_size=batch_size)
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
val_classes = val_batches.classes
trn_classes = batches.classes
val_labels = onehot(val_classes)
trn_labels = onehot(trn_classes)
val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample)
trn_features = conv_model.predict_generator(batches, batches.nb_sample)
save_array(model_path + 'train_convlayer_features.bc', trn_features)
save_array(model_path + 'valid_convlayer_features.bc', val_features)
trn_features = load_array(model_path+'train_convlayer_features.bc')
val_features = load_array(model_path+'valid_convlayer_features.bc')
trn_features.shape
# Copy the weights from the pre-trained model.
# NB: Since we're removing dropout, we want to halve the weights
def proc_wgts(layer): return [o/2 for o in layer.get_weights()]
# Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.00001, rho=0.7)
def get_fc_model():
model = Sequential([
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(2, activation='softmax')
])
for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2))
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
return model
fc_model = get_fc_model()
fc_model.fit(trn_features, trn_labels, nb_epoch=8,
batch_size=batch_size, validation_data=(val_features, val_labels))
fc_model.save_weights(model_path+'no_dropout.h5')
fc_model.load_weights(model_path+'no_dropout.h5')
# dim_ordering='tf' uses tensorflow dimension ordering,
# which is the same order as matplotlib uses for display.
# Therefore, when used just for display purposes, this is more convenient
gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
       height_shift_range=0.1, shear_range=0.15, zoom_range=0.1,
       channel_shift_range=10., horizontal_flip=True, dim_ordering='tf')
# Create a 'batch' of a single image
img = np.expand_dims(ndimage.imread('cat.jpg'),0)
# Request the generator to create batches from this image
aug_iter = gen.flow(img)
# Get eight examples of these augmented images
aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)]
# The original
plt.imshow(img[0])
# Augmented data
plots(aug_imgs, (20,7), 2)
# Ensure that we return to theano dimension ordering
K.set_image_dim_ordering('th')
gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True)
batches = get_batches(path+'train', gen, batch_size=batch_size)
# NB: We don't want to augment or shuffle the validation set
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
fc_model = get_fc_model()
for layer in conv_model.layers: layer.trainable = False
# Look how easy it is to connect two models together!
conv_model.add(fc_model)
conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.save_weights(model_path + 'aug1.h5')
conv_model.load_weights(model_path + 'aug1.h5')
conv_layers[-1].output_shape[1:]
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(p),
BatchNormalization(),
Dense(4096, activation='relu'),
Dropout(p),
BatchNormalization(),
Dense(1000, activation='softmax')
]
p=0.6
bn_model = Sequential(get_bn_layers(0.6))
bn_model.load_weights('/data/jhoward/ILSVRC2012_img/bn_do3_1.h5')
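# NB: 'bn_do3_1.h5' is assumed to be a dense+batchnorm top model pre-trained separately
# on ImageNet (hence the 1000-way softmax above); the absolute path is specific to the
# original author's machine, so you would need your own copy of the file to run this.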
def proc_wgts(layer, prev_p, new_p):
scal = (1-prev_p)/(1-new_p)
return [o*scal for o in layer.get_weights()]
for l in bn_model.layers:
if type(l)==Dense: l.set_weights(proc_wgts(l, 0.3, 0.6))
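# The (1-prev_p)/(1-new_p) factor compensates for the larger fraction of dropped units,
# keeping expected activations roughly unchanged when dropout is raised from 0.3 to 0.6;
# here the dense weights are scaled by (1-0.3)/(1-0.6) = 1.75.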
bn_model.pop()
for layer in bn_model.layers: layer.trainable=False
bn_model.add(Dense(2,activation='softmax'))
bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels))
bn_model.save_weights(model_path+'bn.h5')
bn_model.load_weights(model_path+'bn.h5')
bn_layers = get_bn_layers(0.6)
bn_layers.pop()
bn_layers.append(Dense(2,activation='softmax'))
final_model = Sequential(conv_layers)
for layer in final_model.layers: layer.trainable = False
for layer in bn_layers: final_model.add(layer)
for l1,l2 in zip(bn_model.layers, bn_layers):
l2.set_weights(l1.get_weights())
final_model.compile(optimizer=Adam(),
loss='categorical_crossentropy', metrics=['accuracy'])
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final1.h5')
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final2.h5')
final_model.optimizer.lr=0.001
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final3.h5')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Are we underfitting?
Step2: ...and load our fine-tuned weights.
Step3: We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the Flatten() layer. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer
Step4: Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way!
Step5: For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout.
Step6: And fit the model in the usual way
Step7: Reducing overfitting
Step8: Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).
Step9: As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches.
Step10: Adding data augmentation
Step11: When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.
Step12: Now we can compile, train, and save our model as usual - note that we use fit_generator() since we want to pull random images from the directories on every batch.
Step13: Batch normalization
|
14,470 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import oandapy
import configparser
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
config = configparser.ConfigParser()
config.read('../config/config_v1.ini')
account_id = config['oanda']['account_id']
api_key = config['oanda']['api_key']
oanda = oandapy.API(environment="practice",
access_token=api_key)
response = oanda.get_history(instrument="EUR_USD",
granularity="H1",
count = 5000)
res = pd.DataFrame(response['candles'])
res.columns = ['Close_Ask', 'Close_Bid', 'Complete',
'High_Ask', 'High_Bid', 'Low_Ask', 'Low_Bid',
'Open_Ask', 'Open_Bid', 'Time', 'Volume']
res = res.reindex_axis(['Time', 'Open_Bid', 'Open_Ask',
'High_Bid', 'High_Ask', 'Low_Bid',
'Low_Ask', 'Close_Bid', 'Close_Ask',
'Complete', 'Volume'],
axis=1)
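# Note: DataFrame.reindex_axis() has since been deprecated/removed in pandas;
# on newer versions the equivalent call is res.reindex(columns=[...]).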
df = res[['Time', 'Close_Bid', 'Close_Ask']].copy()
df['rtns'] = res['Close_Bid'].pct_change()
dtrain = df[0:4500].copy()
dtest = df[4500:].copy()
k = range(1,13)
h = 1
for oo in k:
dtrain['signal'] = np.sign(dtrain['rtns'].rolling(oo).sum())
dtrain['strategy_rtn'] = dtrain['signal'].shift(1) * dtrain['rtns']
res = dtrain['strategy_rtn'].dropna().sum()
print('{0:3} {1:>8.4f}'.format(oo, res))
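# Optional sketch (uses the same dtrain / k defined above): collect the in-sample
# performance per look-back window and pick the best one programmatically instead
# of reading it off the printout.
perf = {oo: (np.sign(dtrain['rtns'].rolling(oo).sum()).shift(1) * dtrain['rtns']).dropna().sum()
        for oo in k}
best_window = max(perf, key=perf.get)
print('best look-back window:', best_window)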
dtest['signal'] = np.sign(dtest['rtns'].rolling(11).sum())
dtest['result'] = dtest['signal'].shift(1) * dtest['rtns']
dtest['result'].dropna().cumsum().plot(figsize=(10,6));
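# Reversion variant below: take the opposite side of a short (3-period) momentum
# signal on the held-out data, betting that recent moves partially revert.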
dtest['signal'] = - np.sign(dtest['rtns'].rolling(3).sum())
dtest['result'] = dtest['signal'].shift(1) * dtest['rtns']
dtest['result'].dropna().cumsum().plot(figsize=(10,6));
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Experiment With the Training Data Set
Step2: Vectorized Backtesting With the Test Set - Momentum
Step3: Vectorized Backtesting With the Test Set - Reversion
|
14,471 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
def bayes_table(hypos, prior, likelihood):
    """Make a table showing a Bayesian update."""
table = pd.DataFrame(dict(prior=prior, likelihood=likelihood), index=hypos)
table['unnorm'] = table['prior'] * table['likelihood']
prob_data = table['unnorm'].sum()
table['posterior'] = table['unnorm'] / prob_data
return table
import numpy as np
def expo_sf(t, lam):
    """Survival function of the exponential distribution."""
return np.exp(-lam * t)
t = 0.9
mu = 1.1
lam = 1/mu
expo_sf(t, lam)
hypos = ['Right way', 'Wrong way']
prior = [1/2, 1/2]
likelihood = [expo_sf(t, lam), 1]
bayes_table(hypos, prior, likelihood)
from sympy import symbols, exp
t, lam, p, q, r = symbols('t lam p q r')
likelihood = [exp(-lam * t), 1]
likelihood
prior = [p, q]
table = bayes_table(hypos, prior, likelihood)
table
expr = table.loc['Right way', 'posterior']
expr.simplify()
def logistic(p, lam, t):
q = 1-p
return p / (p + q * np.exp(lam * t))
import matplotlib.pyplot as plt
ts = np.linspace(0, 4)
ps = logistic(p=0.5, lam=1/mu, t=ts)
plt.plot(ts, ps)
plt.xlabel("How long you've been trying (seconds)")
plt.ylabel("Probability the orientation is right");
from sympy import Eq, solve
eqn = Eq(expr, r)
eqn
solve(eqn, t)[0]
def wait_time(p, lam, r):
q = 1-p
prior_odds = p / q
posterior_odds = r / (1-r)
return np.log(prior_odds / posterior_odds) / lam
rs = np.linspace(0.05, 0.5)
ts = wait_time(p=0.5, lam=1/mu, r=rs)
plt.plot(rs, ts, color='C2')
plt.xlabel("Probability the orientation is right")
plt.ylabel("How long to keep trying (seconds)");
def simulate(correct, p, lam, r, flip, trace):
# figure out the maximum time we should try before flipping
wait = wait_time(p, lam, r)
# if we're on the correct side, see if we succeed before time's up
if correct:
t = np.random.exponential(1/lam)
if t < wait:
# if so, update and return the trace
return trace + [t]
# if time expired, add the wait time and flip time to the trace
# and make a recursive call to continue the simulation
return simulate(not correct, 1-r, lam, r, flip, trace + [wait, flip])
simulate(correct=True, p=0.5, lam=1/mu, r=0.2, flip=0.1, trace=[])
simulate(correct=False, p=0.5, lam=1/mu, r=0.2, flip=0.1, trace=[])
def run_simulations(lam, r, flip, iters=20000, flag=None):
res = []
for i in range(iters):
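        # alternate the true starting orientation between runs unless a fixed flag is given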
correct = i%2 if flag is None else flag
trace = simulate(correct, 0.5, lam, r, flip, [])
res.append((len(trace), sum(trace)))
return np.transpose(res)
lengths, totals = run_simulations(lam=1/mu, r=0.25, flip=0.1)
totals.mean()
rs = np.linspace(0.15, 0.4, 21)
rs
np.random.seed(17)
res = []
for r in rs:
lengths, totals = run_simulations(lam=1/mu, r=r, flip=0.1)
res.append((r, totals.mean()))
from statsmodels.nonparametric.smoothers_lowess import lowess
def make_lowess(series):
    """Use LOWESS to compute a smooth line.
    series: pd.Series
    returns: pd.Series
    """
endog = series.values
exog = series.index.values
smooth = lowess(endog, exog)
index, data = np.transpose(smooth)
return pd.Series(data, index=index)
def plot_series_lowess(series, color):
    """Plots a series of data points and a smooth line.
    series: pd.Series
    color: string or tuple
    """
series.plot(lw=0, marker='o', color=color, alpha=0.5)
smooth = make_lowess(series)
smooth.plot(label='_', color=color)
rs, ts = np.transpose(res)
series = pd.Series(ts, rs)
plot_series_lowess(series, 'C1')
plt.xlabel("Threshold probability where you flip (r)")
plt.ylabel("Average total duration (seconds)");
r_opt = 0.3
wait_time(p=0.5, lam=1/mu, r=r_opt)
wait_time(p=1-r_opt, lam=1/mu, r=r_opt)
lengths1, totals1 = run_simulations(lam=1/mu, r=r_opt, flip=0.1, flag=True)
lengths2, totals2 = run_simulations(lam=1/mu, r=r_opt, flip=0.1, flag=False)
try:
import empiricaldist
except ImportError:
!pip install empiricaldist
from empiricaldist import Cdf
Cdf.from_seq(totals1).plot(lw=2, label='Right the first time')
Cdf.from_seq(totals2).plot(lw=2, label='Wrong the first time')
plt.xlabel('Total time to connect (seconds)')
plt.ylabel('CDF')
plt.title('Distribution of total time to connect')
plt.legend();
totals1.mean(), totals2.mean()
np.append(totals1, totals2).mean()
from empiricaldist import Pmf
flips1 = (lengths1-1) // 2
pmf1 = Pmf.from_seq(flips1) / 2
pmf1.bar(alpha=0.7, label='Right the first time')
flips2 = (lengths2-1) // 2
pmf2 = Pmf.from_seq(flips2) / 2
pmf2.bar(alpha=0.7, label='Right the second time')
plt.xlabel('How many times you have to flip')
plt.ylabel('PMF')
plt.title('Distribution of number of flips')
plt.legend();
lengths = np.append(lengths1, lengths2)
flips = (lengths-1) // 2
Pmf.from_seq(flips).head(5)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Flipping USB Connectors
Step3: Now suppose that the prior probability is 0.5 that the orientation of the connector is correct, and you have been trying for 0.9 seconds.
Step4: We can use this function to compute the likelihood of trying for 0.9 seconds or more, given an exponential distribution with mean 1.1.
Step5: The result is the likelihood of the data, given that the orientation of the connector is correct.
Step6: And here is the likelihood of the data for each hypothesis
Step7: Putting it together, here's the Bayes table.
Step8: After 0.9 seconds, the probability is about 69% that the orientation of the connector is wrong, so you might want to think about trying the other side.
Step9: Here's the likelihood again, using the symbols.
Step10: And here's the Bayes table, using $p$ and $q$ for the prior probabilities of the hypotheses.
Step11: From the table I'll select the posterior probability that the orientation is correct.
Step12: You might recognize this as a form of the logistic function; we can compute it like this
Step13: Let's see what that looks like for a range of values of t, assuming that the prior probability is p=0.5.
Step14: After a few seconds of fiddling, you should be reasonably convinced that the orientation is wrong.
Step15: And here's the solution for t in terms of p, q, r, and lam.
Step16: And here's how we can express this solution in terms of the prior and posterior odds.
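For example, with p=0.5 and lam=1/1.1, a threshold of r=0.25 gives prior odds of 1 and posterior odds of 1/3, so the wait time is log(3) * 1.1, roughly 1.2 seconds, before flipping becomes the better bet.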
Step17: Let's see what that looks like for a range of values of r, assuming that the prior probability is p=0.5.
Step18: When the threshold is low, we have to wait a few seconds to reach it. As the threshold increases, the time to reach it decreases.
Step19: Here's a test run, starting on the correct side.
Step20: And here's a run where we start on the wrong side.
Step21: The following function runs the simulation many times with initial probability p=0.5, starting in the right orientation half the time.
Step22: Here's the average total duration with threshold probability r=0.25.
Step25: With this threshold, it takes about 2 seconds to connect, on average.
Step26: Here's what the results look like.
Step27: The optimal value of r is close to 0.3. With that threshold we can see how long we should try on the first side, starting with prior probability p=0.5.
Step28: With the given values of lam and flip, it turns out the optimal time to wait is about 0.9 seconds.
Step29: How many flips?
Step30: Here's the distribution of total time, represented as a CDF.
Step31: The average is about 2.4 seconds, but occasionally it takes much longer!
|
14,472 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-1', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
14,473 | <ASSISTANT_TASK:>
Python Code:
mu = [2, 3]
cov = [[1, 0], [0, 1]]
rv = sp.stats.multivariate_normal(mu, cov)
xx = np.linspace(0, 4, 120)
yy = np.linspace(1, 5, 150)
XX, YY = np.meshgrid(xx, yy)
plt.grid(False)
plt.contourf(XX, YY, rv.pdf(np.dstack([XX, YY])))
plt.axis("equal")
plt.show()
mu = [2, 3]
cov = [[2, -1],[2, 4]]
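# note: a covariance matrix must be symmetric (and positive semi-definite); as written this one
# is not symmetric, so something like [[2, -1], [-1, 4]] or [[2, 2], [2, 4]] was presumably intended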
rv = sp.stats.multivariate_normal(mu, cov)
xx = np.linspace(0, 4, 120)
yy = np.linspace(1, 5, 150)
XX, YY = np.meshgrid(xx, yy)
plt.grid(False)
plt.contourf(XX, YY, rv.pdf(np.dstack([XX, YY])))
plt.axis("equal")
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 경우 2
|
14,474 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from ecell4.prelude import *
D = 1
radius = 0.005
N_A = 60
U = 0.5
ka_factor = 0.1 # 0.1 is for reaction-limited
N = 20 # a number of samples
import numpy
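# kD : diffusion-limited (Smoluchowski) encounter rate, 4*pi * (contact radius 2*radius) * (relative diffusion 2*D)
# ka : intrinsic association rate, set as a fraction (ka_factor) of kD
# kd : intrinsic dissociation rate chosen so that the dissociated fraction at steady state is U
# kon, koff : effective (macroscopic) rates combining the intrinsic reaction and diffusion in series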
kD = 4 * numpy.pi * (radius * 2) * (D * 2)
ka = kD * ka_factor
kd = ka * N_A * U * U / (1 - U)
kon = ka * kD / (ka + kD)
koff = kd * kon / ka
y0 = {'A': N_A, 'B': N_A}
duration = 3
opt_kwargs = {'legend': True}
with species_attributes():
A | B | C | {'radius': radius, 'D': D}
with reaction_rules():
A + B == C | (kon, koff)
m = get_model()
ret1 = run_simulation(duration, y0=y0, model=m)
ret1.plot(**opt_kwargs)
ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver='gillespie', repeat=N)
ret2.plot('o', ret1, '-', **opt_kwargs)
ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('meso', Integer3(4, 4, 4)), repeat=N)
ret2.plot('o', ret1, '-', **opt_kwargs)
with species_attributes():
A | B | C | {'radius': radius, 'D': D}
with reaction_rules():
A + B == C | (ka, kd)
m = get_model()
ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('spatiocyte', radius), repeat=N)
ret2.plot('o', ret1, '-', **opt_kwargs)
ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('egfrd', Integer3(4, 4, 4)), repeat=N)
ret2.plot('o', ret1, '-', **opt_kwargs)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Parameters are given as follows. D, radius, N_A, U, and ka_factor mean a diffusion constant, a radius of molecules, an initial number of molecules of A and B, a ratio of dissociated form of A at the steady state, and a ratio between an intrinsic association rate and collision rate defined as ka and kD below, respectively. Dimensions of length and time are assumed to be micro-meter and second.
Step2: Calculating optimal reaction rates. ka and kd are intrinsic, kon and koff are effective reaction rates.
Step3: Start with no C molecules, and simulate 3 seconds.
Step4: Make a model with effective rates. This model is for macroscopic simulation algorithms.
Step5: Save a result with ode as ret1, and plot it
Step6: Simulating with gillespie (Bars represent standard error of the mean)
Step7: Simulating with meso
Step8: Make a model with intrinsic rates. This model is for microscopic (particle) simulation algorithms.
Step9: Simulating with spatiocyte. voxel_radius is given as radius
Step10: Simulating with egfrd
|
14,475 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function
import os
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
PROJ_ROOT = os.path.join(os.pardir, os.pardir)
## Try adding parameter index=0
pump_data_path = os.path.join(PROJ_ROOT,
"data",
"raw",
"pumps_train_values.csv")
df = pd.read_csv(pump_data_path, index=0)
df.head(1)
pd.read_csv?
# Tab completion for parsing dates in the date_recoreded column
# Shift tab for documentation
df = pd.read_csv("../data/water-pumps.csv", index_col=0)
df.head(1)
df.describe()
## Paste for 'construction_year' and plot
## Paste for 'gps_height' and plot
plot_data = df['amount_tsh']
sns.kdeplot(plot_data, bw=1000)
plt.show()
def kde_plot(dataframe, variable, upper=None, lower=None, bw=0.1):
    """Plots a density plot for a variable with optional upper and
    lower bounds on the data (inclusive).
    """
plot_data = dataframe[variable]
if upper is not None:
plot_data = plot_data[plot_data <= upper]
if lower is not None:
plot_data = plot_data[plot_data >= lower]
sns.kdeplot(plot_data, bw=bw)
plt.show()
kde_plot(df, 'amount_tsh', bw=1000, lower=0)
kde_plot(df, 'construction_year', bw=1, lower=1000, upper=2016)
kde_plot(df, 'gps_height', bw=100)
# add local python functions
import sys
# add the 'src' directory as one where we can import modules
src_dir = os.path.join(PROJ_ROOT, "src")
sys.path.append(src_dir)
# import my method from the source code
from features.build_features import remove_invalid_data
df = remove_invalid_data(pump_data_path)
df.shape
# TRY ADDING print("lalalala") to the method
df = remove_invalid_data(pump_data_path)
# Load the "autoreload" extension
%load_ext autoreload
# always reload modules marked with "%aimport"
%autoreload 1
import os
import sys
# add the 'src' directory as one where we can import modules
src_dir = os.path.join(os.getcwd(), os.pardir, 'src')
sys.path.append(src_dir)
# import my method from the source code
%aimport features.build_features
from features.build_features import remove_invalid_data
df = remove_invalid_data(pump_data_path)
df.head()
kde_plot(df,
'date_recorded',
upper=pd.to_datetime('2017-01-01'),
lower=pd.to_datetime('1900-01-01'))
%debug
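# %debug opens an interactive post-mortem debugger at the frame where the last uncaught exception was raised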
# "1" turns pdb on, "0" turns pdb off
%pdb 1
kde_plot(df, 'date_recorded')
# turn off debugger
%pdb 0
import numpy as np
from mcmc.hamiltonian import hamiltonian, run_diagnostics
f = lambda X: np.exp(-100*(np.sqrt(X[:,1]**2 + X[:,0]**2)- 1)**2 + (X[:,0]-1)**3 - X[:,1] - 5)
# potential and kinetic energies
U = lambda q: -np.log(f(q))
K = lambda p: p.dot(p.T) / 2
# gradient of the potential energy
def grad_U(X):
x, y = X[0,:]
xy_sqrt = np.sqrt(y**2 + x**2)
mid_term = 100*2*(xy_sqrt - 1)
grad_x = 3*((x-1)**2) - mid_term * ((x) / (xy_sqrt))
grad_y = -1 - mid_term * ((y) / (xy_sqrt))
return -1*np.array([grad_x, grad_y]).reshape(-1, 2)
ham_samples, H = hamiltonian(2500, U, K, grad_U)
run_diagnostics(ham_samples)
%prun ham_samples, H = hamiltonian(2500, U, K, grad_U)
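# %prun runs the statement under cProfile and prints a per-function breakdown of where the time is spent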
run_diagnostics(ham_samples)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 3.1 No more docs-guessing
Step3: 3.2 No more copy-pasta
Step4: 3.3 No more copy-pasta between notebooks
Step5: Restart the kernel, let's try this again....
Step6: 3.4 I'm too good! Now this code is useful to other projects!
Step7: #lifehack
|
14,476 | <ASSISTANT_TASK:>
Python Code:
from IPython.display import display, HTML
display(HTML('''<img src="image1.png",width=800,height=600>'''))
import numpy as np # numerical libraries
import pandas as pd # for data analysis
import matplotlib as mpl # a big library with plotting functionality
import matplotlib.pyplot as plt # a subset of matplotlib with most of the useful tools
import IPython as IP
%matplotlib inline
import pdb
from sklearn import linear_model as lm
odds= pd.read_pickle('../data/pickle_files/odds.pkl')
matches= pd.read_pickle('../data/pickle_files/matches.pkl')
data = pd.merge(matches,odds[['PSW','PSL','key_o']].dropna(axis=0,subset=["PSW"]),how='inner',on='key_o')
data = data[~data.winner_rank_points.isnull() & ~data.loser_rank_points.isnull()]
IP.display.display(data[0:3])
data['year'] = data['tourney_date'].map(lambda x: x.year)
training = data[data.year.isin([2010,2011,2012])]
validation = data[data.year.isin([2013,2014])]
test = data[data.year.isin([2015,2016])]
# consider rank difference to be positive if winner higher ranked, otherwise negative
rank_diff = (training['winner_rank_points'] - training['loser_rank_points']).values
# if higher ranked player won, raw rank was a successful predictor
y = (rank_diff > 0)*1
# predictions done *before* the match, so algorithm operates on absolute value of rank difference
X = np.abs(rank_diff)
# for numerical well-behavedness, we scale the data (no centering, since only the magnitude of the rank difference is used)
X=(X/np.std(X,axis=0))
lr = lm.LogisticRegression(C=1., solver='lbfgs')
lr.fit(X.reshape(len(X),-1),y*1)
cofs = lr.coef_[0]
# define figure and axes
fig = plt.figure(figsize=(15,5))
ax0 = fig.add_subplot(131)
ax1 = fig.add_subplot(132)
ax2 = fig.add_subplot(133)
# figure A: predicted probabilities vs. empirical probs
hist, bin_edges = np.histogram(X,bins=100)
p = [np.sum(y[np.where((X>=bin_edges[i]) & (X<bin_edges[i+1]))[0]])/np.max([hist[i],1]) for i in np.arange(len(bin_edges)-1)]
bar_pos = np.arange(len(p))
bar_width = np.diff(bin_edges)
ax0.bar(bin_edges[0:-1], p, width=bar_width, align='edge', alpha=0.5)
r = np.arange(X.min(),X.max(),.1)
s = 1/(1+np.exp(-cofs[0]*r))
ax0.plot(r,s,'r')
ax0.set_xlabel('Scaled rank difference',fontsize=12)
ax0.set_ylabel('Probability that higher ranked wins',fontsize=12)
ax0.set_title('Logistic fit to empirical probabilities',fontsize=12)
ax0.legend(['Logistic probability curve','Empirical probability hist.'])
# figure B: probabilities predicted by odds market
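# 1/decimal-odds gives the implied win probabilities; these still include the bookmaker's margin,
# so ProbW + ProbL sums to slightly more than 1 for each match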
ProbW = 1/training.PSW
ProbL = 1/training.PSL
idx = (training.winner_rank_points>training.loser_rank_points)
odds_prob=np.where(idx,ProbW,ProbL)
t = pd.DataFrame({'X':X,'odds_prob':odds_prob})
ts = t.sort_values('X')
ax1.plot(ts['X'],ts['odds_prob'],'.b')
ax1.plot(r,s,'r')
ax1.set_xlabel('Scaled rank difference',fontsize=12)
ax1.set_ylabel('Probability higher ranked wins',fontsize=12)
ax1.set_title('Probabilities implied by odds market.',fontsize=12)
ax1.legend(['Odds market probabilities','Logistic probability curve'])
# Fig C: variance in odds probabilities as a function of rank difference
x_odds = ts['X'].values.reshape(len(ts),-1)
y_odds = ts['odds_prob'].values
hist, bin_edges = np.histogram(x_odds,bins=10)
stds = [np.std(y_odds[(x_odds[:,0]>=bin_edges[i]) & (x_odds[:,0]<bin_edges[i+1])]) for i in np.arange(len(bin_edges)-1)]  # bin on x_odds (sorted) so the mask stays aligned with y_odds
reg = lm.LinearRegression()
reg.fit (bin_edges[0:-1].reshape(10,1),stds)
yv=reg.predict(bin_edges[0:-1].reshape(10,1))
ax2.plot(bin_edges[0:-1],stds,'*b')
ax2.plot(bin_edges[0:-1],yv,'r')
ax2.set_xlabel('Scaled rank difference',fontsize=12)
ax2.set_ylabel('Stdev of market prob.',fontsize=12)
ax2.set_title('Trends in stdev of implied probabilities',fontsize=12)
ax2.legend(['Stdev of binned market-probs.','Regression line'])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Description
Step2: Load data and take a peek at it.
Step3: Separate data into training, validation, and test sets. (This division is not used for the plot above, but will be critical in assessing the performance of our learning algorithms.)
Step4: Define each match as a 1 or a 0, depending on whether the higher ranked player won.
Step5: Perform 1-D logistic regression on training data.
Step6: Produce the plots
|
14,477 | <ASSISTANT_TASK:>
Python Code:
import psycopg2
from configparser import ConfigParser
from pandas import DataFrame
from collections import Counter
cfg = ConfigParser()
cfg.read("db.cfg")
knst = psycopg2.connect(host=cfg['db']['host'], port=cfg['db']['port'],
database=cfg['db']['db'], user=cfg['db']['user'],
password=cfg['db']['pwd'])
knst.set_client_encoding('UTF-8')
cur = knst.cursor()
sql = """
SELECT pr.id, f.name_nl, pe.full_name
FROM production.productions pr
JOIN production.seasons s
ON pr.season_id = s.id
JOIN production.relationships r
ON pr.id = r.production_id
JOIN production.people pe
ON r.person_id = pe.id
JOIN production.functions f
ON r.function_id = f.id
WHERE s.start_year >= 2010
"""
cur.execute(sql)
os = cur.fetchall()
df = DataFrame([o for o in os], columns=["productie_id", "functie", "persoon"])
def get_counted_functions_for_person(persoon):
functies = Counter(df[df["persoon"] == persoon]["functie"]).most_common()
totaal_aantal_functies = sum([f[1] for f in functies])
functies_percentage = [(f[0], f[1] / float(totaal_aantal_functies)) for f in functies]
return functies_percentage, totaal_aantal_functies
lines = []
for persoon in set(df["persoon"].values):
functies, totaal = get_counted_functions_for_person(persoon)
for f in functies:
lijn = [persoon, totaal, f[0], f[1]]
lines.append(lijn)
df_functies = DataFrame(lines, columns=["persoon", "totaal aantal functies", "functie", "percentage"])
df_functies.head()
df_functies[df_functies["totaal aantal functies"] == max(df_functies["totaal aantal functies"])]
df_functies.to_excel("functies.xlsx")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We need to connect to the database.
Step3: The SQL to fetch the data is not that difficult
Step4: Fetch the data and put it into a pandas dataframe.
Step5: A small function to retrieve a person's functions and the total count.
Step6: Loop over all persons.
Step7: The result looks like this
Step8: As an example, the person with the largest number of functions.
Step9: Want to analyse further yourself? Download the Excel file
|
14,478 | <ASSISTANT_TASK:>
Python Code:
# Initialization
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
%matplotlib inline
plt.style.use('fivethirtyeight')
def logprofile(z,ust):
''' Return u as function of z(array) and u_star
Uses Charnock relation for wind-wave interactions'''
z0 = 0.011/9.81*ust # Charnock
return ust/0.4*np.log(z/z0)
# Create artificial wind profile
ust = 0.25
z = np.arange(0,305,5)+5
u = logprofile(z,ust)+np.random.normal(0,.1,len(z))
# Create an ellipse that visualizes the rotor disk in the figure
rotor = Ellipse(xy=(6.5,100), width=.5, height=150, angle=0,alpha=.3)
# Create the figure
fig,ax = plt.subplots()
ax.plot(u,z)
ax.add_artist(rotor)
ax.annotate('Rotor plane',(6.5,100),(6.7,200),arrowprops=dict(facecolor='black',width=2,headwidth=8),fontsize=16)
ax.fill_between(ax.get_xlim(),175,25,color='g',alpha=.2)
ax.set_xlabel('Wind speed (m/s)')
ax.set_ylabel('Altitude (m)')
plt.show()
def power(rho,r,u):
'''Return total wind power in MW as function of air density, rotor radius and wind speed at hub height'''
return .5 * rho * np.pi * r**2 * u**3 / 1e6
r = 75.
u = 8.
rho = 1.2
print 'Power estimate: %.2f MW'%power(rho,r,u)
for u in [7,8,9]:
print 'Wind speed: %.1f m/s, wind power: %.2f MW'%(u,power(rho,r,u))
for ust in [.2,.25,.3]:
u = logprofile(100,ust)
p = power(rho,r,u)
print 'u*: %.2f, wind speed: %.2f m/s, wind power: %.2f MW'%(ust,u,p)
# Vertical levels
dz = 1
z = np.arange(0,300+dz,dz)+dz
# Turbine characteristics
r = 75 # rotor radius
h = 100 # hub height
x = np.where(np.abs(z-h)<r,np.sqrt(r**2-(z-h)**2),0) # the error is only due to the construction of `np.where`
# Logprofile
ust = 0.25
u = logprofile(z,ust)+np.random.normal(0,.1,len(z))
# Energy
rho = 1.2
es = sum(u**3*x)*rho*dz*1e-6 #sophisticated method
eb = .5 * rho * np.pi * r**2 * u[h]**3 / 1e6 #basic method
print 'Wind power with basic formula: %.2f MW'%eb
print 'Wind power with sophisticated formula: %.2f MW'%es
print 'Difference: %.2f MW'%(es-eb)
# Vertical levels
dz = 1
z = np.arange(0,300+dz,dz)+dz
# Turbine characteristics
r = 75 # rotor radius
h = 100 # hub height
x = np.where(np.abs(z-h)<r,np.sqrt(r**2-(z-h)**2),0) # the error is only due to the construction of `np.where`
# Store the output in these lists:
output_basic = []
output_sophisticated = []
# Perform 10000 load calculations
for i in range(10000):
# Logprofile
ust = 0.25
u = logprofile(z,ust)+np.random.normal(0,.1,len(z))
# Energy
rho = 1.2
es = sum(u**3*x)*rho*dz*1e-6
output_sophisticated.append(es)
eb = .5 * rho * np.pi * r**2 * u[h]**3 / 1e6
output_basic.append(eb)
# Some statistics
output_sophisticated = np.asarray(output_sophisticated)
print 'Sophisticated method:'
print 'Mean power: %.2f MW'%output_sophisticated.mean()
print 'Standard deviation: %.4f MW'%output_sophisticated.std()
output_basic = np.asarray(output_basic)
print '\nBasic method:'
print 'Mean power: %.2f MW'%output_basic.mean()
print 'Standard deviation: %.4f MW'%output_basic.std()
errors = output_basic-output_sophisticated
print '\nError statistics:'
print 'Mean absolute error: %.2f MW'%(np.mean(np.abs(errors)))
print 'Root mean square error: %.2f MW' %(np.sqrt(np.mean(errors*errors)))
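# Hedged extra (not in the original notebook): a quick uncertainty band from
# the Monte Carlo sample computed above, using the arrays already in scope.
low, high = np.percentile(output_sophisticated, [2.5, 97.5])
print('95%% interval (sophisticated): %.2f - %.2f MW' % (low, high))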
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic power estimate
Step2: With this simple set-up it is easy to see that a small difference in wind speed translates to a large difference in wind energy
Step3: and that an accurate estimate of $u_*$ is essential
Step4: A more sophisticated approach
Step5: A simple uncertainty analysis
|
14,479 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import openpathsampling as paths
import openpathsampling.engines.openmm as peng_omm
from simtk.openmm import app
import simtk.openmm as mm
import simtk.unit as unit
from openmmtools.integrators import VVVRIntegrator
import mdtraj as md
import numpy as np
# this cell is all OpenMM specific
forcefield = app.ForceField('amber96.xml', 'tip3p.xml')
pdb = app.PDBFile("../resources/AD_initial_frame.pdb")
system = forcefield.createSystem(
pdb.topology,
nonbondedMethod=app.PME,
nonbondedCutoff=1.0*unit.nanometers,
constraints=app.HBonds,
rigidWater=True,
ewaldErrorTolerance=0.0005
)
hi_T_integrator = VVVRIntegrator(
500*unit.kelvin,
1.0/unit.picoseconds,
2.0*unit.femtoseconds)
hi_T_integrator.setConstraintTolerance(0.00001)
template = peng_omm.snapshot_from_pdb("../resources/AD_initial_frame.pdb")
openmm_properties = {'OpenCLPrecision': 'mixed'}
engine_options = {
'n_frames_max': 2000,
'nsteps_per_frame': 10
}
hi_T_engine = peng_omm.Engine(
template.topology,
system,
hi_T_integrator,
openmm_properties=openmm_properties,
options=engine_options
)
hi_T_engine.name = '500K'
hi_T_engine.current_snapshot = template
hi_T_engine.minimize()
# define the CVs
psi = paths.MDTrajFunctionCV("psi", md.compute_dihedrals, template.topology, indices=[[6,8,14,16]])
phi = paths.MDTrajFunctionCV("phi", md.compute_dihedrals, template.topology, indices=[[4,6,8,14]])
# define the states
deg = 180.0/np.pi
C_7eq = (paths.PeriodicCVDefinedVolume(phi, lambda_min=-180/deg, lambda_max=0/deg,
period_min=-np.pi, period_max=np.pi) &
paths.PeriodicCVDefinedVolume(psi, lambda_min=100/deg, lambda_max=200/deg,
period_min=-np.pi, period_max=np.pi)
).named("C_7eq")
# similarly, without bothering with the labels:
alpha_R = (paths.PeriodicCVDefinedVolume(phi, -180/deg, 0/deg, -np.pi, np.pi) &
paths.PeriodicCVDefinedVolume(psi, -100/deg, 0/deg, -np.pi, np.pi)).named("alpha_R")
init_traj_ensemble = paths.AllOutXEnsemble(C_7eq) | paths.AllOutXEnsemble(alpha_R)
# generate trajectory that includes frame in both states
trajectory = hi_T_engine.generate(hi_T_engine.current_snapshot, [init_traj_ensemble])
# create a network so we can use its ensemble to obtain an initial trajectory
# use all-to-all because we don't care if initial traj is A->B or B->A: it can be reversed
tmp_network = paths.TPSNetwork.from_states_all_to_all([C_7eq, alpha_R])
# take the subtrajectory matching the ensemble (only one ensemble, only one subtraj)
subtrajectories = []
for ens in tmp_network.analysis_ensembles:
subtrajectories += ens.split(trajectory)
print subtrajectories
plt.plot(phi(trajectory), psi(trajectory), 'k.-')
plt.plot(phi(subtrajectories[0]), psi(subtrajectories[0]), 'r')
integrator = VVVRIntegrator(
300*unit.kelvin,
1.0/unit.picoseconds,
2.0*unit.femtoseconds
)
integrator.setConstraintTolerance(0.00001)
engine = peng_omm.Engine(
template.topology,
system,
integrator,
openmm_properties=openmm_properties,
options=engine_options
)
engine.name = '300K'
network = paths.TPSNetwork(initial_states=C_7eq, final_states=alpha_R)
scheme = paths.OneWayShootingMoveScheme(network,
selector=paths.UniformSelector(),
engine=engine)
# make subtrajectories into initial conditions (trajectories become a sampleset)
initial_conditions = scheme.initial_conditions_from_trajectories(subtrajectories)
# check that initial conditions are valid and complete (raise AssertionError otherwise)
scheme.assert_initial_conditions(initial_conditions)
sampler = paths.PathSampling(storage=paths.Storage("tps_nc_files/alanine_dipeptide_tps_equil.nc", "w", template),
move_scheme=scheme,
sample_set=initial_conditions)
sampler.live_visualizer = paths.StepVisualizer2D(network, phi, psi, [-3.14, 3.14], [-3.14, 3.14])
# initially, these trajectories are correlated (actually, identical)
# once decorrelated, we have a (somewhat) reasonable 300K trajectory
initial_conditions[0].trajectory.is_correlated(sampler.sample_set[0].trajectory)
# this is a trick to take the first decorrelated trajectory
while (initial_conditions[0].trajectory.is_correlated(sampler.sample_set[0].trajectory)):
sampler.run(1)
# run an extra 10 to decorrelate a little further
sampler.run(10)
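# Hedged visual check (not part of the original notebook): show the
# decorrelated 300 K path in the phi/psi plane, reusing the CVs defined above.
final_traj = sampler.sample_set[0].trajectory
plt.plot(phi(final_traj), psi(final_traj), 'b.-')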
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setting up the engine
Step2: The storage file will need a template snapshot. In addition, the OPS OpenMM-based Engine has a few properties and options that are set by these dictionaries.
Step3: Defining states
Step4: Getting a first trajectory
Step5: Plotting the trajectory
Step6: Setting up another engine
Step7: Equilibrate TPS
|
14,480 | <ASSISTANT_TASK:>
Python Code:
Image('./res/fig3_1.png')
# Transition Graph
Image('./res/ex3_3.png')
# Example 3.5
from scipy.signal import convolve2d
reward_matrix = np.zeros((5, 5))
# kernel
kernel = np.array([[0, 1, 0],
[1, 0, 1],
[0, 1, 0]])
iteration_nums = 100
for _ in range(iteration_nums):
reward = convolve2d(reward_matrix, kernel, mode='same', boundary='fill', fillvalue=-1)
reward /= 4.0
# A -> A'
reward[0, 1] = 10 + reward[-1, 1]
# B -> B'
reward[0, -2] = 5 + reward[2, -2]
reward_matrix = reward
pd.DataFrame(reward_matrix)
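# Hedged variant (assumes the same in-place update, kernel, and names as the
# cell above): iterate until the value estimates stop changing instead of
# using a fixed number of sweeps.
values = np.zeros((5, 5))
tol = 1e-6
for sweep in range(10000):
    new_values = convolve2d(values, kernel, mode='same', boundary='fill', fillvalue=-1) / 4.0
    new_values[0, 1] = 10 + new_values[-1, 1]   # A -> A'
    new_values[0, -2] = 5 + new_values[2, -2]   # B -> B'
    delta = np.abs(new_values - values).max()
    values = new_values
    if delta < tol:
        break
pd.DataFrame(values)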
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: finite MDP
Step2: Exercise 3.4
|
14,481 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# You can ignore the pink warning that appears
import itertools
import math
import nltk
import string
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram
# code example from Building Machine Learning Systems with Python (Richert & Coelho)
# - modified slightly by Lynn
import math
def tfidf(t, d, D):
tf = float(d.count(t)) / sum(d.count(w) for w in set(d)) # normalized
# Note his version doesn't use +1 in denominator.
idf = math.log( float(len(D)) / (len([doc for doc in D if t in doc])))
return tf * idf
a, abb, abc = ["a"], ["a", "b", "b"], ["a", "b", "c"] # try adding another c to the last doc!
D = [a, abb, abc]
print(tfidf("a", a, D)) # a is in all of them
print(tfidf("a", abc, D)) # a is in all of them
print(tfidf("b", abc, D)) # b occurs only once here, but in 2 docs
print(tfidf("b", abb, D)) # b occurs more frequently in this doc
print(tfidf("c", abc, D)) # c is unique in the doc set
filelist = !ls ../data/movie_reviews/positive/*
filelist
from nltk.corpus import stopwords
import collections
def clean_tokens(tokens, stopwords):
import string
    """Lowercases, takes out punct and stopwords and short strings"""
return [token.lower() for token in tokens if (token not in string.punctuation)and (token.lower() not in stopwords) and len(token) > 2]
def makeText(filename, stopwords):
from nltk import Text
with open(filename) as handle:
text = handle.read()
return Text(clean_tokens(nltk.word_tokenize(text.decode('ascii', 'ignore')), stopwords))
def makeTextCollection(files, stopwords=stopwords.words('english')):
from nltk import TextCollection
texts= [makeText(filename, stopwords) for filename in files]
collection = TextCollection(texts)
return collection, texts
# use the data for the vocab in a single doc for a wordcloud, for instance
def compute_tfidf_by_doc(coll, texts, filenames):
tfidf_by_doc = collections.defaultdict(list)
for i, text in enumerate(texts):
for word in set(text.tokens): # just use the words in this text
tfidfscore = coll.tf_idf(word, text)
tf = coll.tf(word, text) # is actually count / len(text)
count = text.count(word)
if tfidfscore:
tfidf_by_doc[filenames[i]].append({
"word": word,
"tfidf": tfidfscore,
"tf": tf,
"count": count
})
return tfidf_by_doc
# We need to make the text collection, then use it to compute the tf-idf for the words in the docs.
res = makeTextCollection(filelist)
coll = res[0]
texts = res[1]
coll.tf_idf("woman", texts[0])
tfidfs = compute_tfidf_by_doc(coll, texts, filelist)
tfidfs[tfidfs.keys()[0]] # the first filename is the first key... it contains a list of words and scores
import json
jsonified = json.dumps(tfidfs)
with open('../outputdata/pos_movies_tfidf.json', 'w') as handle:
handle.write(jsonified)
!ls -al ../outputdata/pos_movies_tfidf.json
# Load in the docs... again. We're going to make TF-IDF vectors with sklearn (scikit-learn) because it's faster.
def load_texts(filenames, dirpath):
    """filenames are the leaves, dirpath is the path to them with the /"""
loaded_text = {}
for filen in filenames:
with open(dirpath + filen) as handle:
loaded_text[filen] = handle.read()
return loaded_text
texts = load_texts(filelist, "")
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer().fit_transform([text.decode('ascii', 'ignore') for text in texts.values()])
vectors = tfidf.toarray()
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram
vectors
dist = pdist(vectors, metric='cosine') # look at the manpage and pick a different measure to try
linkage(dist)
# this is a base diagram, using defaults...
dendrogram(linkage(dist)) # this plotting function has a ton of things you can manipulate if you look at the docs.
texts[texts.keys()[14]]
def make_dend(data, labels=None, height=6):
from pylab import rcParams
dist = pdist(data, metric='cosine')
link = linkage(dist, method='complete')
rcParams['figure.figsize'] = 6, height
rcParams['axes.labelsize'] = 5
if not labels:
dend = dendrogram(link, orientation='right') #labels=names)
else:
dend = dendrogram(link, orientation='right', labels=[str(i) + label for i, label in enumerate(labels)])
return dist
dist = make_dend(vectors, height=15, labels=texts.keys())
texts.keys()[23]
texts[texts.keys()[23]]
texts[texts.keys()[4]]
# Code borrowed from: http://nbviewer.ipython.org/github/OxanaSachenkova/hclust-python/blob/master/hclust.ipynb
def make_heatmap_matrix(dist, method='complete'):
    """Pass in the distance matrix; method options are complete or single"""
# Compute and plot first dendrogram.
fig = plt.figure(figsize=(10,10))
# x ywidth height
ax1 = fig.add_axes([0.05,0.1,0.2,0.6])
Y = linkage(dist, method=method)
Z1 = dendrogram(Y, orientation='right') # adding/removing the axes
ax1.set_xticks([])
# Compute and plot second dendrogram.
ax2 = fig.add_axes([0.3,0.71,0.6,0.2])
Z2 = dendrogram(Y)
ax2.set_xticks([])
ax2.set_yticks([])
#Compute and plot the heatmap
axmatrix = fig.add_axes([0.3,0.1,0.6,0.6])
idx1 = Z1['leaves']
idx2 = Z2['leaves']
D = squareform(dist)
D = D[idx1,:]
D = D[:,idx2]
im = axmatrix.matshow(D, aspect='auto', origin='lower', cmap=plt.cm.YlGnBu)
axmatrix.set_xticks([])
axmatrix.set_yticks([])
# Plot colorbar.
axcolor = fig.add_axes([0.91,0.1,0.02,0.6])
plt.colorbar(im, cax=axcolor)
make_heatmap_matrix(dist, method='complete')
## clustering in NLTK:
import numpy
from nltk.cluster import KMeansClusterer, GAAClusterer, euclidean_distance
import nltk.corpus
import nltk.stem
stemmer_func = nltk.stem.snowball.SnowballStemmer("english").stem
stopwords = set(nltk.corpus.stopwords.words('english'))
cluster = KMeansClusterer(4, euclidean_distance)
cluster.cluster(vectors)
classified_examples = [cluster.classify(vec) for vec in vectors]
for i,val in enumerate(classified_examples):
print val, texts.keys()[i]
texts['../data/movie_reviews/positive/cv677_tok-11867.txt']
texts['../data/movie_reviews/positive/cv684_tok-10367.txt']
texts['../data/movie_reviews/positive/cv683_tok-12295.txt']
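# Hedged addition (not in the original notebook): how many reviews ended up
# in each of the 4 k-means clusters computed above.
from collections import Counter
print(Counter(classified_examples))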
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TF-IDF (Term Frequency, Inverse Document Frequency)
Step2: What if you change some of those docs, or add another one? Add another c in the last doc, e.g.
Step4: Some Utilities to Make a File We Can Save
Step6: Now we can look at these reviews as little wordclouds, using different measures to size our words. Let's work with word_clouds_tfidf.html and we can compare how our clouds look using regular word counts, term frequencies (which is count / length of the document), and tfidf across all the documents.
Step7: This gets us the input we need for clustering and making dendrograms.
Step8: Scipy's pdist is pairwise distance - see http
Step9: Let's do this with a nicer layout now...
Step10: Let's inspect a pair that are grouped closely in the cosine-similarity tree -- 23, 4
Step12: What do you notice about them both?
Step13: Relevant links
Step14: Let's look at the items in cluster 0
|
14,482 | <ASSISTANT_TASK:>
Python Code:
from myhdl import *
from myhdlpeek import Peeker
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sympy import *
init_printing()
import random
#https://github.com/jrjohansson/version_information
%load_ext version_information
%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, random
#helper functions to read in the .v and .vhd generated files into python
def VerilogTextReader(loc, printresult=True):
with open(f'{loc}.v', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***Verilog modual from {loc}.v***\n\n', VerilogText)
return VerilogText
def VHDLTextReader(loc, printresult=True):
with open(f'{loc}.vhd', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***VHDL modual from {loc}.vhd***\n\n', VerilogText)
return VerilogText
#use type casting on list genrator to store 0-9 in 8bit binary
TupleROM=tuple([bin(i, 8) for i in range(10)])
TupleROM
f'accesss location 6: {TupleROM[6]}, read contents of location 6 to dec:{int(TupleROM[6], 2)}'
#TupleROM[6]=bin(16,2)
@block
def ROMLoaded(addr, dout):
    """A ROM loaded with data already encoded in the structure,
    instead of using myHDL's enhanced parameter loading.
    I/O:
        addr (Signal>4): address; range is from 0-3
        dout (Signal>4): data at each address
    """
@always_comb
def readAction():
if addr==0:
dout.next=3
elif addr==1:
dout.next=2
elif addr==2:
dout.next=1
elif addr==3:
dout.next=0
return instances()
Peeker.clear()
addr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
DUT=ROMLoaded(addr, dout)
def ROMLoaded_TB():
    """Python Only Testbench for `ROMLoaded`"""
@instance
def stimules():
for i in range(3+1):
addr.next=i
yield delay(1)
raise StopSimulation()
return instances()
sim = Simulation(DUT, ROMLoaded_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
Peeker.to_dataframe()
DUT.convert()
VerilogTextReader('ROMLoaded');
@block
def ROMLoaded_TBV():
    """Verilog Only Testbench for `ROMLoaded`"""
clk = Signal(bool(0))
addr=Signal(intbv(0)[4:])
dout=Signal(intbv(0)[4:])
DUT=ROMLoaded(addr, dout)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(10)
@instance
def stimules():
for i in range(3+1):
addr.next=i
#yield delay(1)
yield clk.posedge
raise StopSimulation
@always(clk.posedge)
def print_data():
print(addr, dout)
return instances()
#create instaince of TB
TB=ROMLoaded_TBV()
#convert to verilog with reintilzed values
TB.convert(hdl="Verilog", initial_values=True)
#readback the testbench results
VerilogTextReader('ROMLoaded_TBV');
@block
def ROMParmLoad(addr, dout, CONTENT):
A ROM laoded with data from CONTENT input tuple
I/O:
addr(Signal>4): addres; range is from 0-3
dout(Signal>4): data at each address
Parm:
CONTENT: tuple size 4 with contende must be no larger then 4bit
@always_comb
def readAction():
dout.next=CONTENT[int(addr)]
return instances()
Peeker.clear()
addr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
CONTENT=tuple([i for i in range(4)][::-1])
DUT=ROMParmLoad(addr, dout, CONTENT)
def ROMParmLoad_TB():
Python Only Testbench for `ROMParmLoad`
@instance
def stimules():
for i in range(3+1):
addr.next=i
yield delay(1)
raise StopSimulation()
return instances()
sim = Simulation(DUT, ROMParmLoad_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
Peeker.to_dataframe()
DUT.convert()
VerilogTextReader('ROMParmLoad');
@block
def ROMParmLoad_TBV():
Verilog Only Testbench for `ROMParmLoad`
clk=Signal(bool(0))
addr=Signal(intbv(0)[4:])
dout=Signal(intbv(0)[4:])
CONTENT=tuple([i for i in range(4)][::-1])
DUT=ROMParmLoad(addr, dout, CONTENT)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
for i in range(3+1):
addr.next=i
yield clk.posedge
raise StopSimulation
@always(clk.posedge)
def print_data():
print(addr, dout)
return instances()
#create instaince of TB
TB=ROMParmLoad_TBV()
#convert to verilog with reintilzed values
TB.convert(hdl="Verilog", initial_values=True)
#readback the testbench results
VerilogTextReader('ROMParmLoad_TBV');
@block
def ROMParmLoadSync(addr, dout, clk, rst, CONTENT):
A ROM laoded with data from CONTENT input tuple
I/O:
addr(Signal>4): addres; range is from 0-3
dout(Signal>4): data at each address
clk (bool): clock feed
rst (bool): reset
Parm:
CONTENT: tuple size 4 with contende must be no larger then 4bit
@always(clk.posedge)
def readAction():
if rst:
dout.next=0
else:
dout.next=CONTENT[int(addr)]
return instances()
Peeker.clear()
addr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
clk=Signal(bool(0)); Peeker(clk, 'clk')
rst=Signal(bool(0)); Peeker(rst, 'rst')
CONTENT=tuple([i for i in range(4)][::-1])
DUT=ROMParmLoadSync(addr, dout, clk, rst, CONTENT)
def ROMParmLoadSync_TB():
Python Only Testbench for `ROMParmLoadSync`
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
for i in range(3+1):
yield clk.posedge
addr.next=i
for i in range(4):
yield clk.posedge
rst.next=1
addr.next=i
raise StopSimulation()
return instances()
sim = Simulation(DUT, ROMParmLoadSync_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
ROMData=Peeker.to_dataframe()
#keep only clock high
ROMData=ROMData[ROMData['clk']==1]
ROMData.drop(columns='clk', inplace=True)
ROMData.reset_index(drop=True, inplace=True)
ROMData
DUT.convert()
VerilogTextReader('ROMParmLoadSync');
@block
def ROMParmLoadSync_TBV():
Python Only Testbench for `ROMParmLoadSync`
addr=Signal(intbv(0)[4:])
dout=Signal(intbv(0)[4:])
clk=Signal(bool(0))
rst=Signal(bool(0))
CONTENT=tuple([i for i in range(4)][::-1])
DUT=ROMParmLoadSync(addr, dout, clk, rst, CONTENT)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
for i in range(3+1):
yield clk.posedge
addr.next=i
for i in range(4):
yield clk.posedge
rst.next=1
addr.next=i
raise StopSimulation
@always(clk.posedge)
def print_data():
print(addr, dout, rst)
return instances()
#create instaince of TB
TB=ROMParmLoadSync_TBV()
#convert to verilog with reintilzed values
TB.convert(hdl="Verilog", initial_values=True)
#readback the testbench results
VerilogTextReader('ROMParmLoadSync_TBV');
@block
def SeqROMEx(clk, rst, dout):
Seq Read Only Memory Ex
I/O:
clk (bool): clock
rst (bool): rst on counter
dout (signal >4): data out
Count=Signal(intbv(0)[3:])
@always(clk.posedge)
def counter():
if rst:
Count.next=0
elif Count==3:
Count.next=0
else:
Count.next=Count+1
@always(clk.posedge)
def Memory():
if Count==0:
dout.next=3
elif Count==1:
dout.next=2
elif Count==2:
dout.next=1
elif Count==3:
dout.next=0
return instances()
Peeker.clear()
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
clk=Signal(bool(0)); Peeker(clk, 'clk')
rst=Signal(bool(0)); Peeker(rst, 'rst')
DUT=SeqROMEx(clk, rst, dout)
def SeqROMEx_TB():
Python Only Testbench for `SeqROMEx`
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
for i in range(5+1):
yield clk.posedge
for i in range(4):
yield clk.posedge
rst.next=1
raise StopSimulation()
return instances()
sim = Simulation(DUT, SeqROMEx_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
SROMData=Peeker.to_dataframe()
#keep only clock high
SROMData=SROMData[SROMData['clk']==1]
SROMData.drop(columns='clk', inplace=True)
SROMData.reset_index(drop=True, inplace=True)
SROMData
DUT.convert()
VerilogTextReader('SeqROMEx');
@block
def SeqROMEx_TBV():
Verilog Only Testbench for `SeqROMEx`
dout=Signal(intbv(0)[4:])
clk=Signal(bool(0))
rst=Signal(bool(0))
DUT=SeqROMEx(clk, rst, dout)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
for i in range(5+1):
yield clk.posedge
for i in range(4):
yield clk.posedge
rst.next=1
raise StopSimulation()
@always(clk.posedge)
def print_data():
print(clk, rst, dout)
return instances()
#create instaince of TB
TB=SeqROMEx_TBV()
#convert to verilog with reintilzed values
TB.convert(hdl="Verilog", initial_values=True)
#readback the testbench results
VerilogTextReader('SeqROMEx_TBV');
@block
def RAMConcur(addr, din, writeE, dout, clk):
Random access read write memeory
I/O:
addr(signal>4): the memory cell arrdress
din (signal>4): data to write into memeory
writeE (bool): write enable contorl; false is read only
dout (signal>4): the data out
clk (bool): clock
Note:
this is only a 4 byte memory
#create the memeory list (1D array)
memory=[Signal(intbv(0)[4:]) for i in range(4)]
@always(clk.posedge)
def writeAction():
if writeE:
memory[addr].next=din
@always_comb
def readAction():
dout.next=memory[addr]
return instances()
Peeker.clear()
addr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')
din=Signal(intbv(0)[4:]); Peeker(din, 'din')
writeE=Signal(bool(0)); Peeker(writeE, 'writeE')
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
clk=Signal(bool(0)); Peeker(clk, 'clk')
CONTENT=tuple([i for i in range(4)][::-1])
DUT=RAMConcur(addr, din, writeE, dout, clk)
def RAMConcur_TB():
Python Only Testbench for `RAMConcur`
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
# do nothing
for i in range(1):
yield clk.posedge
#write memory
for i in range(4):
yield clk.posedge
writeE.next=True
addr.next=i
din.next=CONTENT[i]
#do nothing
for i in range(1):
yield clk.posedge
writeE.next=False
#read memory
for i in range(4):
yield clk.posedge
addr.next=i
# rewrite memory
for i in range(4):
yield clk.posedge
writeE.next=True
addr.next=i
din.next=CONTENT[-i]
#do nothing
for i in range(1):
yield clk.posedge
writeE.next=False
#read memory
for i in range(4):
yield clk.posedge
addr.next=i
raise StopSimulation()
return instances()
sim = Simulation(DUT, RAMConcur_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
RAMData=Peeker.to_dataframe()
RAMData=RAMData[RAMData['clk']==1]
RAMData.drop(columns='clk', inplace=True)
RAMData.reset_index(drop=True, inplace=True)
RAMData
RAMData[RAMData['writeE']==1]
RAMData[RAMData['writeE']==0]
DUT.convert()
VerilogTextReader('RAMConcur');
@block
def RAMConcur_TBV():
Verilog Only Testbench for `RAMConcur`
addr=Signal(intbv(0)[4:])
din=Signal(intbv(0)[4:])
writeE=Signal(bool(0))
dout=Signal(intbv(0)[4:])
clk=Signal(bool(0))
CONTENT=tuple([i for i in range(4)][::-1])
DUT=RAMConcur(addr, din, writeE, dout, clk)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
# do nothing
for i in range(1):
yield clk.posedge
#write memory
for i in range(4):
yield clk.posedge
writeE.next=True
addr.next=i
din.next=CONTENT[i]
#do nothing
for i in range(1):
yield clk.posedge
writeE.next=False
#read memory
for i in range(4):
yield clk.posedge
addr.next=i
# rewrite memory
for i in range(4):
yield clk.posedge
writeE.next=True
addr.next=i
din.next=CONTENT[-i]
#do nothing
for i in range(1):
yield clk.posedge
writeE.next=False
#read memory
for i in range(4):
yield clk.posedge
addr.next=i
raise StopSimulation()
@always(clk.posedge)
def print_data():
print(addr, din, writeE, dout, clk)
return instances()
#create instaince of TB
TB=RAMConcur_TBV()
#convert to verilog with reintilzed values
TB.convert(hdl="Verilog", initial_values=True)
#readback the testbench results
VerilogTextReader('RAMConcur_TBV');
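# Hedged extra (assumes myhdl's convert() accepts hdl='VHDL', the same keyword
# already used for Verilog above): emit the VHDL for the RAM as well and read
# it back with the VHDLTextReader helper defined at the top of this notebook.
DUT.convert(hdl='VHDL')
VHDLTextReader('RAMConcur');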
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: RTL and Implementation Schematics are from Xilinx Vivado 2016.1
Step2: And if we try writing to the tuple we will get an error
Step5: Random and Sequential Access Memory
Step7: ROMLoaded RTL
Step10: With myHDL we can dynamically load the contents that will be hard coded in the conversion to Verilog/VHDL, which is an amazing benefit for development, as is seen here
Step12: ROMParmLoad RTL
Step15: we can also create a ROM that, instead of being asynchronous, is synchronous
Step19: ROMParmLoadSync RTL
Step21: SeqROMEx RTL
Step24: read and write memory
Step26: RAMConcur RTL
|
14,483 | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
from Schelling import SchellingModel
model = SchellingModel(20, 20, 0.85, 0.2, 3)
while model.running and model.schedule.steps < 100:
model.step()
print(model.schedule.steps) # Show how many steps have actually run
model_out = model.datacollector.get_model_vars_dataframe()
model_out.head()
model_out.unhappy.plot()
x_positions = model.datacollector.get_agent_vars_dataframe()
from mesa.batchrunner import BatchRunner
def get_segregation(model):
'''
Find the % of agents that only have neighbors of their same type.
'''
segregated_agents = 0
for agent in model.schedule.agents:
segregated = True
for neighbor in model.grid.neighbor_iter(agent.pos):
if neighbor.type != agent.type:
segregated = False
break
if segregated:
segregated_agents += 1
return segregated_agents / model.schedule.get_agent_count()
parameters = {"height": 10, "width": 10, "density": 0.8, "minority_pc": 0.2,
"homophily": range(1,9)}
model_reporters = {"Segregated_Agents": get_segregation}
param_sweep = BatchRunner(SchellingModel, parameters, iterations=10,
max_steps=200,
model_reporters=model_reporters)
param_sweep.run_all()
df = param_sweep.get_model_vars_dataframe()
plt.scatter(df.homophily, df.Segregated_Agents)
plt.grid(True)
import numpy as np
import holoviews as hv
%load_ext holoviews.ipython
def get_color(agent):
if agent is None:
return 0
elif agent.type == 0:
return 1
elif agent.type == 1:
return 2
def get_grid(model):
width = model.grid.width
height = model.grid.height
data = np.zeros((width, height))
for x in range(width):
for y in range(height):
data[x][y] = get_color(model.grid[y][x])
return data
hmap = hv.HoloMap()
model = SchellingModel(20, 20, 0.80, 0.2, 3)
for i in range(100):
data = get_grid(model)
hmap[i] = hv.Image(data)
model.step()
%%opts Image plot[figure_inches=(7,7)]
hmap
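# Hedged addition (not in the original notebook): summarize the parameter
# sweep by averaging segregation per homophily level (df from the batch run).
df.groupby("homophily").Segregated_Agents.mean().plot(marker="o")
plt.ylabel("Mean fraction of fully segregated agents")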
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we instantiate a model instance
Step2: Effect of Homophily on segregation
Step3: Now, we set up the batch run, with a dictionary of fixed and changing parameters. Let's hold everything fixed except for Homophily.
Step4: Other Packages
|
14,484 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import nltk
import string
import matplotlib.pyplot as plt
#read in our data
df = pd.read_csv("../Data/childrens_lit.csv.bz2", sep = '\t', encoding = 'utf-8', compression = 'bz2', index_col=0)
df = df.dropna(subset=["text"])
df
import numpy as np
np.random.seed(1)
df = df.sample(5)
df
# Your code here
df['text_lc'] = df['text'].str.lower()
df['text_split'] = df['text_lc'].apply(nltk.word_tokenize)
df['text_split_clean'] = df['text_split'].apply(lambda x : [word for word in x if word not in string.punctuation])
df
df['text_length'] = df['text_split_clean'].apply(len)
df
# Your code here
pos_sent = open("../Data/positive_words.txt", encoding='utf-8').read()
neg_sent = open("../Data/negative_words.txt", encoding='utf-8').read()
positive_words = pos_sent.split('\n')
negative_words = neg_sent.split('\n')
df['num_pos_words'] = df['text_split_clean'].apply(lambda x: len([word for word in x if word in positive_words]))
df['num_neg_words'] = df['text_split_clean'].apply(lambda x: len([word for word in x if word in negative_words]))
df
df['prop_pos_words'] = df['num_pos_words']/df['text_length']
df['prop_neg_words'] = df['num_neg_words']/df['text_length']
df
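# Hedged addition (not in the original notebook): a simple net-sentiment
# score per sampled book, from the proportions computed above.
df['net_sentiment'] = df['prop_pos_words'] - df['prop_neg_words']
df[['prop_pos_words', 'prop_neg_words', 'net_sentiment']].sort_values('net_sentiment', ascending=False)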
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Since the number of children's literature texts is too large to analyze, we'll just randomly select 5 books to do a sentiment analysis using the dictionary method.
Step2: Question 2
|
14,485 | <ASSISTANT_TASK:>
Python Code:
class AlarmSensor:
def run(self):
print ("Alarm Ring...")
class WaterSprinker:
def run(self):
print ("Spray Water...")
class EmergencyDialer:
def run(self):
print ("Dial 119...")
alarm_sensor = AlarmSensor()
water_sprinker = WaterSprinker()
emergency_dialer = EmergencyDialer()
alarm_sensor.run()
water_sprinker.run()
emergency_dialer.run()
class EmergencyFacade(object):
def __init__(self):
self.alarm_sensor=AlarmSensor()
self.water_sprinker=WaterSprinker()
self.emergency_dialer=EmergencyDialer()
def runAll(self):
self.alarm_sensor.run()
self.water_sprinker.run()
self.emergency_dialer.run()
emergency_facade=EmergencyFacade()
emergency_facade.runAll()
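# Hedged illustration (not in the original text): the facade keeps client code
# unchanged when the subsystem grows -- only the facade needs to know about a
# new component. `EmergencyLight` is a made-up example class.
class EmergencyLight:
    def run(self):
        print("Flash warning lights...")

class ExtendedEmergencyFacade(EmergencyFacade):
    def __init__(self):
        EmergencyFacade.__init__(self)
        self.emergency_light = EmergencyLight()
    def runAll(self):
        EmergencyFacade.runAll(self)
        self.emergency_light.run()

ExtendedEmergencyFacade().runAll()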
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the business logic all three components need to be started. For example, if a smoke sensor detects smoke, the business environment needs to do the following:
Step2: But what if the three components need to be started in several business scenarios? Ctrl+C plus Ctrl+V? That works, but reducing duplicated code is one of a programmer's basic virtues and should come to mind immediately. So we wrap the calls up; in design-pattern terms the new wrapping object is called a facade. The facade is built as follows
|
14,486 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%precision 2
vocabulary = ['see', 'spot', 'run']
num_terms = len(vocabulary)
num_topics = 2 # K
num_documents = 5 # M
mean_document_length = 5 # xi
term_dirichlet_parameter = 1 # beta
topic_dirichlet_parameter = 1 # alpha
from scipy.stats import dirichlet, poisson
from numpy import round
from collections import defaultdict
from random import choice as stl_choice
term_dirichlet_vector = num_terms * [term_dirichlet_parameter]
term_distributions = dirichlet(term_dirichlet_vector, 2).rvs(size=num_topics)
print term_distributions
base_distribution = lambda: stl_choice(term_distributions)
# A sample from base_distribution is a distribution over terms
# Each of our two topics has equal probability
from collections import Counter
for topic, count in Counter([tuple(base_distribution()) for _ in range(10000)]).most_common():
print "count:", count, "topic:", [round(prob, 2) for prob in topic]
from scipy.stats import beta
from numpy.random import choice
class DirichletProcessSample():
def __init__(self, base_measure, alpha):
self.base_measure = base_measure
self.alpha = alpha
self.cache = []
self.weights = []
self.total_stick_used = 0.
def __call__(self):
remaining = 1.0 - self.total_stick_used
i = DirichletProcessSample.roll_die(self.weights + [remaining])
if i is not None and i < len(self.weights) :
return self.cache[i]
else:
stick_piece = beta(1, self.alpha).rvs() * remaining
self.total_stick_used += stick_piece
self.weights.append(stick_piece)
new_value = self.base_measure()
self.cache.append(new_value)
return new_value
@staticmethod
def roll_die(weights):
if weights:
return choice(range(len(weights)), p=weights)
else:
return None
topic_distribution = DirichletProcessSample(base_measure=base_distribution,
alpha=topic_dirichlet_parameter)
for topic, count in Counter([tuple(topic_distribution()) for _ in range(10000)]).most_common():
print "count:", count, "topic:", [round(prob, 2) for prob in topic]
topic_index = defaultdict(list)
documents = defaultdict(list)
for doc in range(num_documents):
topic_distribution_rvs = DirichletProcessSample(base_measure=base_distribution,
alpha=topic_dirichlet_parameter)
document_length = poisson(mean_document_length).rvs()
for word in range(document_length):
topic_distribution = topic_distribution_rvs()
topic_index[doc].append(tuple(topic_distribution))
documents[doc].append(choice(vocabulary, p=topic_distribution))
for doc in documents.values():
print doc
for i, doc in enumerate(Counter(term_dist).most_common() for term_dist in topic_index.values()):
print "Doc:", i
for topic, count in doc:
print 5*" ", "count:", count, "topic:", [round(prob, 2) for prob in topic]
term_dirichlet_vector = num_terms * [term_dirichlet_parameter]
base_distribution = lambda: dirichlet(term_dirichlet_vector).rvs(size=1)[0]
base_dp_parameter = 10
base_dp = DirichletProcessSample(base_distribution, alpha=base_dp_parameter)
nested_dp_parameter = 10
topic_index = defaultdict(list)
documents = defaultdict(list)
for doc in range(num_documents):
topic_distribution_rvs = DirichletProcessSample(base_measure=base_dp,
alpha=nested_dp_parameter)
document_length = poisson(mean_document_length).rvs()
for word in range(document_length):
topic_distribution = topic_distribution_rvs()
topic_index[doc].append(tuple(topic_distribution))
documents[doc].append(choice(vocabulary, p=topic_distribution))
for doc in documents.values():
print doc
for i, doc in enumerate(Counter(term_dist).most_common() for term_dist in topic_index.values()):
print "Doc:", i
for topic, count in doc:
print 5*" ", "count:", count, "topic:", [round(prob, 2) for prob in topic]
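# Hedged addition (not in the original notebook): count how many distinct
# topics the nested Dirichlet process actually used across all documents.
distinct_topics = set(t for doc in topic_index.values() for t in doc)
print "Distinct topics used:", len(distinct_topics)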
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Latent Dirichlet Allocation is a generative model for topic modeling. Given a collection of documents, an LDA inference algorithm attempts to determine (in an unsupervised manner) the topics discussed in the documents. It makes the assumption that each document is generated by a probability model, and, when doing inference, we try to find the parameters that best fit the model (as well as unseen/latent variables generated by the model). If you are unfamiliar with LDA, Edwin Chen has a friendly introduction you should read.
Step2: The term distribution vector $\underline\Phi$ is a collection of samples from a Dirichlet distribution. This describes how our 3 terms are distributed across each of the two topics.
Step3: Each document corresponds to a categorical distribution across this distribution of topics (in this case, a 2-dimensional categorical distribution). This categorical distribution is a distribution of distributions; we could look at it as a Dirichlet process!
Step4: Recall that a sample from a Dirichlet process is a distribution that approximates (but varies from) the base distribution. In this case, a sample from the Dirichlet process will be a distribution over topics that varies from the uniform distribution we provided as a base. If we use the stick-breaking metaphor, we are effectively breaking a stick one time and the size of each portion corresponds to the proportion of a topic in the document.
Step5: For each document, we will draw a topic distribution from the Dirichlet process
Step6: A sample from this topic distribution is a distribution over terms. However, unlike our base distribution which returns each term distribution with equal probability, the topics will be unevenly weighted.
Step7: To generate each word in the document, we draw a sample topic from the topic distribution, and then a term from the term distribution (topic).
Step8: Here are the documents we generated
Step9: We can see how each topic (term-distribution) is distributed across the documents
Step10: To recap
Step11: This sample from the base Dirichlet process is our infinite sided die. It is a probability distribution over a countably infinite number of topics.
Step12: Here are the documents we generated
Step13: And here are the latent topics used
|
14,487 | <ASSISTANT_TASK:>
Python Code:
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.ethics.ex4 import *
import pandas as pd
from sklearn.model_selection import train_test_split
# Load the data, separate features from target
data = pd.read_csv("../input/synthetic-credit-card-approval/synthetic_credit_card_approval.csv")
X = data.drop(["Target"], axis=1)
y = data["Target"]
# Break into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state=0)
# Preview the data
print("Data successfully loaded!\n")
X_train.head()
from sklearn import tree
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
import matplotlib.pyplot as plt
# Train a model and make predictions
model_baseline = tree.DecisionTreeClassifier(random_state=0, max_depth=3)
model_baseline.fit(X_train, y_train)
preds_baseline = model_baseline.predict(X_test)
# Function to plot confusion matrix
def plot_confusion_matrix(estimator, X, y_true, y_pred, display_labels=["Deny", "Approve"],
include_values=True, xticks_rotation='horizontal', values_format='',
normalize=None, cmap=plt.cm.Blues):
cm = confusion_matrix(y_true, y_pred, normalize=normalize)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=display_labels)
return cm, disp.plot(include_values=include_values, cmap=cmap, xticks_rotation=xticks_rotation,
values_format=values_format)
# Function to evaluate the fairness of the model
def get_stats(X, y, model, group_one, preds):
y_zero, preds_zero, X_zero = y[group_one==False], preds[group_one==False], X[group_one==False]
y_one, preds_one, X_one = y[group_one], preds[group_one], X[group_one]
print("Total approvals:", preds.sum())
print("Group A:", preds_zero.sum(), "({}% of approvals)".format(round(preds_zero.sum()/sum(preds)*100, 2)))
print("Group B:", preds_one.sum(), "({}% of approvals)".format(round(preds_one.sum()/sum(preds)*100, 2)))
print("\nOverall accuracy: {}%".format(round((preds==y).sum()/len(y)*100, 2)))
print("Group A: {}%".format(round((preds_zero==y_zero).sum()/len(y_zero)*100, 2)))
print("Group B: {}%".format(round((preds_one==y_one).sum()/len(y_one)*100, 2)))
cm_zero, disp_zero = plot_confusion_matrix(model, X_zero, y_zero, preds_zero)
disp_zero.ax_.set_title("Group A")
cm_one, disp_one = plot_confusion_matrix(model, X_one, y_one, preds_one)
disp_one.ax_.set_title("Group B")
print("\nSensitivity / True positive rate:")
print("Group A: {}%".format(round(cm_zero[1,1] / cm_zero[1].sum()*100, 2)))
print("Group B: {}%".format(round(cm_one[1,1] / cm_one[1].sum()*100, 2)))
# Evaluate the model
get_stats(X_test, y_test, model_baseline, X_test["Group"]==1, preds_baseline)
# Check your answer (Run this code cell to get credit!)
q_1.check()
def visualize_model(model, feature_names, class_names=["Deny", "Approve"], impurity=False):
plot_list = tree.plot_tree(model, feature_names=feature_names, class_names=class_names, impurity=impurity)
[process_plot_item(item) for item in plot_list]
def process_plot_item(item):
split_string = item.get_text().split("\n")
if split_string[0].startswith("samples"):
item.set_text(split_string[-1])
else:
item.set_text(split_string[0])
plt.figure(figsize=(20, 6))
plot_list = visualize_model(model_baseline, feature_names=X_train.columns)
# Check your answer (Run this code cell to get credit!)
q_2.check()
# Create new dataset with gender removed
X_train_unaware = X_train.drop(["Group"],axis=1)
X_test_unaware = X_test.drop(["Group"],axis=1)
# Train new model on new dataset
model_unaware = tree.DecisionTreeClassifier(random_state=0, max_depth=3)
model_unaware.fit(X_train_unaware, y_train)
# Evaluate the model
preds_unaware = model_unaware.predict(X_test_unaware)
get_stats(X_test_unaware, y_test, model_unaware, X_test["Group"]==1, preds_unaware)
# Check your answer (Run this code cell to get credit!)
q_3.check()
# Change the value of zero_threshold to hit the objective
zero_threshold = 0.11
one_threshold = 0.99
# Evaluate the model
test_probs = model_unaware.predict_proba(X_test_unaware)[:,1]
preds_approval = (((test_probs>zero_threshold)*1)*[X_test["Group"]==0] + ((test_probs>one_threshold)*1)*[X_test["Group"]==1])[0]
get_stats(X_test, y_test, model_unaware, X_test["Group"]==1, preds_approval)
# Check your answer (Run this code cell to get credit!)
q_4.check()
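# Hedged addition (not part of the original exercise): a quick demographic-
# parity check -- approval rates per group for the group-threshold model.
group_b = (X_test["Group"] == 1).values
print("Group A approval rate: {:.3f}".format(preds_approval[~group_b].mean()))
print("Group B approval rate: {:.3f}".format(preds_approval[group_b].mean()))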
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The dataset contains, for each applicant
Step2: The confusion matrices above show how the model performs on some test data. We also print additional information (calculated from the confusion matrices) to assess fairness of the model. For instance,
Step3: Run the next code cell without changes to visualize the model.
Step4: The flowchart shows how the model makes decisions
Step5: Next, you decide to remove group membership from the training data and train a new model. Do you think this will make the model treat the groups more equally?
Step6: 3) Varieties of fairness, part 2
Step7: You decide to train a third potential model, this time with the goal of having each group have even representation in the group of approved applicants. (This is an implementation of group thresholds, which you can optionally read more about here.)
Step8: 4) Varieties of fairness, part 3
|
14,488 | <ASSISTANT_TASK:>
Python Code:
import wasmfun
instructions = [('f64.const', 42),
('call', 'print_ln'),
('call', 'make_background_blue')]
m = wasmfun.Module(
wasmfun.Function('$main', params=[], returns=[], locals=[], instructions=instructions),
wasmfun.ImportedFuncion('print_ln', ['f64'], [], 'js', 'print_ln'),
wasmfun.ImportedFuncion('make_background_blue', [], [], 'js', 'make_background_blue'),
)
m.show()
print(m.to_bytes())
print(len(m.to_bytes()))
JS = """
function print_ln(x) {
var el = document.getElementById('wasm_output');
el.innerHTML += String(x).replace('\\n', '<br>') + '<br>';
}
function make_background_blue () {
document.body.style = 'background:#48f;'
}
function compile_my_wasm(wasm_data) {
var m = new WebAssembly.Module(wasm_data);
var i = new WebAssembly.Instance(m, {js: {print_ln, make_background_blue}});
}
"""
from IPython.display import display, HTML, Javascript
import uuid
def run_wasm(m):
id = uuid.uuid1().hex
js = JS.replace('wasm_output', 'wasm_output_' + id)
js += "compile_my_wasm(new Uint8Array(%s));" % str(list(m.to_bytes()))
display(HTML("<div style='border: 2px solid blue;' id='wasm_output_%s'>WASM output goes here<br></div>" % id))
display(Javascript(js))
run_wasm(m)
instructions = [
('loop', 'emptyblock'),
# write iter
('get_local', 0), ('call', 'print_ln'),
# Increase iter
('f64.const', 1), ('get_local', 0), ('f64.add'),
('tee_local', 0), ('f64.const', 10),
('f64.lt'), ('br_if', 0),
('end'),
]
m = wasmfun.Module(
wasmfun.Function('$main', params=[], returns=[], locals=['f64'], instructions=instructions),
wasmfun.ImportedFuncion('print_ln', ['f64'], [], 'js', 'print_ln'),
)
m.to_bytes()
wasmfun.run_wasm_in_notebook(m)
wasmfun.run_wasm_in_node(m)
def bf2instructions(commands):
    """Compile brainfuck commands to WASM instructions (as tuples)."""
instructions = []
while commands:
c = commands.pop(0)
if c == '>':
instructions += [('get_local', 0), ('i32.const', 1), ('i32.add'), ('set_local', 0)]
elif c == '<':
instructions += [('get_local', 0), ('i32.const', 1), ('i32.sub'), ('set_local', 0)]
elif c == '+':
instructions += [('get_local', 0), ('get_local', 0), # once for the read, once for the write
('i32.load8_u', 0, 0),
('i32.const', 1), ('i32.add'), ('i32.store8', 0, 0)]
elif c == '-':
instructions += [('get_local', 0), ('get_local', 0), # once for the read, once for the write
('i32.load8_u', 0, 0),
('i32.const', 1), ('i32.sub'), ('i32.store8', 0, 0)]
elif c == '.':
instructions += [('get_local', 0), ('i32.load8_u', 0, 0), ('call', 0)]
elif c == ',':
# We don't support input, just set to zero
instructions += [('get_local', 0), ('i32.const', 0), ('i32.store8', 0, 0)]
elif c == '[':
instructions += [('block', 'emptyblock'),
# if current data point == 0 goto end of block
('get_local', 0), ('i32.load8_u', 0, 0), ('i32.const', 0), ('i32.eq'), ('br_if', 0),
('loop', 'emptyblock'),
] + bf2instructions(commands ) + [
# if current data point > 0 goto start of block
('get_local', 0), ('i32.load8_u', 0, 0), ('i32.const', 0), ('i32.ne'), ('br_if', 0),
('end'),
('end')]
elif c == ']':
break
else:
pass # ignore
return instructions
BF_HELLO = """
[This program prints "Hello World!" and a newline to the screen]
++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.
>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.
"""
instructions = bf2instructions(list(BF_HELLO))
m = wasmfun.Module(
wasmfun.ImportedFuncion('print_charcode', ['i32'], [], 'js', 'print_charcode'),
wasmfun.Function('$main', [], [], ['i32'], instructions),
wasmfun.MemorySection((1, 1)),
wasmfun.DataSection(),
)
wasmfun.run_wasm_in_notebook(m)
BF_FIBONACCI = """
[Generate the fibonacci number sequence, (for numbers under 100). Taken from
http://esoteric.sange.fi/brainfuck/bf-source/prog/fibonacci.txt
]
+++++++++++>+>>>>++++++++++++++++++++++++++++++++++++++++++++>
++++++++++++++++++++++++++++++++<<<<<<[>[>>>>>>+>+<<<<<<<-]>>>>>>>
[<<<<<<<+>>>>>>>-]<[>++++++++++[-<-[>>+>+<<<-]>>>[<<<+>>>-]+<[>[-]
<[-]]>[<<[>>>+<<<-]>>[-]]<<]>>>[>>+>+<<<-]>>>[<<<+>>>-]+<[>[-]<[-]]>
[<<+>>[-]]<<<<<<<]>>>>>[++++++++++++++++++++++++++++++++++++++++++++++++.
[-]]++++++++++<[->-<]>++++++++++++++++++++++++++++++++++++++++++++++++.[-]
<<<<<<<<<<<<[>>>+>+<<<<-]>>>>[<<<<+>>>>-]<-[>>.>.<<<[-]]<<[>>+>+<<<-]>>>
[<<<+>>>-]<<[<+>-]>[<+>-]<<<-]
"""
m = wasmfun.Module(
wasmfun.ImportedFuncion('print_charcode', ['i32'], [], 'js', 'print_charcode'),
wasmfun.Function('$main', [], [], ['i32'], bf2instructions(list(BF_FIBONACCI))),
wasmfun.MemorySection((1, 1)),
wasmfun.DataSection(),
)
wasmfun.run_wasm_in_notebook(m)
FIB_CODE = """
a = 0
b = 1
for i in range(10):
print(a)
c = b
b = a + b
a = c
"""
exec(FIB_CODE)
import ast
tree = ast.parse(FIB_CODE)
tree.body
def _compile_expr(node, ctx, push_stack):
if isinstance(node, ast.Expr):
_compile_expr(node.value, ctx, push_stack)
elif isinstance(node, ast.Assign):
if not (len(node.targets) == 1 and isinstance(node.targets[0], ast.Name)):
raise SyntaxError('Unsupported assignment at line', node.lineno)
idx = ctx.name_idx(node.targets[0].id)
_compile_expr(node.value, ctx, True)
ctx.instructions.append(('set_local', idx))
assert not push_stack
elif isinstance(node, ast.Name):
assert push_stack
ctx.instructions.append(('get_local', ctx.name_idx(node.id)))
elif isinstance(node, ast.Num):
ctx.instructions.append(('f64.const', node.n))
elif isinstance(node, ast.UnaryOp):
_compile_expr(node.operand, ctx, True)
if isinstance(node.op, ast.USub):
ctx.instructions.append(('f64.neg'))
else:
raise SyntaxError('Unsupported unary operator: %s' % node.op.__class__.__name__)
elif isinstance(node, ast.BinOp):
_compile_expr(node.left, ctx, True)
_compile_expr(node.right, ctx, True)
if isinstance(node.op, ast.Add):
ctx.instructions.append(('f64.add'))
elif isinstance(node.op, ast.Sub):
ctx.instructions.append(('f64.sub'))
elif isinstance(node.op, ast.Mult):
ctx.instructions.append(('f64.mul'))
elif isinstance(node.op, ast.Div):
ctx.instructions.append(('f64.div'))
elif isinstance(node.op, ast.Mod):
# todo: this is fragile. E.g. for negative numbers
_compile_expr(node.left, ctx, True) # push again
_compile_expr(node.right, ctx, True)
ctx.instructions.append(('f64.div'))
ctx.instructions.append(('f64.floor'))
ctx.instructions.append(('f64.mul')) # consumes last right
ctx.instructions.append(('f64.sub')) # consumes last left
elif isinstance(node.op, ast.FloorDiv):
ctx.instructions.append(('f64.div'))
ctx.instructions.append(('f64.floor')) # not trunc
else:
raise SyntaxError('Unsuppored binary op: %s' % node.op.__class__.__name__)
if not push_stack:
ctx.instructions.append(('drop'))
elif isinstance(node, ast.Compare):
if len(node.ops) != 1:
raise SyntaxError('Only supports binary comparators (one operand).')
_compile_expr(node.left, ctx, True)
_compile_expr(node.comparators[0], ctx, True)
op = node.ops[0]
if isinstance(op, ast.Eq):
ctx.instructions.append(('f64.eq'))
elif isinstance(op, ast.NotEq):
ctx.instructions.append(('f64.ne'))
elif isinstance(op, ast.Gt):
ctx.instructions.append(('f64.qt'))
elif isinstance(op, ast.Lt):
ctx.instructions.append(('f64.lt'))
elif isinstance(op, ast.GtE):
ctx.instructions.append(('f64.qe'))
elif isinstance(op, ast.LtE):
ctx.instructions.append(('f64.le'))
else:
raise SyntaxError('Unsupported operand: %s' % op)
elif isinstance(node, ast.If):
_compile_expr(node.test, ctx, True)
assert not push_stack # Python is not an expression lang
ctx.push_block('if')
ctx.instructions.append(('if', 'emptyblock'))
for e in node.body:
_compile_expr(e, ctx, False)
if node.orelse:
ctx.instructions.append(('else', ))
for e in node.orelse:
_compile_expr(e, ctx, False)
ctx.instructions.append(('end', ))
ctx.pop_block('if')
elif isinstance(node, ast.For):
# Check whether this is the kind of simple for-loop that we support
if not (isinstance(node.iter, ast.Call) and node.iter.func.id == 'range'):
raise SyntaxError('For-loops are limited to range().')
if node.orelse:
raise SyntaxError('For-loops do not support orelse.')
if not isinstance(node.target, ast.Name):
raise SyntaxError('For-loops support just one iterable.')
# Prepare start, stop, step
start_stub = ctx.new_stub()
end_stub = ctx.new_stub()
step_stub = ctx.new_stub()
if len(node.iter.args) == 1:
ctx.instructions.append(('f64.const', 0))
_compile_expr(node.iter.args[0], ctx, True)
ctx.instructions.append(('f64.const', 1))
elif len(node.iter.args) == 2:
_compile_expr(node.iter.args[0], ctx, True)
_compile_expr(node.iter.args[1], ctx, True)
ctx.instructions.append(('f64.const', 1))
elif len(node.iter.args) == 3:
_compile_expr(node.iter.args[0], ctx, True)
_compile_expr(node.iter.args[1], ctx, True)
_compile_expr(node.iter.args[2], ctx, True)
else:
raise SyntaxError('range() should have 1, 2, or 3 args')
ctx.instructions.append(('set_local', step_stub)) # reversed order, pop from stack
ctx.instructions.append(('set_local', end_stub))
ctx.instructions.append(('set_local', start_stub))
# Body
target = ctx.name_idx(node.target.id)
ctx.push_block('for')
for i in [('get_local', start_stub), ('set_local', target), # Init target
('block', 'emptyblock'), ('loop', 'emptyblock'), # enter loop
('get_local', target), ('get_local', end_stub), ('f64.ge'), ('br_if', 1), # break (level 2)
]:
ctx.instructions.append(i)
for subnode in node.body:
_compile_expr(subnode, ctx, False)
for i in [('get_local', target), ('get_local', step_stub), ('f64.add'), ('set_local', target), # next iter
('br', 0), # loop
('end'), ('end'), # end of loop and outer block
]:
ctx.instructions.append(i)
ctx.pop_block('for')
elif isinstance(node, ast.While):
# Check whether this is the kind of simple for-loop that we support
if node.orelse:
raise SyntaxError('While-loops do not support orelse.')
# Body
ctx.push_block('while')
for i in [('block', 'emptyblock'), ('loop', 'emptyblock'), # enter loop (outer block for break)
]:
ctx.instructions.append(i)
for subnode in node.body:
_compile_expr(subnode, ctx, False)
_compile_expr(node.test, ctx, True)
for i in [('br_if', 0), # loop
('end'), ('end'), # end of loop
]:
ctx.instructions.append(i)
ctx.pop_block('while')
elif isinstance(node, ast.Continue):
ctx.instructions.append(('br', ctx.get_block_level()))
elif isinstance(node, ast.Break):
ctx.instructions.append(('br', ctx.get_block_level() + 1))
elif isinstance(node, ast.Call):
if not isinstance(node.func, ast.Name):
raise SyntaxError('Only support simple function names')
if node.keywords:
raise SyntaxError('No support for keyword args')
name = node.func.id
if name == 'print':
assert len(node.args) == 1, 'print() accepts exactly one argument'
_compile_expr(node.args[0], ctx, True)
ctx.instructions.append(('call', 0))
elif name == 'perf_counter':
assert len(node.args) == 0, 'perf_counter() accepts exactly zero arguments'
ctx.instructions.append(('call', 1))
else:
raise SyntaxError('Not a supported function: %s' % name)
else:
raise SyntaxError('Unsupported syntax: %s' % node.__class__.__name__)
class Context:
def __init__(self):
self.instructions = []
self.names = {}
self._name_counter = 0
self._block_stack = []
def name_idx(self, name):
if name not in self.names:
self.names[name] = self._name_counter
self._name_counter += 1
return self.names[name]
def new_stub(self):
name = 'stub' + str(self._name_counter)
return self.name_idx(name)
def push_block(self, kind):
assert kind in ('if', 'for', 'while')
self._block_stack.append(kind)
def pop_block(self, kind):
assert self._block_stack.pop(-1) == kind
def get_block_level(self):
for i, kind in enumerate(reversed(self._block_stack)):
if kind in ('for', 'while'):
return i
def py2wasm(python_code):
# Convert to AST
tree = ast.parse(python_code)
# Compile to instructions
ctx = Context()
for node in tree.body:
_compile_expr(node, ctx, False)
# Produce wasm module
return wasmfun.Module(
wasmfun.Function('$main', [], [], ['f64' for i in ctx.names], ctx.instructions),
wasmfun.ImportedFuncion('print_ln', ['f64'], [], 'js', 'print_ln'),
wasmfun.ImportedFuncion('perf_counter', [], ['f64'], 'js', 'perf_counter'),
)
m = py2wasm("""
a = 0
b = 1
for i in range(10):
print(a)
c = b
b = a + b
a = c
""")
len(m.to_bytes())
wasmfun.run_wasm_in_notebook(m)
PRIMES_CODE = """
max = 4000
n = 0
i = -1
t0 = perf_counter()
while n < max:
i = i + 1
if i <= 1:
continue # nope
elif i == 2:
n = n + 1
else:
gotit = 1
for j in range(2, i//2 + 1):
if i % j == 0:
gotit = 0
break
if gotit == 1:
n = n + 1
print(perf_counter() - t0)
print(i)
"""
from time import perf_counter
exec(PRIMES_CODE)
wasmfun.run_wasm_in_notebook(py2wasm(PRIMES_CODE))
wasmfun.run_wasm_in_node(py2wasm(PRIMES_CODE))
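# Illustrative aside (not part of the original notebook): the "code to AST" step that
# py2wasm relies on can be inspected directly with the standard-library ast module.
import ast
example_tree = ast.parse("a = 1 + 2")
print(ast.dump(example_tree))  # shows the Module/Assign/BinOp nodes that _compile_expr walks over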
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What is Web Assembly?
Step2: Instructions are packed into functions ...
Step3: Web Assembly modules have a compact binary format
Step5: Web Assembly is safe
Step6: Let's run our module in the browser!
Step7: Again, now with a for-loop
Step8: Web Assembly will run anywhere!
Step12: Before moving on ...
Step14: Python
Step15: From code to AST
Step17: From AST to WASM
Step19: Prime numbers example
Step20: Find primes in Python
Step21: Find primes in the browser
Step22: Find primes on desktop
|
14,489 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
from scipy.spatial.distance import cosine
# Data was already downloaded.
data = pd.read_csv('data/lastfm/lastfm-matrix-germany.csv')
# To check out the data set you can use data.head():
data.head(6).ix[:,2:10]
#In item based collaborative filtering we do not care about the user column.
data_germany = data.drop('user', 1)
#Create a placeholder dataframe listing item vs. item
data_ibs = pd.DataFrame(index=data_germany.columns, columns=data_germany.columns)
# Let's fill in those empty spaces with cosine similarities
# Loop through the columns
for i in range(0, len(data_ibs.columns)) :
# Loop through the columns for each column
for j in range(0,len(data_ibs.columns)) :
# Fill in placeholder with cosine similarities
data_ibs.ix[i,j] = 1 - cosine(data_germany.ix[:,i], data_germany.ix[:,j])
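# Quick illustrative check (not in the original tutorial): the similarity used above is
# 1 - cosine distance; for two small example purchase vectors it comes out around 0.816.
print(1 - cosine([1, 0, 1, 1], [1, 0, 0, 1]))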
# Create a placeholder for the closest neighbours to each item
data_neighbours = pd.DataFrame(index=data_ibs.columns,columns=range(1,11))
# Loop through our similarity dataframe and fill in neighbouring item names
for i in range(0,len(data_ibs.columns)):
data_neighbours.ix[i,:10] = data_ibs.ix[0:,i].order(ascending=False)[:10].index
# --- End Item Based Recommendations --- #
# Show the results!
data_neighbours.ix[:10, :5]
# Helper function to get similarity scores
def getScore(history, similarities):
return sum(history * similarities) / sum(similarities)
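# Illustrative check (not in the original tutorial): getScore is a similarity-weighted average,
# e.g. a history of [1, 0] with similarities [0.8, 0.2] gives 0.8 / 1.0 = 0.8.
print(getScore(pd.Series([1, 0]), pd.Series([0.8, 0.2])))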
# Create a place holder matrix for similarities, and fill in the user name column
data_sims = pd.DataFrame(index=data.index,columns=data.columns)
data_sims.ix[:,:1] = data.ix[:,:1]
data_sims.head(3).ix[:, :10]
#Loop through all rows, skip the user column, and fill with similarity scores
for i in range(0, len(data_sims.index)):
for j in range(1,len(data_sims.columns)):
user = data_sims.index[i]
product = data_sims.columns[j]
if data.ix[i][j] == 1:
data_sims.ix[i][j] = 0
else:
product_top_names = data_neighbours.ix[product][1:10]
product_top_sims = data_ibs.ix[product].order(ascending=False)[1:10]
user_purchases = data_germany.ix[user, product_top_names]
data_sims.ix[i][j] = getScore(user_purchases, product_top_sims)
# Get the top songs
data_recommend = pd.DataFrame(index=data_sims.index, columns=['user','1','2','3','4','5','6'])
data_recommend.ix[0:,0] = data_sims.ix[:,0]
# Instead of top song scores, we want to see names
for i in range(0,len(data_sims.index)):
data_recommend.ix[i,1:] = data_sims.ix[i,:].order(ascending=False).ix[1:7,].index.transpose()
# Print a sample
print data_recommend.ix[:4,:5]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Item Based Collaborative Filtering
Step2: Now we can start to look at filling in similarities. We will use Cosine Similarities. In Python, the Scipy library has a function that allows us to do this without customization.
Step3: With our similarity matrix filled out we can look for each item's “neighbour” by looping through ‘data_ibs’, sorting each column in descending order, and grabbing the name of each of the top 10 songs.
Step4: User Based Collaborative Filtering
Step5: The rest is a matter of applying this function to the data frames in the right way.
Step6: We now loop through the rows and columns filling in empty spaces with similarity scores.
Step7: We can now produce a matrix of User Based recommendations as follows
Step8: Instead of having the matrix filled with similarity scores, however, it would be nice to see the song names.
|
14,490 | <ASSISTANT_TASK:>
Python Code:
# Author: Alexandre Barachant <alexandre.barachant@gmail.com>
# Jean-Remi King <jeanremi.king@gmail.com>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import Epochs
from mne.decoding import SPoC
from mne.datasets.fieldtrip_cmc import data_path
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict
# Define parameters
fname = data_path() + '/SubjectCMC.ds'
raw = mne.io.read_raw_ctf(fname)
raw.crop(50., 250.) # crop for memory purposes
# Filter muscular activity to only keep high frequencies
emg = raw.copy().pick_channels(['EMGlft']).load_data()
emg.filter(20., None, fir_design='firwin')
# Filter MEG data to focus on beta band
raw.pick_types(meg=True, ref_meg=True, eeg=False, eog=False).load_data()
raw.filter(15., 30., fir_design='firwin')
# Build epochs as sliding windows over the continuous raw file
events = mne.make_fixed_length_events(raw, id=1, duration=.250)
# Epoch length is 1.5 second
meg_epochs = Epochs(raw, events, tmin=0., tmax=1.500, baseline=None,
detrend=1, decim=8)
emg_epochs = Epochs(emg, events, tmin=0., tmax=1.500, baseline=None)
# Prepare classification
X = meg_epochs.get_data()
y = emg_epochs.get_data().var(axis=2)[:, 0] # target is EMG power
# Classification pipeline with SPoC spatial filtering and Ridge Regression
spoc = SPoC(n_components=2, log=True, reg='oas', rank='full')
clf = make_pipeline(spoc, Ridge())
# Define a two fold cross-validation
cv = KFold(n_splits=2, shuffle=False)
# Run cross validaton
y_preds = cross_val_predict(clf, X, y, cv=cv)
# Plot the True EMG power and the EMG power predicted from MEG data
fig, ax = plt.subplots(1, 1, figsize=[10, 4])
times = raw.times[meg_epochs.events[:, 0] - raw.first_samp]
ax.plot(times, y_preds, color='b', label='Predicted EMG')
ax.plot(times, y, color='r', label='True EMG')
ax.set_xlabel('Time (s)')
ax.set_ylabel('EMG Power')
ax.set_title('SPoC MEG Predictions')
plt.legend()
mne.viz.tight_layout()
plt.show()
spoc.fit(X, y)
spoc.plot_patterns(meg_epochs.info)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot the contributions to the detected components (i.e., the forward model)
|
14,491 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cams', 'sandbox-1', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
14,492 | <ASSISTANT_TASK:>
Python Code:
def number_to_words(n):
    """Given a number n between 1-1000 inclusive return a list of words for the number."""
x = []
a = {1:'one',2:'two',3:'three',4:'four',5:'five',6:'six',7:'seven',8:'eight',9:'nine',10:'ten',
11:'eleven',12:'twelve',13:'thirteen',14:'fourteen',15:'fifteen',16:'sixteen',17:'seventeen',18:'eighteen'
,19:'nineteen',20:'twenty',30:'thirty',40:'forty',50:'fifty',60:'sixty',70:'seventy',80:'eighty',90:'ninety'}
b = 'hundred'
c = 'thousand'
d = 'and'
if n <= 20 and n >= 1:
x.append(a[n])
return x
elif n > 20 and n < 100:
if n % 10 == 0:
x.append(a[n])
return x
else:
y = str(n)
x.append(a[int(y[0] + '0')])
x.append(a[int(y[1])])
return x
elif n >= 100 and n < 1000:
if n % 100 == 0:
y = str(n)
x.append(a[int(y[0])])
x.append(b)
return x
elif n % 10 == 0:
y = str(n)
x.append(a[int(y[0])])
x.append(b)
x.append(d)
x.append(a[int(y[1]+'0')])
return x
elif str(n)[1] == '0':
y = str(n)
x.append(a[int(y[0])])
x.append(b)
x.append(d)
x.append(a[int(y[2])])
return x
elif str(n)[1] == '1':
y = str(n)
x.append(a[int(y[0])])
x.append(b)
x.append(d)
x.append(a[int(y[1]+y[2])])
return x
else:
y = str(n)
x.append(a[int(y[0])])
x.append(b)
x.append(d)
x.append(a[int(y[1]+'0')])
x.append(a[int(y[2])])
return x
else:
x.append(a[1])
x.append(c)
return x
assert number_to_words(16) == ['sixteen']
assert number_to_words(507) == ['five','hundred','and','seven']
assert number_to_words(735) == ['seven', 'hundred', 'and', 'thirty', 'five']
assert len(''.join(number_to_words(342))) == 23
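# Extra illustrative check (not part of the original assignment): the problem statement's
# other worked example, 115 -> "one hundred and fifteen", which uses 20 letters.
assert len(''.join(number_to_words(115))) == 20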
assert True # use this for grading the number_to_words tests.
def count_letters(n):
    """Count the number of letters used to write out the words for 1-n inclusive."""
z = 0
x = range(1,n+1)
for m in x:
j = number_to_words(m)
k = len(''.join(j))
z += k
return z
assert count_letters(6) == 22
assert True # use this for grading the count_letters tests.
count_letters(1000)
assert True # use this for grading the answer to the original question.
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Project Euler
Step2: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
Step4: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.
Step5: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
Step6: Finally use your count_letters function to solve the original question.
|
14,493 | <ASSISTANT_TASK:>
Python Code:
#@title ### Install the Graph Nets library on this Colaboratory runtime { form-width: "60%", run: "auto"}
#@markdown <br>1. Connect to a local or hosted Colaboratory runtime by clicking the **Connect** button at the top-right.<br>2. Choose "Yes" below to install the Graph Nets library on the runtime machine with the correct dependencies. Note, this works both with local and hosted Colaboratory runtimes.
install_graph_nets_library = "No" #@param ["Yes", "No"]
if install_graph_nets_library.lower() == "yes":
print("Installing Graph Nets library and dependencies:")
print("Output message from command:\n")
!pip install graph_nets "dm-sonnet<2" "tensorflow_probability<0.9"
else:
print("Skipping installation of Graph Nets library")
#@title #### (Imports)
%tensorflow_version 1.x # For Google Colab only.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from graph_nets import blocks
from graph_nets import graphs
from graph_nets import modules
from graph_nets import utils_np
from graph_nets import utils_tf
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import sonnet as snt
import tensorflow as tf
# Global features for graph 0.
globals_0 = [1., 2., 3.]
# Node features for graph 0.
nodes_0 = [[10., 20., 30.], # Node 0
[11., 21., 31.], # Node 1
[12., 22., 32.], # Node 2
[13., 23., 33.], # Node 3
[14., 24., 34.]] # Node 4
# Edge features for graph 0.
edges_0 = [[100., 200.], # Edge 0
[101., 201.], # Edge 1
[102., 202.], # Edge 2
[103., 203.], # Edge 3
[104., 204.], # Edge 4
[105., 205.]] # Edge 5
# The sender and receiver nodes associated with each edge for graph 0.
senders_0 = [0, # Index of the sender node for edge 0
1, # Index of the sender node for edge 1
1, # Index of the sender node for edge 2
2, # Index of the sender node for edge 3
2, # Index of the sender node for edge 4
3] # Index of the sender node for edge 5
receivers_0 = [1, # Index of the receiver node for edge 0
2, # Index of the receiver node for edge 1
3, # Index of the receiver node for edge 2
0, # Index of the receiver node for edge 3
3, # Index of the receiver node for edge 4
4] # Index of the receiver node for edge 5
# Global features for graph 1.
globals_1 = [1001., 1002., 1003.]
# Node features for graph 1.
nodes_1 = [[1010., 1020., 1030.], # Node 0
[1011., 1021., 1031.]] # Node 1
# Edge features for graph 1.
edges_1 = [[1100., 1200.], # Edge 0
[1101., 1201.], # Edge 1
[1102., 1202.], # Edge 2
[1103., 1203.]] # Edge 3
# The sender and receiver nodes associated with each edge for graph 1.
senders_1 = [0, # Index of the sender node for edge 0
0, # Index of the sender node for edge 1
1, # Index of the sender node for edge 2
1] # Index of the sender node for edge 3
receivers_1 = [0, # Index of the receiver node for edge 0
1, # Index of the receiver node for edge 1
0, # Index of the receiver node for edge 2
0] # Index of the receiver node for edge 3
data_dict_0 = {
"globals": globals_0,
"nodes": nodes_0,
"edges": edges_0,
"senders": senders_0,
"receivers": receivers_0
}
data_dict_1 = {
"globals": globals_1,
"nodes": nodes_1,
"edges": edges_1,
"senders": senders_1,
"receivers": receivers_1
}
data_dict_list = [data_dict_0, data_dict_1]
graphs_tuple = utils_np.data_dicts_to_graphs_tuple(data_dict_list)
graphs_nx = utils_np.graphs_tuple_to_networkxs(graphs_tuple)
_, axs = plt.subplots(ncols=2, figsize=(6, 3))
for iax, (graph_nx, ax) in enumerate(zip(graphs_nx, axs)):
nx.draw(graph_nx, ax=ax)
ax.set_title("Graph {}".format(iax))
def print_graphs_tuple(graphs_tuple):
print("Shapes of `GraphsTuple`'s fields:")
print(graphs_tuple.map(lambda x: x if x is None else x.shape, fields=graphs.ALL_FIELDS))
print("\nData contained in `GraphsTuple`'s fields:")
print("globals:\n{}".format(graphs_tuple.globals))
print("nodes:\n{}".format(graphs_tuple.nodes))
print("edges:\n{}".format(graphs_tuple.edges))
print("senders:\n{}".format(graphs_tuple.senders))
print("receivers:\n{}".format(graphs_tuple.receivers))
print("n_node:\n{}".format(graphs_tuple.n_node))
print("n_edge:\n{}".format(graphs_tuple.n_edge))
print_graphs_tuple(graphs_tuple)
recovered_data_dict_list = utils_np.graphs_tuple_to_data_dicts(graphs_tuple)
# Number of nodes
n_node = 3
# Three edges connecting the nodes in a cycle
senders = [0, 1, 2] # Indices of nodes sending the edges
receivers = [1, 2, 0] # Indices of nodes receiving the edges
data_dict = {
"n_node": n_node,
"senders": senders,
"receivers": receivers,
}
graphs_tuple = utils_np.data_dicts_to_graphs_tuple([data_dict])
# Node features.
nodes = [[10.], # Node 0
[11.], # Node 1
[12.]] # Node 2
data_dict = {
"nodes": nodes,
}
graphs_tuple = utils_np.data_dicts_to_graphs_tuple([data_dict])
# We can visualize the graph using networkx.
graphs_nx = utils_np.graphs_tuple_to_networkxs(graphs_tuple)
ax = plt.figure(figsize=(3, 3)).gca()
nx.draw(graphs_nx[0], ax=ax)
_ = ax.set_title("Graph without edges")
graph_nx = nx.OrderedMultiDiGraph()
# Globals.
graph_nx.graph["features"] = np.array([0.6, 0.7, 0.8])
# Nodes.
graph_nx.add_node(0, features=np.array([0.3, 1.3]))
graph_nx.add_node(1, features=np.array([0.4, 1.4]))
graph_nx.add_node(2, features=np.array([0.5, 1.5]))
graph_nx.add_node(3, features=np.array([0.6, 1.6]))
# Edges.
graph_nx.add_edge(0, 1, features=np.array([3.6, 3.7]))
graph_nx.add_edge(2, 0, features=np.array([5.6, 5.7]))
graph_nx.add_edge(3, 0, features=np.array([6.6, 6.7]))
ax = plt.figure(figsize=(3, 3)).gca()
nx.draw(graph_nx, ax=ax)
ax.set_title("Graph")
graphs_tuple = utils_np.networkxs_to_graphs_tuple([graph_nx])
print_graphs_tuple(graphs_tuple)
#@title #### (Define functions for generating and plotting graphs)
GLOBAL_SIZE = 4
NODE_SIZE = 5
EDGE_SIZE = 6
def get_graph_data_dict(num_nodes, num_edges):
return {
"globals": np.random.rand(GLOBAL_SIZE).astype(np.float32),
"nodes": np.random.rand(num_nodes, NODE_SIZE).astype(np.float32),
"edges": np.random.rand(num_edges, EDGE_SIZE).astype(np.float32),
"senders": np.random.randint(num_nodes, size=num_edges, dtype=np.int32),
"receivers": np.random.randint(num_nodes, size=num_edges, dtype=np.int32),
}
graph_3_nodes_4_edges = get_graph_data_dict(num_nodes=3, num_edges=4)
graph_5_nodes_8_edges = get_graph_data_dict(num_nodes=5, num_edges=8)
graph_7_nodes_13_edges = get_graph_data_dict(num_nodes=7, num_edges=13)
graph_9_nodes_25_edges = get_graph_data_dict(num_nodes=9, num_edges=25)
graph_dicts = [graph_3_nodes_4_edges, graph_5_nodes_8_edges,
graph_7_nodes_13_edges, graph_9_nodes_25_edges]
def plot_graphs_tuple_np(graphs_tuple):
networkx_graphs = utils_np.graphs_tuple_to_networkxs(graphs_tuple)
num_graphs = len(networkx_graphs)
_, axes = plt.subplots(1, num_graphs, figsize=(5*num_graphs, 5))
if num_graphs == 1:
axes = axes,
for graph, ax in zip(networkx_graphs, axes):
plot_graph_networkx(graph, ax)
def plot_graph_networkx(graph, ax, pos=None):
node_labels = {node: "{:.3g}".format(data["features"][0])
for node, data in graph.nodes(data=True)
if data["features"] is not None}
edge_labels = {(sender, receiver): "{:.3g}".format(data["features"][0])
for sender, receiver, data in graph.edges(data=True)
if data["features"] is not None}
global_label = ("{:.3g}".format(graph.graph["features"][0])
if graph.graph["features"] is not None else None)
if pos is None:
pos = nx.spring_layout(graph)
nx.draw_networkx(graph, pos, ax=ax, labels=node_labels)
if edge_labels:
nx.draw_networkx_edge_labels(graph, pos, edge_labels, ax=ax)
if global_label:
plt.text(0.05, 0.95, global_label, transform=ax.transAxes)
ax.yaxis.set_visible(False)
ax.xaxis.set_visible(False)
return pos
def plot_compare_graphs(graphs_tuples, labels):
pos = None
num_graphs = len(graphs_tuples)
_, axes = plt.subplots(1, num_graphs, figsize=(5*num_graphs, 5))
if num_graphs == 1:
axes = axes,
pos = None
for name, graphs_tuple, ax in zip(labels, graphs_tuples, axes):
graph = utils_np.graphs_tuple_to_networkxs(graphs_tuple)[0]
pos = plot_graph_networkx(graph, ax, pos=pos)
ax.set_title(name)
tf.reset_default_graph()
graphs_tuple_tf = utils_tf.data_dicts_to_graphs_tuple(graph_dicts)
with tf.Session() as sess:
graphs_tuple_np = sess.run(graphs_tuple_tf)
plot_graphs_tuple_np(graphs_tuple_np)
# If the GraphsTuple has None's we need to make use of `utils_tf.make_runnable_in_session`.
tf.reset_default_graph()
graphs_tuple_tf = utils_tf.data_dicts_to_graphs_tuple(graph_dicts)
# Removing the edges from a graph.
graph_with_nones = graphs_tuple_tf.replace(
edges=None, senders=None, receivers=None, n_edge=graphs_tuple_tf.n_edge*0)
runnable_in_session_graph = utils_tf.make_runnable_in_session(graph_with_nones)
with tf.Session() as sess:
graphs_tuple_np = sess.run(runnable_in_session_graph)
plot_graphs_tuple_np(graphs_tuple_np)
tf.reset_default_graph()
# Create a placeholder using the first graph in the list as template.
graphs_tuple_ph = utils_tf.placeholders_from_data_dicts(graph_dicts[0:1])
with tf.Session() as sess:
# Feeding a batch of graphs with different sizes, and different
# numbers of nodes and edges through the placeholder.
feed_dict = utils_tf.get_feed_dict(
graphs_tuple_ph, utils_np.data_dicts_to_graphs_tuple(graph_dicts[1:]))
graphs_tuple_np = sess.run(graphs_tuple_ph, feed_dict)
plot_graphs_tuple_np(graphs_tuple_np)
# If the GraphsTuple has None's we need to make use of `utils_tf.make_runnable_in_session`.
tf.reset_default_graph()
graphs_tuple_tf = utils_tf.data_dicts_to_graphs_tuple(graph_dicts)
first_graph_tf = utils_tf.get_graph(graphs_tuple_tf, 0)
three_graphs_tf = utils_tf.get_graph(graphs_tuple_tf, slice(1, 4))
with tf.Session() as sess:
first_graph_np = sess.run(first_graph_tf)
three_graphs_np = sess.run(three_graphs_tf)
plot_graphs_tuple_np(first_graph_np)
plot_graphs_tuple_np(three_graphs_np)
# Concatenating along the batch dimension
tf.reset_default_graph()
graphs_tuple_1_tf = utils_tf.data_dicts_to_graphs_tuple(graph_dicts[0:1])
graphs_tuple_2_tf = utils_tf.data_dicts_to_graphs_tuple(graph_dicts[1:])
graphs_tuple_tf = utils_tf.concat([graphs_tuple_1_tf, graphs_tuple_2_tf], axis=0)
with tf.Session() as sess:
graphs_tuple_np = sess.run(graphs_tuple_tf)
plot_graphs_tuple_np(graphs_tuple_np)
tf.reset_default_graph()
OUTPUT_EDGE_SIZE = 10
OUTPUT_NODE_SIZE = 11
OUTPUT_GLOBAL_SIZE = 12
graph_network = modules.GraphNetwork(
edge_model_fn=lambda: snt.Linear(output_size=OUTPUT_EDGE_SIZE),
node_model_fn=lambda: snt.Linear(output_size=OUTPUT_NODE_SIZE),
global_model_fn=lambda: snt.Linear(output_size=OUTPUT_GLOBAL_SIZE))
input_graphs = utils_tf.data_dicts_to_graphs_tuple(graph_dicts)
output_graphs = graph_network(input_graphs)
print("Output edges size: {}".format(output_graphs.edges.shape[-1])) # Equal to OUTPUT_EDGE_SIZE
print("Output nodes size: {}".format(output_graphs.nodes.shape[-1])) # Equal to OUTPUT_NODE_SIZE
print("Output globals size: {}".format(output_graphs.globals.shape[-1])) # Equal to OUTPUT_GLOBAL_SIZE
tf.reset_default_graph()
input_graphs = utils_tf.data_dicts_to_graphs_tuple(graph_dicts)
graph_network = modules.GraphNetwork(
edge_model_fn=lambda: snt.Linear(output_size=EDGE_SIZE),
node_model_fn=lambda: snt.Linear(output_size=NODE_SIZE),
global_model_fn=lambda: snt.Linear(output_size=GLOBAL_SIZE))
num_recurrent_passes = 3
previous_graphs = input_graphs
for unused_pass in range(num_recurrent_passes):
previous_graphs = graph_network(previous_graphs)
output_graphs = previous_graphs
def zeros_graph(sample_graph, edge_size, node_size, global_size):
zeros_graphs = sample_graph.replace(nodes=None, edges=None, globals=None)
zeros_graphs = utils_tf.set_zero_edge_features(zeros_graphs, edge_size)
zeros_graphs = utils_tf.set_zero_node_features(zeros_graphs, node_size)
zeros_graphs = utils_tf.set_zero_global_features(zeros_graphs, global_size)
return zeros_graphs
tf.reset_default_graph()
graph_network = modules.GraphNetwork(
edge_model_fn=lambda: snt.Linear(output_size=OUTPUT_EDGE_SIZE),
node_model_fn=lambda: snt.Linear(output_size=OUTPUT_NODE_SIZE),
global_model_fn=lambda: snt.Linear(output_size=OUTPUT_GLOBAL_SIZE))
input_graphs = utils_tf.data_dicts_to_graphs_tuple(graph_dicts)
initial_state = zeros_graph(
input_graphs, OUTPUT_EDGE_SIZE, OUTPUT_NODE_SIZE, OUTPUT_GLOBAL_SIZE)
num_recurrent_passes = 3
current_state = initial_state
for unused_pass in range(num_recurrent_passes):
input_and_state_graphs = utils_tf.concat(
[input_graphs, current_state], axis=1)
current_state = graph_network(input_and_state_graphs)
output_graphs = current_state
tf.reset_default_graph()
graphs_tuple = utils_tf.data_dicts_to_graphs_tuple([data_dict_0])
updated_broadcast_globals_to_nodes = graphs_tuple.replace(
nodes=blocks.broadcast_globals_to_nodes(graphs_tuple))
updated_broadcast_globals_to_edges = graphs_tuple.replace(
edges=blocks.broadcast_globals_to_edges(graphs_tuple))
updated_broadcast_sender_nodes_to_edges = graphs_tuple.replace(
edges=blocks.broadcast_sender_nodes_to_edges(graphs_tuple))
updated_broadcast_receiver_nodes_to_edges = graphs_tuple.replace(
edges=blocks.broadcast_receiver_nodes_to_edges(graphs_tuple))
with tf.Session() as sess:
output_graphs = sess.run([
graphs_tuple,
updated_broadcast_globals_to_nodes,
updated_broadcast_globals_to_edges,
updated_broadcast_sender_nodes_to_edges,
updated_broadcast_receiver_nodes_to_edges])
plot_compare_graphs(output_graphs, labels=[
"Input graph",
"blocks.broadcast_globals_to_nodes",
"blocks.broadcast_globals_to_edges",
"blocks.broadcast_sender_nodes_to_edges",
"blocks.broadcast_receiver_nodes_to_edges"])
tf.reset_default_graph()
graphs_tuple = utils_tf.data_dicts_to_graphs_tuple([data_dict_0])
updated_graphs_tuple = graphs_tuple.replace(
edges=(graphs_tuple.edges[:, :1] +
blocks.broadcast_receiver_nodes_to_edges(graphs_tuple)[:, :1] +
blocks.broadcast_sender_nodes_to_edges(graphs_tuple)[:, :1] +
blocks.broadcast_globals_to_edges(graphs_tuple)[:, :1]))
with tf.Session() as sess:
output_graphs = sess.run([
graphs_tuple,
updated_graphs_tuple])
plot_compare_graphs(output_graphs, labels=[
"Input graph",
"Updated graph"])
tf.reset_default_graph()
graphs_tuple = utils_tf.data_dicts_to_graphs_tuple([data_dict_0])
reducer = tf.unsorted_segment_sum
updated_edges_to_globals = graphs_tuple.replace(
globals=blocks.EdgesToGlobalsAggregator(reducer=reducer)(graphs_tuple))
updated_nodes_to_globals = graphs_tuple.replace(
globals=blocks.NodesToGlobalsAggregator(reducer=reducer)(graphs_tuple))
updated_sent_edges_to_nodes = graphs_tuple.replace(
nodes=blocks.SentEdgesToNodesAggregator(reducer=reducer)(graphs_tuple))
updated_received_edges_to_nodes = graphs_tuple.replace(
nodes=blocks.ReceivedEdgesToNodesAggregator(reducer=reducer)(graphs_tuple))
with tf.Session() as sess:
output_graphs = sess.run([
graphs_tuple,
updated_edges_to_globals,
updated_nodes_to_globals,
updated_sent_edges_to_nodes,
updated_received_edges_to_nodes])
plot_compare_graphs(output_graphs, labels=[
"Input graph",
"blocks.EdgesToGlobalsAggregator",
"blocks.NodesToGlobalsAggregator",
"blocks.SentEdgesToNodesAggregator",
"blocks.ReceivedEdgesToNodesAggregator"])
tf.reset_default_graph()
edge_block = blocks.EdgeBlock(
edge_model_fn=lambda: snt.Linear(output_size=10))
input_graphs = utils_tf.data_dicts_to_graphs_tuple(graph_dicts)
output_graphs = edge_block(input_graphs)
print(("Output edges size: {}".format(output_graphs.edges.shape[-1])))
tf.reset_default_graph()
node_block = blocks.NodeBlock(
node_model_fn=lambda: snt.Linear(output_size=15))
input_graphs = utils_tf.data_dicts_to_graphs_tuple(graph_dicts)
output_graphs = node_block(input_graphs)
print(("Output nodes size: {}".format(output_graphs.nodes.shape[-1])))
tf.reset_default_graph()
global_block = blocks.GlobalBlock(
global_model_fn=lambda: snt.Linear(output_size=20))
input_graphs = utils_tf.data_dicts_to_graphs_tuple(graph_dicts)
output_graphs = global_block(input_graphs)
print(("Output globals size: {}".format(output_graphs.globals.shape[-1])))
tf.reset_default_graph()
graph_network = modules.GraphNetwork(
edge_model_fn=lambda: snt.Linear(output_size=10),
node_model_fn=lambda: snt.Linear(output_size=15),
global_model_fn=lambda: snt.Linear(output_size=20))
input_graphs = utils_tf.data_dicts_to_graphs_tuple(graph_dicts)
output_graphs = graph_network(input_graphs)
for var in graph_network.variables:
print(var)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install dependencies locally
Step2: Tutorial of the Graph Nets library
Step3: How to represent graphs as a graphs.GraphsTuple
Step4: Visualize the graphs using networkx
Step5: Print the GraphsTuple fields
Step6: Back to data dicts
Step7: Ways to represent different data sources with a graph
Step8: Set (ie. graph without edges)
Step9: Creating a GraphsTuple from a networkx graph
Step10: Working with tensor GraphsTuple's
Step11: Creating a constant tensor GraphsTuple from data dicts
Step12: GraphsTuple placeholders
Step13: A similar utility is provided to work with networkx graphs
Step14: Concatenating multiple GraphsTuple instances
Step15: Similarly, we can concatenate along feature dimensions, assuming all of the batches to be concatenated have the same graph structure/connectivity.
Step16: Feeding a GraphsTuple to a Graph Net
Step17: Connecting a GraphNetwork recurrently
Step18: Alternatively, we can process the input graph multiple times with a graph state that gets updated recurrently.
Step19: Similarly, recurrent modules with gating, such as an LSTM or GRU, can be applied on the edges, nodes, and globals of the state and input graphs separately.
Step20: We can easily use broadcasters to, for example, set the value of each edge to be the sum of the first feature element of the edge, of its sender and receiver nodes, and of the globals (as in the code above).
Step21: Aggregators
Step22: blocks.EdgeBlock
Step23: blocks.NodeBlock
Step24: blocks.GlobalBlock
Step25: Block compositionality
|
14,494 | <ASSISTANT_TASK:>
Python Code:
from halomod import AngularCF
import halomod
halomod.__version__
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
acf = AngularCF(z=0.475, zmin=0.45, zmax=0.5)
plt.plot(acf.theta * 180/np.pi, acf.angular_corr_gal)
plt.xscale('log')
plt.yscale('log')
plt.xlabel(r"$\theta$ / deg")
plt.ylabel('Angular CF')
acf.mean_tracer_den * 1e4
acf.hod_model = 'Zheng05'
acf.hod_params = {'M_min': 12.98, 'M_0':-10, 'M_1':14.09, 'sig_logm':0.21, 'alpha':1.57}
acf.mean_tracer_den * 1e4, acf.bias_effective_tracer, acf.mass_effective
plt.plot(acf.theta * 180/np.pi, acf.angular_corr_gal)
plt.xscale('log')
plt.yscale('log')
plt.xlabel(r"$\theta$ / deg")
plt.ylabel('Angular CF')
r = np.logspace(-3, 2.1, 500)
plt.plot(r, acf.corr_auto_tracer_fnc(r))
plt.xscale('log')
plt.yscale('log')
from astropy.cosmology import Planck15  # imported here because the p1 lambda below uses it
from halomod.integrate_corr import angular_corr_gal
angular_corr_gal(
theta=np.array([1e-3]),
xi = acf.corr_auto_tracer_fnc,
p1=lambda x : 0.7*np.ones_like(x)/(Planck15.comoving_distance(0.5).value - Planck15.comoving_distance(0.45).value),
p_of_z=False,
zmin=0.45,
zmax=0.5,
logu_min=-6,
logu_max=2.1,
unum=1000,
znum=500
)
from astropy.cosmology import Planck15
Planck15.comoving_distance(0.45)
Planck15.comoving_distance(0.5)
np.linspace(0,1,2)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The problem illustrated
Step2: Note that this is pretty much 1000 times what Blake+ got, but does have a similar shape.
Step3: OK, this is awful. Let's try specifying the HOD more accurately
Step4: This is much closer. Now let's try the ACF
Step5: It's even worse than before!
Step6: This is of a similar magnitude to that found in Blake.
|
14,495 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
from sherpa import data
from sherpa.astro import data as astrodata
from sherpa import plot
from sherpa.astro import plot as astroplot
x1 = np.asarray([100, 200, 600, 1200])
y1 = np.asarray([2000, 2100, 1400, 3050])
d1 = data.Data1D('oned', x1, y1)
plot1 = plot.DataPlot()
plot1.prepare(d1)
plot1.plot()
plot1.plot(xlog=True, linestyle='dotted', marker='*', markerfacecolor='orange', markersize=20, color='black')
plot.DataPlot.plot_prefs
dy1 = np.asarray([100, 50, 200, 300])
d2 = data.Data1D('errors', x1, y1, dy1)
plot2 = plot.DataPlot()
plot2.prepare(d2)
plot2.plot()
plot2.plot(capsize=4)
xlo2 = np.asarray([0.1, 0.2, 0.4, 0.8, 1.5])
xhi2 = np.asarray([0.2, 0.4, 0.6, 1.1, 2.0])
y2 = np.asarray([10, 12, 3, 0, 4])
data3 = data.Data1DInt('int1', xlo2, xhi2, y2)
plot3 = plot.DataHistogramPlot()
plot3.prepare(data3)
plot3.plot(xlog=True)
plot3.plot(xlog=True, linestyle='solid')
plot.DataHistogramPlot.histo_prefs
from sherpa.stats import Chi2DataVar
plot4 = plot.DataHistogramPlot()
plot4.prepare(data3, stat=Chi2DataVar())
plot4.plot(linestyle='dashed', marker=None, ecolor='orange', capsize=4)
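# Illustrative aside (not part of the original example): Chi2DataVar-style errors for these
# counts are simply sqrt(y), so the bin with zero counts gets a zero-length error bar.
print(np.sqrt(y2))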
energies = np.arange(0.1, 11, 0.01)
elo = energies[:-1]
ehi = energies[1:]
arf = 100 * np.ones_like(elo)
arf[elo < 4] = 200
darf = astrodata.DataARF('arf', elo, ehi, arf)
aplot = astroplot.ARFPlot()
aplot.prepare(darf)
splot = plot.SplitPlot()
splot.addplot(aplot)
splot.addplot(aplot, xlog=True)
splot.plot_prefs
splot.reset()
splot.plot_prefs['hspace'] = 0.6
splot.addplot(aplot)
splot.addplot(aplot, xlog=True)
chans = np.arange(1, len(elo) + 1, dtype=np.int16)
counts = 5 + 5 * np.sin(elo * 4)
counts = counts.astype(np.int)
dpha = astrodata.DataPHA('pha', chans, counts)
pplot = astroplot.DataPHAPlot()
pplot.prepare(dpha)
pplot.plot()
dpha.set_arf(darf)
dpha.set_analysis('energy')
pplot.prepare(dpha)
pplot.plot(linestyle='solid', marker=None)
dpha.group_bins(20)
pplot.prepare(dpha, stat=Chi2DataVar())
pplot.plot(xerrorbars=True, yerrorbars=True)
pplot.overplot(linestyle='solid', alpha=0.5, marker=None)
from sherpa.models.basic import Sin
from sherpa.astro.instrument import Response1D
mdl = Sin()
mdl.period = 4
# Note that the response information - in this case the ARF and channel-to-energy mapping - needs
# to be applied to the model, which is done by the Response1D class in this example.
#
rsp = Response1D(dpha)
full_model = rsp(mdl)
print(full_model)
mplot = astroplot.ModelHistogram()
mplot.prepare(dpha, full_model)
mplot.plot()
mplot2 = astroplot.ModelPHAHistogram()
mplot2.prepare(dpha, full_model)
mplot.plot()
mplot2.overplot()
np.random.seed(1273)
# I've never used the Wald distribution before, so let's see how it looks...
#
z1 = np.random.wald(1000, 20, size=1000)
z2 = np.random.wald(1000, 2000, size=1000)
splot = plot.ScatterPlot()
splot.prepare(z1, z2, xlabel='z$_1$', ylabel='z$_2$', name='(z$_1$, z$_2$)')
splot.plot(xlog=True)
cplot = plot.CDFPlot()
cplot.prepare(z1, xlabel='z', name='z')
cplot.plot(xlog=True)
cplot.prepare(z2)
cplot.overplot()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: One dimensional data plots
Step2: We can have some fun with the plot options (these are a mixture of generic options, such as xlog, and ones specific to the plotting backend - which here is matplotlib - such as marker).
Step3: The plot object contains the preferences - here we look at the default plot settings. Note that some plot types have different - and even multiple - preference settings.
Step4: Error bars - here on the dependent axis - can be displayed too
Step5: Histogram-style data (with low and high edges) are handled similarly
Step6: If we want to see the data drawn "like a histogram" then we need to set the linestyle attribute
Step7: The histogram-style plots are an example of a plot using a different name for the preference settings, in this case histo_prefs
Step8: Previously we explicitly set the error values, but we can also use one of the chi-square statistics to come up with error values. In this case it's just the square-root of the data value (so, for $x \sim 1$ bin, we have an error of 0)
Step9: PHA-related plots
Step10: The preferences for the split plot has a different "flavor" to the other types
Step11: It does allow us to tweak the plot layout
Step12: A PHA, which matches the ARF, can be created (with a sinusoidal pattern just to show something different)
Step13: Adding the ARF to the data allows us to change to energy units
Step14: Grouping the data - in this case in 20-channel groups - allows us to check the "x errorbar" handling (the 'errors' here just indicate the bin width, and so match the overplotted orange line)
Step15: We can see how a model looks for this dataset - in this case a simple sinusoidal model which is multiplied by the ARF (shown earlier), and so is not going to match the data.
Step16: Note that the ModelHistogram class does not use the grouping of the PHA dataset, so it shows the model evaluated per channel
Step17: The discontinuity at 4 keV is because of the step function in the ARF (200 cm$^2$ below this energy and 100 cm$^2$ above it).
Step18: Object-less plots
Step19: and cumulative plots
|
14,496 | <ASSISTANT_TASK:>
Python Code:
#$HIDE_INPUT$
import geopandas as gpd
import pandas as pd
# Load a GeoDataFrame containing regions in Ghana
regions = gpd.read_file("../input/geospatial-learn-course-data/ghana/ghana/Regions/Map_of_Regions_in_Ghana.shp")
print(regions.crs)
# Create a DataFrame with health facilities in Ghana
facilities_df = pd.read_csv("../input/geospatial-learn-course-data/ghana/ghana/health_facilities.csv")
# Convert the DataFrame to a GeoDataFrame
facilities = gpd.GeoDataFrame(facilities_df, geometry=gpd.points_from_xy(facilities_df.Longitude, facilities_df.Latitude))
# Set the coordinate reference system (CRS) to EPSG 4326
facilities.crs = {'init': 'epsg:4326'}
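# (Note: recent GeoPandas/pyproj versions prefer the string form, e.g. facilities.crs = "EPSG:4326")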
# View the first five rows of the GeoDataFrame
facilities.head()
# Create a map
ax = regions.plot(figsize=(8,8), color='whitesmoke', linestyle=':', edgecolor='black')
facilities.to_crs(epsg=32630).plot(markersize=1, ax=ax)
# The "Latitude" and "Longitude" columns are unchanged
facilities.to_crs(epsg=32630).head()
# Change the CRS to EPSG 4326
regions.to_crs("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs").head()
# Get the x-coordinate of each point
facilities.geometry.head().x
# Calculate the area (in square meters) of each polygon in the GeoDataFrame
regions.loc[:, "AREA"] = regions.geometry.area / 10**6
print("Area of Ghana: {} square kilometers".format(regions.AREA.sum()))
print("CRS:", regions.crs)
regions.head()
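# Illustrative sketch (not part of the original exercise): the length of a LineString geometry
# is exposed via the .length attribute, analogous to .area for polygons and .x/.y for points.
from shapely.geometry import LineString
gpd.GeoSeries([LineString([(0, 0), (3, 4)])]).length  # -> 5.0, in the units of the (here undefined) CRS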
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setting the CRS
Step2: How do you interpret that?
Step3: In the code cell above, to create a GeoDataFrame from a CSV file, we needed to use both Pandas and GeoPandas
Step4: The to_crs() method modifies only the "geometry" column
Step5: In case the EPSG code is not available in GeoPandas, we can change the CRS with what's known as the "proj4 string" of the CRS. For instance, the proj4 string to convert to latitude/longitude coordinates is as follows
Step6: Attributes of geometric objects
Step7: And, you can get the length of a LineString from the length attribute.
|
14,497 | <ASSISTANT_TASK:>
Python Code:
from edward.models import Bernoulli, Beta, Empirical, Uniform
N = 100
def build_fair_dataset(N):
pheads = tf.constant(0.5)
c = Bernoulli(probs=pheads, sample_shape = N)
return sess.run([pheads, c])
def build_unfair_dataset(N):
pheads = tf.constant(0.05)
c = Bernoulli(probs=pheads, sample_shape = N)
return sess.run([pheads, c])
def build_dataset(N):
pheads = Uniform(low=0.0,high=1.0)
c = Bernoulli(probs=pheads, sample_shape = N)
return sess.run([pheads, c])
x = tf.range(-0.2, 1.2, 0.001)
plt.plot(*sess.run([x, Uniform(low=0.0,high=1.0).prob(x)]));
plt.ylim((-0.2,1.2))
plt.title('Uniform distribution')
pheads_true,c_train = build_fair_dataset(N)
pheads_true
c_train
sum(c_train == 0)
sum(c_train == 1)
# declaring priors
pheads_fair = Beta(concentration1=1000.0, concentration0=1000.0) # blue
pheads_unfair = Beta(concentration1=0.1, concentration0=0.1) # green
pheads_unknown = Beta(concentration1 = 1.0, concentration0=1.0)
x = tf.range(0.0,1.0,0.001)
plt.plot(*sess.run([x, pheads_fair.prob(x)]));
plt.plot(*sess.run([x, pheads_unfair.prob(x)]));
plt.plot(*sess.run([x, pheads_unknown.prob(x)]));
plt.axvline(x=pheads_true);
# Forward model
pheads = pheads_unknown
c = Bernoulli(probs=pheads, sample_shape=N)
# inference
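# The Beta prior is conjugate to the Bernoulli likelihood, so the complete conditional below is itself
# a Beta distribution, with concentrations updated by the observed counts of heads and tails.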
pheads_cond = ed.complete_conditional(pheads)
pheads_post = ed.copy(pheads_cond, {c: c_train})
sess.run({key:val for key, val in six.iteritems(pheads_post.parameters) if isinstance(val,tf.Tensor)})
# criticism
mean, stddev = sess.run([pheads_post.mean(),pheads_post.stddev()])
print(" exact posterior mean: " + str(mean))
print("exact posterior std" + str(stddev))
x = tf.range(0.0,1.0,0.001)
plt.plot(*sess.run([x,pheads.prob(x)]));
plt.plot(*sess.run([x,pheads_post.prob(x)]));
plt.axvline(x=pheads_true);
pheads_true,c_train = build_fair_dataset(100)
fig = plt.figure()
ax = plt.axes(xlim=(-0.05,1.05), ylim=(-1.0,11.0))
def go(pheads_prior, sample_shape, c_train, recursion=1):
# model
c = Bernoulli(probs=pheads_prior,
sample_shape=sample_shape)
# INFERENCE
pheads_cond = ed.complete_conditional(pheads_prior)
pheads_post = ed.copy(pheads_cond,{c:c_train[:sample_shape]})
# CRITICISM
ax.plot(*sess.run([x,pheads_post.prob(x)]));
print("finished recursion "+str(recursion))
recursion += 1
# RECURSION
if len(c_train[sample_shape:]) >= sample_shape:
go(pheads_post, sample_shape, c_train[sample_shape:],recursion)
pheads_prior = Beta(concentration1=0.1, concentration0=0.1)
ax.plot(*sess.run([x, pheads_prior.prob(x)]));
plt.axvline(x=pheads_true);
go(pheads_prior,33,c_train)
# BACKWARD MODEL
T = 10000
q_pheads = Empirical(params=tf.Variable(tf.ones([T])*0.5))
# INFERENCE
proposal_pheads = Beta(concentration1=1.0,
concentration0=1.0)
inference = ed.MetropolisHastings(latent_vars = {pheads: q_pheads},
proposal_vars={pheads: proposal_pheads},
data={c:c_train})
inference.run()
# criticism
mean, stddev = sess.run([q_pheads.mean(),q_pheads.stddev()])
print("inferred posterior mean:")
print(mean)
print("inferred posterior std:")
print(stddev)
plt.plot(q_pheads.params.eval());
plt.axhline(y=pheads_true)
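# Trace autocorrelation: lag-k autocovariance of the chain normalised by its variance (used to assess MCMC mixing)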
def lags(x):
mean = tf.reduce_mean(x)
var = tf.cast(tf.size(x) - 1, tf.float32) *tf.reduce_mean(tf.square(x-mean))
ret = tf.map_fn(lambda k:tf.cond(tf.equal(k,0),
lambda:var,
lambda:tf.reduce_sum((x[:-k] - mean) * (x[k:]-mean))),
tf.range(0,tf.size(x)),
dtype=tf.float32)
return ret / var
plt.plot(lags(q_pheads.params).eval());
x = tf.range(0.0,1.0,0.001)
plt.plot(*sess.run([x,pheads.prob(x)]));
plt.plot(*sess.run([x,pheads_cond.prob(x)],
{c:c_train}));
plt.hist(q_pheads.params.eval(),
bins=100,range=(0.0,1.0),
normed = True);
plt.axvline(x=pheads_true);
# BACKWARD MODEL
T = 10000
q_pheads = Empirical(params=tf.Variable(tf.ones([T])*0.5))
# INFERENCE
inference = ed.Gibbs(latent_vars={pheads: q_pheads},
data={c: c_train})
inference.run()
# CRITICISM
mean, stddev = sess.run([q_pheads.mean(),q_pheads.stddev()])
print('Inferred posterior mean:')
print(mean)
print('Inferred posterior std:')
print(stddev)
plt.plot(q_pheads.params.eval());
plt.axhline(y=pheads_true)
x = tf.range(0.0,1.0,0.001)
plt.plot(*sess.run([x,pheads.prob(x)]));
plt.plot(*sess.run([x,pheads_cond.prob(x)],
{c:c_train}));
plt.hist(q_pheads.params.eval(),
bins=100,range=(0.0,1.0),
normed = True);
plt.axvline(x=pheads_true);
plt.plot(lags(q_pheads.params).eval());
# BACKWARD MODEL
T = 10000 # number of empirical samples
q_pheads = Empirical(params=tf.Variable(tf.ones([T])*.5))
# INFERENCE
inference = ed.HMC(latent_vars={pheads: q_pheads},
data = {c: c_train})
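# step_size and n_steps control the leapfrog integration used by Hamiltonian Monte Carlo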
inference.run(step_size=1.0 / N, n_steps=20)
# criticism
mean, stddev = sess.run([q_pheads.mean(), q_pheads.stddev()])
print("Inferred posterior mean:")
print(mean)
print("Inferred posterior stddev")
print(stddev)
plt.plot(q_pheads.params.eval())
plt.axhline(y=pheads_true)
plt.plot(lags(q_pheads.params).eval())
x = tf.range(0.0, 1.0, 0.001)
plt.plot(*sess.run([x, pheads.prob(x)]))
plt.plot(*sess.run([x, pheads_cond.prob(x)],
{c:c_train}))
plt.hist(q_pheads.params.eval(),
bins=100, range=(0.0, 1.0),
normed=True)
plt.axvline(x=pheads_true)
# BACKWARD MODEL
q_pheads_concentration1 = tf.nn.softplus(tf.Variable(51 + tf.random_normal([])))
q_pheads_concentration0 = tf.nn.softplus(tf.Variable(51 + tf.random_normal([])))
q_pheads = Beta(concentration1=q_pheads_concentration1,
concentration0=q_pheads_concentration0)
x = tf.range(-5.0, 5.0, 0.001)
plt.plot(*sess.run([x,tf.nn.softplus(x)]))
# inference
inference = ed.KLqp(latent_vars={pheads: q_pheads},
data={c:c_train})
inference.run(n_samples=20, n_iter=1000)
sess.run({key: val for key, val in six.iteritems(q_pheads.parameters) if isinstance(val,tf.Tensor)})
plt.plot(*sess.run([x,pheads.prob(x)]))
plt.plot(*sess.run([x, pheads_cond.prob(x)],
{c:c_train}))
plt.plot(*sess.run([x, q_pheads.prob(x)]))
plt.axvline(x=pheads_true)
# criticism
mean, stddev = sess.run([q_pheads.mean(),q_pheads.stddev()])
print("Inferred posterior mean:")
print(mean)
print("Inferred posterior stddev")
print(stddev)
## A/B/... Testing
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: inference
Step2: exact solution
Step3: RECURSIVE INFERENCE
Step4: approximate inference
Step5: MCMC
Step6: MCMC
Step7: variational inference (VI)
|
14,498 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from scipy.stats import logistic
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Image # Esto es para desplegar imágenes en la libreta
def logistica(z):
Calcula la función logística para cada elemento de z
@param z: un ndarray
@return: un ndarray de las mismas dimensiones que z
# Introduce código aqui (una linea de código)
#---------------------------------------------------
return logistic.cdf(z)
#---------------------------------------------------
# prueba que efectivamente funciona la función implementada
# si el assert es falso regresa un error de aserción (el testunit de los pobres)
assert (np.abs(logistica(np.array([-1, 0, 1])) - np.array([ 0.26894142, 0.5, 0.73105858]))).sum() < 1e-6
z = np.linspace(-5, 5, 100)
print z
with plt.xkcd():
plt.plot( z, logistica(z))
plt.title(u'Función logística', fontsize=20)
plt.xlabel(r'$z$', fontsize=20)
plt.ylabel(r'$\frac{1}{1 + \exp(-z)}$', fontsize=26)
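# The unregularized logistic-regression cost implemented below is
# J(theta) = -(1/T) * sum_t [ y_t*log(g_t) + (1 - y_t)*log(1 - g_t) ], with g = logistica(x.dot(theta))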
def costo(theta, x, y):
Calcula el costo de una theta dada para el conjunto dee entrenamiento dado por y y x
@param theta: un ndarray de dimensión (n + 1, 1)
@param x: un ndarray de dimensión (T, n + 1) donde la primer columna son puros unos
@param y: un ndarray de dimensión (T, 1) donde cada entrada es 1.0 o 0.0
@return: un flotante con el costo
T = x.shape[0]
z = logistica(x.dot(theta))
#------------------------------------------------------------------------
# Agregua aqui tu código
return -np.sum( y*np.log(z) + (1 - y)*np.log(1 - z ))/T
#------------------------------------------------------------------------
# Otra vez el testunit del pobre (ya lo calcule yo, pero puedes hacerlo a mano para estar seguro)
theta = np.ones((2,1))
x = np.array([[1, 10],
[1, -5]])
y1 = np.array([[1],
[0]])
y2 = np.array([[0],
[1]])
y3 = np.array([[0],
[0]])
y4 = np.array([[1],
[1]])
assert abs(costo(theta, x, y1) - 0.01) < 1e-2
assert abs(costo(theta, x, y2) - 7.5) < 1e-2
assert abs(costo(theta, x, y3) - 5.5) < 1e-2
assert abs(costo(theta, x, y4) - 2.0) < 1e-2
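# The gradient implemented below is dJ/dtheta = (1/T) * x.T.dot(logistica(x.dot(theta)) - y)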
def gradiente(theta, x, y):
Calcula el gradiente del costo de la regrasión logística, para una theta, conociendo un conjunto de aprendizaje.
@param theta: un ndarray de dimensión (n + 1, 1)
@param x: un ndarray de dimensión (T, n + 1) donde la primer columna son puros unos
@param y: un ndarray de dimensión (T, 1) donde cada entrada es 1.0 o 0.0
@return: un ndarray de mismas dimensiones que theta
T = x.shape[0]
#------------------------------------------------------------------------
# Agregua aqui tu código
G = logistica(x.dot(theta))
result = (1./T)*(x.T.dot(G - y))
return result
#------------------------------------------------------------------------
# Otra vez el testunit del pobre (ya lo calcule yo, pero puedes hacerlo a mano para estar seguro)
theta = np.ones((2, 1))
x = np.array([[1, 10],
[1, -5]])
y1 = np.array([[1],
[0]])
y2 = np.array([[0],
[1]])
y3 = np.array([[0],
[0]])
y4 = np.array([[1],
[1]])
assert abs(0.00898475 - gradiente(theta, x, y1)[0]) < 1e-4
assert abs(7.45495097 - gradiente(theta, x, y2)[1]) < 1e-4
assert abs(4.95495097 - gradiente(theta, x, y3)[1]) < 1e-4
assert abs(-0.49101525 - gradiente(theta, x, y4)[0]) < 1e-4
datos = np.loadtxt('admision.txt', comments='%', delimiter=',')
x, y = datos[:,0:-1], datos[:,-1:]
x = np.c_[np.ones((x.shape[0], 1)), x]
plt.plot(x[y.ravel() == 1, 1], x[y.ravel() == 1, 2], 'sr', label='aceptados')
plt.plot(x[y.ravel() == 0, 1], x[y.ravel() == 0, 2], 'ob', label='rechazados')
plt.title(u'Ejemplo sintético para regresión logística')
plt.xlabel(u'Calificación del primer examen')
plt.ylabel(u'Calificación del segundo examen')
plt.axis([20, 100, 20, 100])
plt.legend(loc=0)
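# Batch gradient descent: repeatedly update theta <- theta - alpha * gradiente(theta, x, y)
# until max_iter (or the stopping criterion) is reached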
def descenso_rl_lotes(x, y, alpha, epsilon=1e-4, max_iter=int(1e4), costos=False):
Descenso de gradiente por lotes para resolver el problema de regresión logística con un conjunto de aprendizaje
@param x: un ndarray de dimensión (T, n + 1) donde la primer columna son puros unos
@param y: un ndarray de dimensión (T, 1) donde cada entrada es 1.0 o 0.0
@param alpha: Un flotante (típicamente pequeño) con la tasa de aprendizaje
@param epsilon: Un flotante pequeño como criterio de paro. Por default 1e-4
@param max_iter: Máximo numero de iteraciones. Por default 1e4
@param costos: Un booleano para saber si calculamos el historial de costos o no
@return: theta, costo_hist donde costo es ndarray de dimensión (n + 1, 1) y costo_hist es un
ndarray de dimensión (max_iter,) con el costo en cada iteración si costos == True, si no
regresa None
T, n = x.shape[0], x.shape[1] - 1
theta = np.zeros((n + 1, 1))
costo_hist = np.zeros(max_iter) if costos else None
if costos:
for iter in xrange(max_iter):
theta += -alpha*gradiente(theta,x, y)
if np.linalg.norm(theta) <= epsilon:
return theta, costo_hist
costo_hist[iter] = costo(theta, x, y)
else:
for iter in xrange(max_iter):
theta += -alpha*gradiente(theta,x, y)
if np.linalg.norm(theta) <= epsilon:
return theta, costo_hist
#--------------------------------------------------------------
# Agregar aqui tu código
#
# Recuerda utilizar las funciones que ya has desarrollado
#--------------------------------------------------------------
return theta, costo_hist
alpha = 1e-4
mi = 500
_, costo_hist = descenso_rl_lotes(x, y, alpha, epsilon=1e-4, max_iter=mi, costos=True)
plt.plot(np.arange(mi), costo_hist)
plt.title(r'Evolucion del costo en las primeras iteraciones con $\alpha$ = ' + str(alpha))
plt.xlabel('iteraciones')
plt.ylabel('costo')
theta_lotes, _ = descenso_rl_lotes(x, y, alpha, max_iter = int(1e6))
print theta_lotes
print costo(theta_lotes, x, y)
def predictor(theta, x):
Predice los valores de y_hat (que solo pueden ser 0 o 1), utilizando el criterio MAP.
@param theta: un ndarray de dimensión (n + 1, 1)
@param x: un ndarray de dimensión (T, n + 1) donde la primer columna son puros unos
@return: y_hat un ndarray de dimensión (T, 1) donde cada entrada es 1.0 o 0.0
#-------------------------------------------------------------------------------------
# Agrega aqui tu código sin utilizar la función logística
return np.where(x.dot(theta) > 0, 1, 0)
#--------------------------------------------------------------------------------------
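# Added sanity check (not part of the original assignment): training-set accuracy of the MAP predictor,
# using the theta obtained by batch gradient descent.
(predictor(theta_lotes, x) == y).mean()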
x1_frontera = np.array([20, 100]) #Los valores mínimo y máximo que tenemos en la gráfica de puntos
x2_frontera = -(theta_lotes[0] / theta_lotes[2]) - (theta_lotes[1] / theta_lotes[2]) * x1_frontera
plt.plot(x[y.ravel() == 1, 1], x[y.ravel() == 1, 2], 'sr', label='aceptados')
plt.plot(x[y.ravel() == 0, 1], x[y.ravel() == 0, 2], 'ob', label='rechazados')
plt.plot(x1_frontera, x2_frontera, 'm')
plt.title(u'Ejemplo sintético para regresión logística')
plt.xlabel(u'Calificación del primer examen')
plt.ylabel(u'Calificación del segundo examen')
plt.axis([20, 100, 20, 100])
plt.legend(loc=0)
Image(filename='ejemplo_logistica_1.png')
from scipy.optimize import minimize
minimize?
theta0 = 1e-2 * np.random.rand(x.shape[1])
print theta0
funcion = lambda theta, x, y: costo(theta.reshape(-1,1), x, y)
jacobiano = lambda theta, x, y: gradiente(theta.reshape(-1,1), x, y).ravel()
res = minimize(x0=theta0,
fun=funcion,
jac=jacobiano,
args = (x, y),
method='BFGS',
options= {'maxiter': 400, 'disp': True})
print res
theta0 = np.zeros(x.shape[1])
print theta0
funcion = lambda theta, x, y: costo(theta.reshape(-1,1), x, y)
jacobiano = lambda theta, x, y: gradiente(theta.reshape(-1,1), x, y).ravel()
res = minimize(x0=theta0,
fun=funcion,
args = (x, y),
method='Nelder-Mead',
options= {'maxiter': 400, 'disp': True})
print res
theta_NM = res.x.reshape(-1,1)
x1_frontera = np.array([20, 100]) #Los valores mínimo y máximo que tenemos en la gráfica de puntos
x2_frontera = -(theta_NM[0] / theta_NM[2]) - (theta_NM[1] / theta_NM[2]) * x1_frontera
plt.plot(x[y.ravel() == 1, 1], x[y.ravel() == 1, 2], 'sr', label='aceptados')
plt.plot(x[y.ravel() == 0, 1], x[y.ravel() == 0, 2], 'ob', label='rechazados')
plt.plot(x1_frontera, x2_frontera, 'm')
plt.title(u'Ejemplo sintético para regresión logística')
plt.xlabel(u'Calificación del primer examen')
plt.ylabel(u'Calificación del segundo examen')
plt.axis([20, 100, 20, 100])
plt.legend(loc=0)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 1. The logistic function, the cost function, and the gradient of the cost function
Step3: To test the function, we will plot the logistic function over the interval [-5, 5]
Step5: Once the logistic function is established, we will implement the unregularized cost function for logistic regression, which is given by $J(\theta) = -\frac{1}{T}\sum_{t=1}^{T}\left[y^{(t)}\log g_\theta(x^{(t)}) + (1 - y^{(t)})\log\left(1 - g_\theta(x^{(t)})\right)\right]$, where $g_\theta(x)$ is the logistic function evaluated at $\theta^T x$
Step7: In the same way, in order to implement the learning functions, we will implement the gradient of the cost function. The gradient of the cost function with respect to $\theta$ is (as we saw in class) $\nabla_\theta J(\theta) = \frac{1}{T}\, x^T\left(g_\theta(x) - y\right)$
Step8: 2. Gradient descent and the BFGS method for logistic regression
Step10: Looking at the data, a linear classifier could be a good solution.
Step11: To test the learning function, we will apply it to our admissions problem. First, remember that you have to do some exploration to find the best value of alpha, so use the code below to tune alpha
Step12: Once the best $\alpha$ has been found, we can compute $\theta$ (this will take quite a while); remember that the final cost should be as close to 0 as possible, so add as many iterations as necessary
Step14: It is interesting to see that gradient descent is not efficient for this type of problem, even though these are convex optimization problems.
Step15: How good is this classifier? Did we implement the method correctly?
Step16: And to give you an idea of what the output should look like, I attach a figure obtained with the code I wrote
Step17: Well, we have seen that gradient descent, while correct and easy to implement, is quite inefficient in terms of time (even for a fairly simple problem), so we are going to use the minimize function provided by SciPy, together with the BFGS algorithm (explained briefly in class). Run the cell below to review the documentation of minimize
Step18: As you can see, BFGS is the default method. The parameters that need to be provided are x0 (the initial value of $\theta$), fun (the function to minimize, i.e. the cost), jac (its gradient), args (the extra arguments x and y), method, and options
Step19: Another way to obtain the result is to use a simplex-type method that needs neither the Jacobian nor the inversion of the Hessian. Although it requires noticeably more iterations, it is sufficient for a relatively simple problem like this one (however, for linear-regression problems with many attributes, or for neural networks, it will no longer be enough).
Step20: And now let us look at the partition of the space produced by this method, which is slightly different from the one obtained with gradient descent
|
14,499 | <ASSISTANT_TASK:>
Python Code:
% matplotlib inline
from __future__ import print_function
import nibabel as nib
from nilearn.image import resample_img
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
import os.path
# The following are a progress bar, these are not strictly necessary:
from ipywidgets import FloatProgress
from IPython.display import display
percentiles = range(10)
# unthresholded z-maps from neurosynth:
zmaps = [os.path.join(os.getcwd(), 'ROIs_Mask', fname) for fname in os.listdir(os.path.join(os.getcwd(), 'ROIs_Mask'))
if 'z.nii' in fname]
# individual, binned gradient maps, in a list of lists:
gradmaps = [[os.path.join(os.getcwd(), 'data', 'Outputs', 'Bins', str(percentile), fname)
for fname in os.listdir(os.path.join(os.getcwd(), 'data', 'Outputs', 'Bins', str(percentile)))]
for percentile in percentiles]
# a brain mask file:
brainmaskfile = os.path.join(os.getcwd(), 'ROIs_Mask', 'rbgmask.nii')
def zinsidemask(zmap, mask):
# Average the z-map values over voxels that lie inside the given mask and inside the brain mask
zaverage = zmap.dataobj[
np.logical_and(np.not_equal(mask.dataobj, 0), brainmask.dataobj>0)
].mean()
return zaverage
zaverages = np.zeros([len(zmaps), len(gradmaps), len(gradmaps[0])])
# load first gradmap just for resampling
gradmap = nib.load(gradmaps[0][0])
# Load a brainmask
brainmask = nib.load(brainmaskfile)
brainmask = resample_img(brainmask, target_affine=gradmap.affine, target_shape=gradmap.shape)
# Initialise a progress bar:
progbar = FloatProgress(min=0, max=zaverages.size)
display(progbar)
# loop through the network files:
for i1, zmapfile in enumerate(zmaps):
# load the neurosynth activation file:
zmap = nib.load(zmapfile)
# make sure the images are in the same space:
zmap = resample_img(zmap,
target_affine=gradmap.affine,
target_shape=gradmap.shape)
# loop through the bins:
for i2, percentile in enumerate(percentiles):
# loop through the subjects:
for i3, gradmapfile in enumerate(gradmaps[percentile]):
gradmap = nib.load(gradmapfile) # load image
zaverages[i1, i2, i3] = zinsidemask(zmap, gradmap) # calculate av. z-score
progbar.value += 1 # update progressbar (only works in jupyter notebooks)
# np.save(os.path.join(os.getcwd(), 'data', 'average-z-scores'), zaverages)
zaverages = np.load(os.path.join(os.getcwd(), 'data', 'average-z-scores.npy'))
df_phen = pd.read_csv('data' + os.sep + 'SelectedSubjects.csv')
diagnosis = df_phen.loc[:, 'DX_GROUP']
fileids = df_phen.loc[:, 'FILE_ID']
groupvec = np.zeros(len(gradmaps[0]))
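# For each gradient-map file, extract the subject's FILE_ID from the filename and look up its
# diagnosis (DX_GROUP) in the phenotype table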
for filenum, filename in enumerate(gradmaps[0]):
fileid = os.path.split(filename)[-1][5:-22]
groupvec[filenum] = (diagnosis[fileids.str.contains(fileid)])
print(groupvec.shape)
fig = plt.figure(figsize=(15, 8))
grouplabels = ['Control group', 'Autism group']
for group in np.unique(groupvec):
ylabels = [os.path.split(fname)[-1][0:-23].replace('_', ' ') for fname in zmaps]
# remove duplicates!
includenetworks = []
seen = set()
for string in ylabels:
includenetworks.append(string not in seen)
seen.add(string)
ylabels = [string for index, string in enumerate(ylabels) if includenetworks[index]]
tmp_zaverages = zaverages[includenetworks, :, :]
tmp_zaverages = tmp_zaverages[:, :, groupvec==group]
tmp_zaverages = tmp_zaverages[np.argsort(np.argmax(tmp_zaverages.mean(axis=2), axis=1)), :, :]
# make the figure
plt.subplot(1, 2, group)
cax = plt.imshow(tmp_zaverages.mean(axis=2),
cmap='bwr', interpolation='nearest',
vmin=zaverages.mean(axis=2).min(),
vmax=zaverages.mean(axis=2).max())
ax = plt.gca()
plt.title(grouplabels[int(group-1)])
plt.xlabel('Percentile of principle gradient')
ax.set_xticks(np.arange(0, len(percentiles), 3))
ax.set_xticklabels(['100-90', '70-60', '40-30', '10-0'])
ax.set_yticks(np.arange(0, len(seen), 1))
ax.set_yticklabels(ylabels)
ax.set_yticks(np.arange(-0.5, len(seen), 1), minor=True)
ax.set_xticks(np.arange(-0.5, 10, 1), minor=True)
ax.grid(which='minor', color='w', linewidth=2)
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.01, 0.7])
fig.colorbar(cax, cax=cbar_ax, label='Average Z-Score')
#fig.colorbar(cax, cmap='bwr', orientation='horizontal')
plt.savefig('./figures/z-scores-inside-gradient-bins.png')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define the variables for this analysis.
Step2: Next define a function to take the average of an image inside a mask and return it
Step3: This next cell will step through each combination of gradient, subject and network file to calculate the average z-score inside the mask defined by the gradient percentile. This will take a long time to run!
Step4: To save time next time, we'll save the result of this to file
Step5: Extract a list recording which diagnostic group each participant belongs to.
Step6: Make a plot of the z-scores inside each parcel for each gradient, split by group!
|