Unnamed: 0 | cleaned_code | cleaned_prompt
---|---|---|
100 | <ASSISTANT_TASK:>
Python Code:
After measuring the height of your family, you decide to collect some information
on the house you're living in. The areas of the different parts of your house
are stored in separate variables for now, as shown in the script.
Instructions:
+ Create a list, "areas", that contains the areas of:
- the hallway (hall), kitchen (kit), living room (liv), bedroom (bed) and bathroom (bath).
- Preserve the order.
+ Print "areas" with the "print()" function.
# area variables (in square meters)
hall = 11.25
kit = 18.0
liv = 20.0
bed = 10.75
bath = 9.50
# Create list areas
areas = [ hall, kit, liv, bed, bath ]
# Print areas
print( areas )
Problem summary:
The code on the right is the start of a solution. For some of the areas,
the name of the corresponding room is already placed in front. Pay attention here!
"bathroom" is a string, while bath is a variable that represents the float 9.50 you specified earlier.
Instruction:
+ Create the "areas" list such that,
- List first contains the name of each room as a string,
and then its area.
- More specifically, add the strings "hallway", "kitchen" and "bedroom"
at the appropriate locations.
+ Print "areas" again; is the printout more informative this time?
# area variables (in square meters)
hall = 11.25
kit = 18.0
liv = 20.0
bed = 10.75
bath = 9.50
# Adapt list areas
areas = ["hallway", hall,"kitchen", kit, "living room", liv, "bedroom", bed, "bathroom", bath]
# Print areas
print(areas)
list1 = [ [ 1, 2, 3 ], [ 4, 5, 7 ] ]
list2 = [ 1 + 2, "a" * 5, 3 ]
# print the list to check any errors
print( list1 )
print( list2 )
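# (Added illustration, not part of the original exercise.) Both lists above are
# valid Python: the expressions inside the brackets are evaluated first, so
# list2 becomes [3, 'aaaaa', 3].
print( 1 + 2 )    # 3
print( "a" * 5 )  # aaaaa -- string repetition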
Problem summary:
As a data scientist, you'll often be dealing with a lot of data,
and it will make sense to group some of this data.
Instead of creating a flat list containing strings and floats,
representing the names and areas of the rooms in your house,
you can create a list of lists. The script on the right can already give you an idea.
Don't get confused here: "hallway" is a string,
while hall is a variable that represents the float 11.25 you specified earlier.
Instructions:
+ Finish the list of lists so that it also contains the bedroom and bathroom data.
- Make sure you enter these in order!
+ Print out "house";
- Does this way of structuring your data make more sense?
+ Print out the type of "house".
- Are you still dealing with a list?
# area variables (in square meters)
hall = 11.25
kit = 18.0
liv = 20.0
bed = 10.75
bath = 9.50
# house information as list of lists
house = [ ["hallway", hall],
["kitchen", kit],
["living room", liv],
["bedroom", bed],
["bathroom", bath] ]
# Print out house
print( house )
# Print out the type of house
print( type( house ) )
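# (Added illustration, not part of the original exercise.) house is still a
# regular list, so standard indexing applies; each element is itself a
# two-element list of [name, area].
print( house[0] )      # ['hallway', 11.25]
print( house[-1][1] )  # 9.5 -- area of the bathroom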
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: List data structure
Step3: 2. Creating list with different types - 100xp, Status
Step4: 3. Select the valid list -- 50xp, status
Step6: 4 List of Lists -- 100xp, status
|
101 | <ASSISTANT_TASK:>
Python Code:
!pip install -U tensorflow_transform
!pip install pyarrow
import pkg_resources
import importlib
importlib.reload(pkg_resources)
import os
import tempfile
import tensorflow as tf
import tensorflow_transform as tft
import tensorflow_transform.beam as tft_beam
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import schema_utils
from tfx_bsl.public import tfxio
def preprocessing_fn(inputs):
x = inputs['x']
y = inputs['y']
s = inputs['s']
x_centered = x - tft.mean(x)
y_normalized = tft.scale_to_0_1(y)
s_integerized = tft.compute_and_apply_vocabulary(s)
x_centered_times_y_normalized = x_centered * y_normalized
return {
'x_centered': x_centered,
'y_normalized': y_normalized,
'x_centered_times_y_normalized': x_centered_times_y_normalized,
's_integerized': s_integerized
}
raw_data = [
{'x': 1, 'y': 1, 's': 'hello'},
{'x': 2, 'y': 2, 's': 'world'},
{'x': 3, 'y': 3, 's': 'hello'}
]
raw_data_metadata = dataset_metadata.DatasetMetadata(
schema_utils.schema_from_feature_spec({
'y': tf.io.FixedLenFeature([], tf.float32),
'x': tf.io.FixedLenFeature([], tf.float32),
's': tf.io.FixedLenFeature([], tf.string),
}))
with tft_beam.Context(temp_dir=tempfile.mkdtemp()):
transformed_dataset, transform_fn = (
(raw_data, raw_data_metadata) |
tft_beam.AnalyzeAndTransformDataset(preprocessing_fn))
transformed_data, transformed_metadata = transformed_dataset
transformed_data
my_data = (raw_data, raw_data_metadata)
with tft_beam.Context(temp_dir=tempfile.mkdtemp()):
transformed_data, transform_fn = (
my_data | tft_beam.AnalyzeAndTransformDataset(preprocessing_fn))
with tft_beam.Context(temp_dir=tempfile.mkdtemp()):
transform_fn = my_data | tft_beam.AnalyzeDataset(preprocessing_fn)
transformed_data = (my_data, transform_fn) | tft_beam.TransformDataset()
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import schema_utils
raw_data_metadata = dataset_metadata.DatasetMetadata(
schema_utils.schema_from_feature_spec({
's': tf.io.FixedLenFeature([], tf.string),
'y': tf.io.FixedLenFeature([], tf.float32),
'x': tf.io.FixedLenFeature([], tf.float32),
}))
import pyarrow as pa
raw_data = [
pa.record_batch(
data=[
pa.array([[1], [2], [3]], pa.list_(pa.float32())),
pa.array([[1], [2], [3]], pa.list_(pa.float32())),
pa.array([['hello'], ['world'], ['hello']], pa.list_(pa.binary())),
],
names=['x', 'y', 's'])
]
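# (Added sanity check, not part of the original tutorial; it only uses the
# standard pyarrow RecordBatch accessors, and 'demo_batch' is a name
# introduced here.)
demo_batch = raw_data[0]
print(demo_batch.num_rows)     # 3 rows, matching the three instances above
print(demo_batch.schema)       # the column names and list-valued types
print(demo_batch.to_pydict())  # back to a plain dict of column -> values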
from google.protobuf import text_format
from tensorflow_metadata.proto.v0 import schema_pb2
tensor_representation = {
'x': text_format.Parse(
"""dense_tensor { column_name: "col1" shape { dim { size: 2 } } }""",
schema_pb2.TensorRepresentation())
}
!wget https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/census/adult.data
import pandas as pd
train_data_file = "adult.data"
#@title
ORDERED_CSV_COLUMNS = [
'age', 'workclass', 'fnlwgt', 'education', 'education-num',
'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'label'
]
CATEGORICAL_FEATURE_KEYS = [
'workclass',
'education',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'native-country',
]
NUMERIC_FEATURE_KEYS = [
'age',
'capital-gain',
'capital-loss',
'hours-per-week',
'education-num',
]
LABEL_KEY = 'label'
RAW_DATA_FEATURE_SPEC = dict(
[(name, tf.io.FixedLenFeature([], tf.string))
for name in CATEGORICAL_FEATURE_KEYS] +
[(name, tf.io.FixedLenFeature([], tf.float32))
for name in NUMERIC_FEATURE_KEYS] +
[(LABEL_KEY, tf.io.FixedLenFeature([], tf.string))]
)
SCHEMA = tft.tf_metadata.dataset_metadata.DatasetMetadata(
tft.tf_metadata.schema_utils.schema_from_feature_spec(RAW_DATA_FEATURE_SPEC)).schema
pd.read_csv(train_data_file, names = ORDERED_CSV_COLUMNS).head()
!pip install -U -q tfx_bsl
from tfx_bsl.public import tfxio
from tfx_bsl.coders.example_coder import RecordBatchToExamples
import apache_beam as beam
pipeline = beam.Pipeline()
csv_tfxio = tfxio.BeamRecordCsvTFXIO(
physical_format='text', column_names=ORDERED_CSV_COLUMNS, schema=SCHEMA)
raw_data = (
pipeline
| 'ReadTrainData' >> beam.io.ReadFromText(
train_data_file, coder=beam.coders.BytesCoder())
| 'FixCommasTrainData' >> beam.Map(
lambda line: line.replace(b', ', b','))
| 'DecodeTrainData' >> csv_tfxio.BeamSource())
raw_data
csv_tfxio = tfxio.CsvTFXIO(train_data_file,
telemetry_descriptors=[], #???
column_names=ORDERED_CSV_COLUMNS,
schema=SCHEMA)
p2 = beam.Pipeline()
raw_data_2 = p2 | 'TFXIORead' >> csv_tfxio.BeamSource()
NUM_OOV_BUCKETS = 1
def preprocessing_fn(inputs):
Preprocess input columns into transformed columns.
# Since we are modifying some features and leaving others unchanged, we
# start by setting `outputs` to a copy of `inputs`.
outputs = inputs.copy()
# Scale numeric columns to have range [0, 1].
for key in NUMERIC_FEATURE_KEYS:
outputs[key] = tft.scale_to_0_1(outputs[key])
# For all categorical columns except the label column, we generate a
# vocabulary but do not modify the feature. This vocabulary is instead
# used in the trainer, by means of a feature column, to convert the feature
# from a string to an integer id.
for key in CATEGORICAL_FEATURE_KEYS:
outputs[key] = tft.compute_and_apply_vocabulary(
tf.strings.strip(inputs[key]),
num_oov_buckets=NUM_OOV_BUCKETS,
vocab_filename=key)
# For the label column we provide the mapping from string to index.
with tf.init_scope():
# `init_scope` - Only initialize the table once.
initializer = tf.lookup.KeyValueTensorInitializer(
keys=['>50K', '<=50K'],
values=tf.cast(tf.range(2), tf.int64),
key_dtype=tf.string,
value_dtype=tf.int64)
table = tf.lookup.StaticHashTable(initializer, default_value=-1)
outputs[LABEL_KEY] = table.lookup(outputs[LABEL_KEY])
return outputs
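# (Added illustration; 'demo_table' is a name introduced here, not part of the
# original example.) The lookup table above maps '>50K' -> 0, '<=50K' -> 1 and
# any other string to the default_value of -1:
demo_table = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(
        keys=['>50K', '<=50K'], values=tf.constant([0, 1], dtype=tf.int64)),
    default_value=-1)
print(demo_table.lookup(tf.constant(['>50K', '<=50K', 'unknown'])))  # values [0, 1, -1]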
raw_dataset = (raw_data, csv_tfxio.TensorAdapterConfig())
working_dir = tempfile.mkdtemp()
with tft_beam.Context(temp_dir=working_dir):
transformed_dataset, transform_fn = (
raw_dataset | tft_beam.AnalyzeAndTransformDataset(
preprocessing_fn, output_record_batches=True))
output_dir = tempfile.mkdtemp()
transformed_data, _ = transformed_dataset
_ = (
transformed_data
| 'EncodeTrainData' >>
beam.FlatMapTuple(lambda batch, _: RecordBatchToExamples(batch))
| 'WriteTrainData' >> beam.io.WriteToTFRecord(
os.path.join(output_dir , 'transformed.tfrecord')))
_ = (
transform_fn
| 'WriteTransformFn' >> tft_beam.WriteTransformFn(output_dir))
result = pipeline.run().wait_until_finish()
!ls {output_dir}
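# (Added sketch, not part of the original tutorial.) Assuming the standard
# tft.TFTransformOutput API and the directory layout written by
# WriteTransformFn above, the saved artifacts can be reloaded, e.g.:
tf_transform_output = tft.TFTransformOutput(output_dir)
print(tf_transform_output.transformed_feature_spec())
print(tf_transform_output.vocabulary_file_by_name('workclass'))  # path to one vocabulary file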
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define a preprocessing function
Step2: Here, x, y and s are Tensors that represent input features. The first
Step3: The transformed_data content is shown below and contains the transformed
Step4: Both raw_data and transformed_data are datasets. The next two sections show
Step5: transform_fn is a pure function that represents an operation that is applied
Step6: The Schema proto contains the information needed to parse the
Step8: Similar to the dataset_metadata.DatasetMetadata instance that accompanies the "instance dict" format, a tfxio.TensorAdapterConfig
Step9: Means that inputs['x'] in preprocessing_fn should be a dense tf.Tensor,
Step10: There's some configuration code hidden in the cell below.
Step11: The columns of the dataset are either categorical or numeric. This dataset
Step12: Note that we had to do some additional fix-ups after the CSV lines are read
Step14: Preprocessing for this dataset is similar to the previous example,
Step15: One difference from the previous example is the label column manually specifies
Step16: In addition to the training data, transform_fn is also written out with the
Step17: Run the entire Beam pipeline with pipeline.run().wait_until_finish(). Up until this point, the Beam pipeline represents a deferred, distributed computation. It provides instructions for what will be done, but the instructions have not been executed. This final call executes the specified pipeline.
Step18: After running the pipeline the output directory contains two artifacts.
|
102 | <ASSISTANT_TASK:>
Python Code:
%pylab inline
from dtw import dtw
x = array([0, 0, 1, 1, 2, 4, 2, 1, 2, 0]).reshape(-1, 1)
y = array([1, 1, 1, 2, 2, 2, 2, 3, 2, 0]).reshape(-1, 1)
plot(x)
plot(y)
dist, cost, acc, path = dtw(x, y, dist=lambda x, y: norm(x - y, ord=1))
print('Minimum distance found:', dist)
imshow(acc.T, origin='lower', cmap=cm.gray, interpolation='nearest')
plot(path[0], path[1], 'w')
xlim((-0.5, acc.shape[0]-0.5))
ylim((-0.5, acc.shape[1]-0.5))
def my_custom_norm(x, y):
return (x * x) + (y * y)
dist, cost, acc, path = dtw(x, y, dist=my_custom_norm)
from numpy.linalg import norm
dist, cost, acc, path = dtw(x, y, dist=norm)
x = range(10)
y = [0] * 5 + x
x = array(x).reshape(-1, 1)
y = array(y).reshape(-1, 1)
dist, cost, acc, path = dtw(x, y, dist=lambda x, y: norm(x - y, ord=1))
imshow(acc.T, origin='lower', cmap=cm.gray, interpolation='nearest')
plot(path[0], path[1], 'w')
xlim((-0.5, acc.shape[0]-0.5))
ylim((-0.5, acc.shape[1]-0.5))
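# (Added illustration, not part of the original notebook.) path is a pair of
# index arrays (indices into x, indices into y); pairing them up shows the
# leading zeros of y all being matched to the first element of x:
print(list(zip(path[0], path[1])))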
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The dtw module contains a single function, also named dtw.
Step2: Let's define two sequences
Step3: Compute DTW
Step4: You can plot the accumulated cost matrix and the "shortest" warp path.
Step5: Using another distance
Step6: Obviously you can also directly use those defined in numpy.
Step7: Using subsequences
|
103 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import advancedfilters as af
def buildTestVolume(size,sigma) :
vol = np.zeros([size,size,size])
margin = size // 4
vol[margin:-margin,margin:-margin,margin:-margin]=1
vol = vol + np.random.normal(0,1,size=vol.shape)*sigma
return vol.astype('float32')
vol = buildTestVolume(40,0.5)
fig,ax = plt.subplots(1,2,figsize=[12,5])
ax[0].imshow(vol[20]);
ax[1].hist(vol.ravel(),bins=256);
fvol = vol.copy()
iss=af.ISSfilter3D()
iss.setInitialImageType(af.InitialImageOriginal)
iss.setRegularizationType(af.RegularizationTV2)
fvol = vol.copy()
# Normalize
m = vol.mean()
s = vol.std()
fvol = (fvol-m)/s
# Run the filter inplace
iss.process(fvol,tau=0.125,plambda=1,palpha=0.25,N=10)
# Rescale
fvol = s*fvol + m
error = iss.errors()
fig,ax=plt.subplots(2,3,figsize=[15,10])
ax = ax.ravel()
ax[0].imshow(vol[20],cmap='gray');
ax[1].imshow(fvol[20],cmap='gray')
ax[2].axis('off')
ax[3].imshow(vol[20]-fvol[20]); ax[3].set_title('Original-Filtered')
ax[4].plot(error); ax[4].set_title('MSE of the iterations');
ax[5].hist(vol.ravel(), bins=256,label='Original');
ax[5].hist(fvol.ravel(), bins=256,label='Filtered');
ax[5].legend();
ax[5].set_title('Histograms of the images');
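# (Added check, not part of the original notebook.) A quick numerical summary of
# the smoothing: the background slices (outside the cube) should end up with a
# clearly smaller standard deviation than in the noisy original.
print('std of background, original:', vol[:10].std())
print('std of background, filtered:', fvol[:10].std())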
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make some test data
Step2: The cube has the dimensions 40x40x40 voxels and the StdDev of the noise is 0.5.
Step3: The filter operates inplace, therefore we make a deep copy of the image to be able to compare the performance of the filter.
|
104 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'nicam16-9d-l78', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
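# (Added illustration with clearly fictional placeholder values -- not part of
# the generated template.) The document-level setters above are typically
# filled in like this:
# DOC.set_author("Jane Doe", "jane.doe@example.org")
# DOC.set_contributor("John Smith", "john.smith@example.org")
# DOC.set_publication_status(1)  # 1 = publish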
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Aod Plus Ccn
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 13.3. External Mixture
Step59: 14. Optical Radiative Properties --> Radiative Scheme
Step60: 14.2. Shortwave Bands
Step61: 14.3. Longwave Bands
Step62: 15. Optical Radiative Properties --> Cloud Interactions
Step63: 15.2. Twomey
Step64: 15.3. Twomey Minimum Ccn
Step65: 15.4. Drizzle
Step66: 15.5. Cloud Lifetime
Step67: 15.6. Longwave Bands
Step68: 16. Model
Step69: 16.2. Processes
Step70: 16.3. Coupling
Step71: 16.4. Gas Phase Precursors
Step72: 16.5. Scheme Type
Step73: 16.6. Bulk Scheme Species
|
105 | <ASSISTANT_TASK:>
Python Code:
import rebound
import reboundx
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def makesim():
sim = rebound.Simulation()
sim.G = 4*np.pi**2
sim.add(m=1.)
sim.add(m=1.e-4, a=1.)
sim.add(m=1.e-4, a=1.5)
sim.move_to_com()
return sim
sim = makesim()
rebx = reboundx.Extras(sim)
kep = rebx.load_operator("kepler")
inter = rebx.load_operator("interaction")
sim.integrator="none"
rebx.add_operator(kep, dtfraction=1., timing="pre")
rebx.add_operator(inter, dtfraction=1., timing="pre")
sim2 = makesim()
rebx2 = reboundx.Extras(sim2)
kep = rebx2.load_operator("kepler")
ias = rebx2.load_operator("ias15")
sim2.integrator="none"
rebx2.add_operator(kep, dtfraction=0.5, timing="pre")
rebx2.add_operator(ias, dtfraction=1, timing="pre")
rebx2.add_operator(kep, dtfraction=-0.5, timing="pre")
dt = 0.0037*sim.particles[1].P
sim.dt = dt
sim2.dt = dt
Nout = 1000
E0 = sim.calculate_energy()
Eerr = np.zeros(Nout)
Eerr2 = np.zeros(Nout)
times = np.linspace(0, 10, Nout)
for i, time in enumerate(times):
sim.integrate(time, exact_finish_time=0)
sim2.integrate(time, exact_finish_time=0)
E = sim.calculate_energy()
E2 = sim2.calculate_energy()
Eerr[i] = np.abs((E-E0)/E0)
Eerr2[i] = np.abs((E2-E0)/E0)
fig, ax = plt.subplots(figsize=(12,8))
ax.plot(times, Eerr, '.', label='1st-order Split')
ax.plot(times, Eerr2, '.', label='1st-order Modified IAS')
ax.set_yscale('log')
ax.set_xlabel('Time (Inner Planet Orbits)', fontsize=18)
ax.set_ylabel('Relative Energy Error', fontsize=18)
ax.legend(fontsize=18)
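# (Added comparison, not part of the original example.) The same information as
# the plot, reduced to two numbers:
print('max |dE/E|, 1st-order split :', Eerr.max())
print('max |dE/E|, modified IAS    :', Eerr2.max())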
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We now consider a first-order Kepler splitting (Wisdom-Holman map)
Step2: We now set sim.integrator to none, so that REBOUND doesn't do anything in addition to the operators that we include, and we add our two operators, specifying the fraction of sim.dt we want each operator to act over (here the full timestep of 1). In this case since we've turned off the REBOUND timestep altogether, it doesn't matter if we add the operator "pre" timestep or "post" timestep, so we could have left it out.
Step3: One can show (see Tamayo et al. 2019) that to leading order this scheme is equivalent to one where one integrates the motion exactly with IAS15, but one includes a half step backward in time before the IAS step, and a half step forward in time after, i.e.
Step4: We now integrate the orbits, track the energy errors and plot them
|
106 | <ASSISTANT_TASK:>
Python Code:
import os.path as op
import numpy as np
import mne
data_path = mne.datasets.sample.data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(fname)
raw.set_eeg_reference() # set EEG average reference
order = np.arange(raw.info['nchan'])
order[9] = 312 # We exchange the plotting order of two channels
order[312] = 9 # to show the trigger channel as the 10th channel.
raw.plot(n_channels=10, order=order, block=True)
events = mne.find_events(raw)
print('Found %s events, first five:' % len(events))
print(events[:5])
# Plot the events to get an idea of the paradigm
# Specify colors and an event_id dictionary for the legend.
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4,
'smiley': 5, 'button': 32}
color = {1: 'green', 2: 'yellow', 3: 'red', 4: 'c', 5: 'black', 32: 'blue'}
mne.viz.plot_events(events, raw.info['sfreq'], raw.first_samp, color=color,
event_id=event_id)
raw.plot(events=events, n_channels=10, order=order)
tmin, tmax = -0.2, 0.5
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4}
# Only pick MEG and EOG channels.
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True)
baseline = (None, 0.0)
reject = {'mag': 4e-12, 'eog': 200e-6}
epochs = mne.Epochs(raw, events=events, event_id=event_id, tmin=tmin,
tmax=tmax, reject=reject, picks=picks)
epochs.plot(block=True)
epochs.plot_drop_log()
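# (Added summary, not part of the original tutorial; it only assumes that
# epochs.drop_log is a per-event list of rejection reasons.)
n_bad = sum(len(reasons) > 0 for reasons in epochs.drop_log)
print('%d of %d events have a non-empty drop-log entry' % (n_bad, len(epochs.drop_log)))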
picks = mne.pick_types(epochs.info, meg=True, eog=True)
evoked_left = epochs['Auditory/Left'].average(picks=picks)
evoked_right = epochs['Auditory/Right'].average(picks=picks)
epochs_left = epochs['Left']
# ... or to select a very specific subset. This is the same as above:
evoked_left = epochs['Left/Auditory'].average(picks=picks)
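# (Added illustration, not part of the original tutorial; 'evoked_auditory' is a
# name introduced here.) Because the event names are '/'-separated tags, a
# single tag pools every matching condition, e.g. both auditory conditions:
evoked_auditory = epochs['Auditory'].average(picks=picks)
print(evoked_auditory.nave)  # number of epochs averaged across Left and Right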
evoked_left.plot()
evoked_right.plot()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In MNE, epochs refers to a collection of single trials or short segments
Step2: To create time locked epochs, we first need a set of events that contain the
Step3: Notice channel STI 014 at the bottom. It is the trigger channel that
Step4: The event list contains three columns. The first column corresponds to
Step5: In this tutorial we are only interested in triggers 1, 2, 3 and 4. These
Step6: Now we have everything we need to construct the epochs. To get some
Step7: Let's plot the epochs to see the results. The number at the top refers to the
Step8: To see why the epochs were rejected, we can plot the drop log.
Step9: To get the evoked response you can simply do epochs.average(). It
Step10: Notice we have used forward slashes ('/') to separate the factors of the
Step11: Finally, let's plot the evoked responses.
|
107 | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_sentences=source_text.split('\n')
source_id_text=[]
for s in source_sentences:
sentence_text_words=s.split()
sentence_id_text=[source_vocab_to_int[i] for i in sentence_text_words]
source_id_text.append(sentence_id_text)
target_sentences=target_text.split('\n')
target_id_text=[]
for s in target_sentences:
sentence_text_words=s.split()
sentence_id_text=[target_vocab_to_int[i] for i in sentence_text_words]
sentence_id_text.append(target_vocab_to_int['<EOS>'])
target_id_text.append(sentence_id_text)
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
for i in range(10):
print(source_int_text[i])
print(target_int_text[i])
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
input_ = tf.placeholder(tf.int32, [None, None], name='input')
targets_ = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_probability = tf.placeholder(tf.float32, name='keep_prob')
return (input_, targets_, learning_rate, keep_probability)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
'''Remove the last word id from each batch and concat the <GO> to the begining of each batch'''
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
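# (Added sanity check, not part of the project template; 'demo_targets' and the
# fake <GO> id of 7 are introduced here.) With a toy batch of two sequences,
# the last word id of each row is dropped and <GO> is prepended:
with tf.Session() as demo_sess:
    demo_targets = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int32)
    print(demo_sess.run(process_decoding_input(demo_targets, {'<GO>': 7}, 2)))
    # [[7 1 2]
    #  [7 4 5]]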
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
outputs, final_state = tf.nn.dynamic_rnn(cell, rnn_inputs,dtype=tf.float32)
return final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
# Add dropout to the cell
drop = tf.nn.dropout(train_logits, keep_prob)
return drop
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
# Inference Decoder
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings,start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
# Add dropout to the cell
drop = tf.nn.dropout(inference_logits, keep_prob)
return drop
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
# Decoder RNNs
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
# Decoder train
with tf.variable_scope("decoding") as decoding_scope:
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
# Training Decoder
training_logit=decoding_layer_train(encoder_state, cell, dec_embed_input, sequence_length, decoding_scope,output_fn, keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
# Inference Decoder
inference_logits=decoding_layer_infer(encoder_state, cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)
return training_logit, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
#Apply embedding to the input data for the encoder.
# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
#Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
enc_state=encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
#Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
dec_input=process_decoding_input(target_data, target_vocab_to_int, batch_size)
#Apply embedding to the target data for the decoder.
# Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
#Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
training_logit, inference_logits=decoding_layer(dec_embed_input, dec_embeddings, enc_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return training_logit,inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
ids=[]
for w in sentence.split():
if(w in vocab_to_int):
ids.append(vocab_to_int[w])
else:
ids.append(vocab_to_int['<UNK>'])
return ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoding Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Sentence to Sequence
Step48: Translate
|
108 | <ASSISTANT_TASK:>
Python Code:
# Import networkx and also matplotlib.pyplot for visualization
import networkx as nx
import matplotlib.pyplot as plt
%matplotlib inline
# Create an empty undirected graph
G = nx.Graph()
# Add some nodes and edges. Adding edges also adds nodes if they don't already exist.
G.add_node('Janos')
G.add_nodes_from(['Sophie', 'Otto'])
G.add_edge('Janos', 'Sophie')
G.add_edges_from([('Janos', 'Otto'), ('Sophie', 'Otto')])
nx.draw_spectral(G, with_labels=True, node_size=3000)
plt.show()
# Create an empty directed graph
G = nx.DiGraph()
# Add some nodes and edges. Adding edges aslo adds nodes if they don't already exist.
G.add_node('Janos')
G.add_nodes_from(['Sophie', 'Otto'])
G.add_edge('Janos', 'Sophie')
G.add_edges_from([('Janos', 'Otto'), ('Sophie', 'Otto')])
nx.draw_spectral(G, with_labels=True, node_size=3000)
plt.show()
import numpy as np
G.add_node(np.mean)
file = open('abc.txt', 'w')
G.add_node(file)
print(G.nodes())
!head addHealth81.txt
D = nx.read_weighted_edgelist('addHealth81.txt', create_using=nx.DiGraph())
len(D.nodes()), len(D.edges())
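# Illustrative follow-up (not in the original notebook): peek at one node of the loaded
# directed, weighted friendship network; node ids are whatever labels appear in addHealth81.txt.
node = list(D.nodes())[0]
print('out-degree:', D.out_degree(node), 'in-degree:', D.in_degree(node))
print('first few successors:', list(D.successors(node))[:5])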
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Undirected Graphs
Step2: Directed Graphs
Step3: What can nodes be?
Step4: Reading in Data
|
109 | <ASSISTANT_TASK:>
Python Code:
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
import matplotlib.pylab as plt
import numpy as np
from distutils.version import StrictVersion
import sklearn
print(sklearn.__version__)
assert StrictVersion(sklearn.__version__ ) >= StrictVersion('0.18.1')
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
assert StrictVersion(tf.__version__) >= StrictVersion('1.1.0')
import keras
print(keras.__version__)
assert StrictVersion(keras.__version__) >= StrictVersion('2.0.0')
!ls -lh data
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense
from keras import applications
# dimensions of our images.
img_width, img_height = 150, 150
train_data_dir = 'data/train'
validation_data_dir = 'data/validation'
nb_train_samples = 2000
nb_validation_samples = 800
epochs = 50
batch_size = 16
# build the VGG16 network
model = applications.VGG16(include_top=False, weights='imagenet')
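# The .npy files loaded below are assumed to have been produced beforehand, roughly as in the
# Keras "classification with little data" blog post this notebook follows. A hedged sketch
# (paths and step counts are assumptions, not verified against the original):
#
# datagen = ImageDataGenerator(rescale=1. / 255)
# generator = datagen.flow_from_directory(
#     train_data_dir, target_size=(img_width, img_height),
#     batch_size=batch_size, class_mode=None, shuffle=False)
# bottleneck_features_train = model.predict_generator(
#     generator, nb_train_samples // batch_size)
# np.save(open('bottleneck_features_train.npy', 'wb'), bottleneck_features_train)
# # ... and the same again for the validation directory.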
train_data = np.load(open('bottleneck_features_train.npy', 'rb'))
train_data.shape[1:]
# first half of data is dog (0), second half is cat (1)
train_labels = np.array(
[0] * (nb_train_samples // 2) + [1] * (nb_train_samples // 2))
# same for validation
validation_data = np.load(open('bottleneck_features_validation.npy', 'rb'))
validation_labels = np.array(
[0] * (nb_validation_samples // 2) + [1] * (nb_validation_samples // 2))
model = Sequential()
model.add(Flatten(input_shape=train_data.shape[1:]))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
model.fit(train_data, train_labels,
epochs=epochs,
batch_size=batch_size,
validation_data=(validation_data, validation_labels))
top_model_weights_path = 'bottleneck_fc_model.h5'
model.save_weights(top_model_weights_path)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This script goes along the blog post
Step2: Next step is to use those saved bottleneck feature activations and train our own, very simple fc layer
|
110 | <ASSISTANT_TASK:>
Python Code:
import os, pdb
import fitsio
import numpy as np
import matplotlib.pyplot as plt
from glob import glob
from astropy.table import vstack, Table
from astrometry.libkd.spherematch import match_radec
import seaborn as sns
sns.set(context='talk', style='ticks', font_scale=1.4)
%matplotlib inline
dr8dir = '/global/project/projectdirs/cosmo/data/legacysurvey/dr8/'
outdir = '/global/project/projectdirs/desi/users/ioannis/lslga-from-gaia'
def read_gaia_psf_sdss(clobber=False):
outfile = os.path.join(outdir, 'dr8-gaia-psf-sdss.fits')
if os.path.isfile(outfile) and not clobber:
out = Table.read(outfile)
print('Read {} galaxies from {}'.format(len(out), outfile))
else:
sdss = fitsio.read('/global/cfs/cdirs/cosmo/work/sdss/cats/specObj-dr14.fits')
out = []
for region in ('north', 'south'):
print('Working on {}'.format(region))
ext = fitsio.read(os.path.join(dr8dir, region, 'external',
'survey-dr8-{}-specObj-dr14.fits'.format(region)))
keep = np.where((ext['GAIA_PHOT_G_MEAN_MAG'] > 0) *
(ext['GAIA_PHOT_G_MEAN_MAG'] < 18) *
(ext['GAIA_ASTROMETRIC_EXCESS_NOISE'] < 10.**0.5) *
(ext['FLUX_W1'] > 0) *
(ext['FLUX_R'] > 0) *
((sdss['PRIMTARGET'] & 2**6) != 0) *
(sdss['Z'] > 0.001) * (sdss['Z'] < 1) *
(sdss['ZWARNING'] == 0))[0]
if len(keep) > 0:
out.append(Table(ext[keep]))
out = vstack(out)
out.write(outfile, overwrite=True)
return out
%time specz = read_gaia_psf_sdss(clobber=False)
#m1, m2, _ = match_radec(specz['RA'], specz['DEC'], specz['RA'], specz['DEC'], 1/3600, nearest=False)
#print(len(m1), len(specz))
#ext = fitsio.read(os.path.join(dr8dir, 'north', 'external', 'survey-dr8-north-specObj-dr14.fits'))
#m1, m2, _ = match_radec(ext['RA'], ext['DEC'], ext['RA'], ext['DEC'], 1/3600, nearest=False)
#print(len(m1), len(ext))
def read_gaia_psf(clobber=False):
outfile = os.path.join(outdir, 'dr8-gaia-psf.fits')
if os.path.isfile(outfile) and not clobber:
out = Table(fitsio.read(outfile))
print('Read {} objects from {}'.format(len(out), outfile))
else:
out = []
for region in ['north', 'south']:
print('Working on {}'.format(region))
sweepdir = os.path.join(outdir, 'sweep-{}-gaia'.format(region))
catfile = glob(os.path.join(sweepdir, 'sweep*.fits'))
for ii, ff in enumerate(catfile):
if ii % 50 == 0:
print('{} / {}'.format(ii, len(catfile)))
cc = fitsio.read(ff)
if len(cc) > 0:
out.append(Table(cc))
out = vstack(out)
print('Writing {} objects to {}'.format(len(out), outfile))
out.write(outfile, overwrite=True)
return out
%time cat = read_gaia_psf(clobber=True)
def getmags(cat):
gmag = cat['GAIA_PHOT_G_MEAN_MAG']
bp = cat['GAIA_PHOT_BP_MEAN_MAG']
rp = cat['GAIA_PHOT_RP_MEAN_MAG']
rmag = 22.5-2.5*np.log10(cat['FLUX_R'])
Wmag = 22.5-2.5*np.log10(cat['FLUX_W1'])
resid = cat['APFLUX_RESID_R'][:, 5]/10**(-0.4*(gmag-22.5))
#resid = cat['APFLUX_RESID_R'][:, 7]/cat['FLUX_R']
chi2 = cat['RCHISQ_R']
return gmag-Wmag, bp-rp, resid, chi2
gW, bprp, resid, chi2 = getmags(cat)
sgW, sbprp, sresid, schi2 = getmags(specz)
xlim, ylim = (-0.3, 4), (0, 3.5)
# north cuts
#x0, x1, x2, x3 = (0.2, 0.2, 0.55, 5.0)
#y0, y1, y2, y3 = ( _, 1.7, 1.0, 1.0)
# north/south
x0, x1, x2, x3 = (0.25, 0.25, 0.55, 5.0)
y0, y1, y2, y3 = (None, 1.7, 1.2, 1.2)  # y0 is unused; avoid relying on the notebook's last output (_)
c1 = np.polyfit([x1, x2], [y1, y2], 1)
c2 = np.polyfit([x2, x3], [y2, y3], 1)
print('Cut 1: x>{:.2f}'.format(x0))
print('Cut 2: y>{:.4f}x + {:.4f}'.format(c1[0], c1[1]))
print('Cut 3: y>{:.2f}'.format(c2[0]))
#print(c1, c2)
J = np.where((resid > x0) * (gW > np.polyval(c1, resid)) * (gW > np.polyval(c2, resid)))[0]
I = np.where((sresid > x0) * (sgW > np.polyval(c1, sresid)) * (sgW > np.polyval(c2, sresid)))[0]
print('Selected SDSS-specz galaxies: N={}/{} ({:.4f}%)'.format(len(I), len(specz), 100*len(I)/len(specz)))
print('Candidate LSLGA-Gaia galaxies: N={}/{} ({:.4f}%)'.format(len(J), len(cat), 100*len(J)/len(cat)))
#print(len(J), len(cat), len(J)/len(cat))
fig, ax = plt.subplots(figsize=(12, 10))
ax.hexbin(resid, gW, mincnt=3, cmap='Greys_r',
extent=np.hstack((xlim, ylim)))
ax.scatter(resid[J], gW[J], s=10, marker='s', alpha=0.7,
label='Candidate galaxies (N={})'.format(len(J)))
ax.scatter(sresid, sgW, s=15, marker='o', alpha=0.7,
label='SDSS-specz (N={})'.format(len(specz)))
ax.plot([x0, x0], [y1, ylim[1]], color='red', lw=2)
ax.plot([x1, x2], [y1, y2], color='red', lw=2)
ax.plot([x2, x3], [y2, y3], color='red', lw=2)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_xlabel(r'Residual Aperture $r$ Flux (7" diameter) / Gaia $G$ flux')
ax.set_ylabel(r'$G - W_{1}$ (mag)')
#_ = ax.set_title(r'$0 < G < 18$ & AEN $< 10^{0.5}$')
ax.text(0.93, 0.9, r'$0 < G < 18$ & AEN $< 10^{0.5}$',
ha='right', va='bottom', transform=ax.transAxes,
fontsize=20)
hh, ll = ax.get_legend_handles_labels()
#print(ll)
ax.legend(hh[1:], ll[1:], loc='lower right', fontsize=18)
fig.subplots_adjust(left=0.13, bottom=0.12, top=0.95)
pngfile = os.path.join(outdir, 'dr8-gaia-psf-galaxies.png')
print('Writing {}'.format(pngfile))
fig.savefig(pngfile)
K = []
for brickid in set(specz['BRICKID']):
W = np.where(brickid == specz['BRICKID'])[0]
for ww in W:
K.append(np.where((cat['BRICKID'] == brickid) * (cat['OBJID'] == specz['OBJID'][ww]))[0])
K = np.unique(np.hstack(K))
print('Matched {} unique galaxies from the parent SDSS-Gaia sample.'.format(len(K)))
Jfinal = np.unique(np.hstack((J, K)))
print('Original sample = {}, final sample = {}'.format(len(J), len(Jfinal)))
#m1, m2, _ = match_radec(cat['RA'][J], cat['DEC'][J], specz['RA'], specz['DEC'], 1/3600.0, nearest=True)
#missed = np.delete(np.arange(len(specz)), m2)
#print('Selected SDSS galaxies {}/{}, missing {}.'.format(len(m2), len(specz), len(missed)))
#k1, k2, _ = match_radec(cat['RA'], cat['DEC'], specz['RA'][missed], specz['DEC'][missed],
# 1/3600.0, nearest=True)
#print('Found {}/{} of the missed SDSS galaxies.'.format(len(k2), len(missed)))
# check
#m1, m2, _ = match_radec(cat['RA'][Jfinal], cat['DEC'][Jfinal], specz['RA'], specz['DEC'], 2/3600.0, nearest=True)
#print(len(m2), len(specz))
#missed = np.delete(np.arange(len(specz)), m2)
#specz[missed]
#assert(len(m2)==len(specz))
for ra, dec in zip(cat['RA'][Jfinal[:500]], cat['DEC'][Jfinal[:500]]):
if dec < 30:
print(ra, dec)
# We get this broadline QSO now!!
# http://legacysurvey.org/viewer-dev?ra=178.6654&dec=34.8714&layer=dr8-resid&zoom=14&lslga&masks-dr9&spectra
#match_radec(cat['RA'][Jfinal], cat['DEC'][Jfinal], 178.6654, 34.8714, 1/3600, nearest=True)
outfile = os.path.join(outdir, 'dr8-gaia-psf-galaxies.fits')
print('Writing {} galaxies to {}'.format(len(Jfinal), outfile))
cat[Jfinal].write(outfile, overwrite=True)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the SDSS training sample.
Step2: Next, assemble the full catalog of forced-PSF Gaia sources from DR8.
Step3: Make some plots and develop the selection.
Step4: Might as well add all the SDSS galaxies to the output sample, irrespective of where they lie.
Step5: Write out.
|
111 | <ASSISTANT_TASK:>
Python Code:
m = 50000 # timesteps
dt = 1/ 250.0 # update loop at 250Hz
t = np.arange(m) * dt
freq = 0.05 # Hz
amplitude = 5.0 # meter
alt_true = 405 + amplitude * np.cos(2 * np.pi * freq * t)
height_true = 6 + amplitude * np.cos(2 * np.pi * freq * t)
vel_true = - amplitude * (2 * np.pi * freq) * np.sin(2 * np.pi * freq * t)
acc_true = - amplitude * (2 * np.pi * freq)**2 * np.cos(2 * np.pi * freq * t)
plt.plot(t, height_true)
plt.plot(t, vel_true)
plt.plot(t, acc_true)
plt.legend(['elevation', 'velocity', 'acceleration'], loc='best')
plt.xlabel('time')
sonar_sampling_period = 1 / 10.0 # sonar reading at 10Hz
# Sonar noise
sigma_sonar_true = 0.05 # in meters
meas_sonar = height_true[::int(sonar_sampling_period/dt)] + sigma_sonar_true * np.random.randn(m // int(sonar_sampling_period/dt))
t_meas_sonar = t[::int(sonar_sampling_period/dt)]
plt.plot(t_meas_sonar, meas_sonar, 'or')
plt.plot(t, height_true)
plt.legend(['Sonar measure', 'Elevation (true)'])
plt.title("Sonar measurement")
plt.xlabel('time (s)')
plt.ylabel('alt (m)')
baro_sampling_period = 1 / 10.0 # baro reading at 10Hz
# Baro noise
sigma_baro_true = 2.0 # in meters
# Baro bias
baro_bias = 20
meas_baro = baro_bias + alt_true[::int(baro_sampling_period/dt)] + sigma_baro_true * np.random.randn(m // int(baro_sampling_period/dt))
t_meas_baro = t[::int(baro_sampling_period/dt)]
plt.plot(t_meas_baro, meas_baro, 'or')
plt.plot(t, alt_true)
plt.title("Baro measurement")
plt.xlabel('time (s)')
plt.ylabel('alt (m)')
gps_sampling_period = 1 / 1.0 # gps reading at 1Hz
# GPS noise
sigma_gps_true = 5.0 # in meters
meas_gps = alt_true[::int(gps_sampling_period/dt)] + sigma_gps_true * np.random.randn(m // int(gps_sampling_period/dt))
t_meas_gps = t[::int(gps_sampling_period/dt)]
plt.plot(t_meas_gps, meas_gps, 'or')
plt.plot(t, alt_true)
plt.title("GPS measurement")
plt.xlabel('time (s)')
plt.ylabel('alt (m)')
gpsvel_sampling_period = 1 / 1.0 # gps reading at 1Hz
# GPS noise
sigma_gpsvel_true = 10.0 # in meters/s
meas_gpsvel = vel_true[::int(gpsvel_sampling_period/dt)] + sigma_gpsvel_true * np.random.randn(m // int(gpsvel_sampling_period/dt))
t_meas_gps = t[::int(gpsvel_sampling_period/dt)]
plt.plot(t_meas_gps, meas_gpsvel, 'or')
plt.plot(t, vel_true)
plt.title("GPS velocity measurement")
plt.xlabel('time (s)')
plt.ylabel('vel (m/s)')
sigma_acc_true = 0.2 # in m.s^-2
acc_bias = 1.5
meas_acc = acc_true + sigma_acc_true * np.random.randn(m) + acc_bias
plt.plot(t, meas_acc, '.')
plt.plot(t, acc_true)
plt.title("Accelerometer measurement")
plt.xlabel('time (s)')
plt.ylabel('acc ($m.s^{-2}$)')
x = np.matrix([0.0, 0.0, 0.0, 0.0, 0.0]).T
print(x, x.shape)
P = np.diag([100.0, 100.0, 100.0, 100.0, 100.0])
print(P, P.shape)
dt = 1 / 250.0 # Time step between filter steps (update loop at 250Hz)
A = np.matrix([[1.0, 0.0, dt, 0.5*dt**2, 0.0],
[0.0, 1.0, dt, 0.5*dt**2, 0.0],
[0.0, 0.0, 1.0, dt, 0.0],
[0.0, 0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 1.0]])
print(A, A.shape)
B = np.matrix([[0.5*dt**2],
[0.5*dt**2],
[dt ],
[0.0],
[0.0]])
print(B, B.shape)
H_sonar = np.matrix([[0.0, 1.0, 0.0, 0.0, 0.0]])
print(H_sonar, H_sonar.shape)
H_baro = np.matrix([[1.0, 0.0, 0.0, 0.0, 1.0]])
print(H_baro, H_baro.shape)
H_gps = np.matrix([[1.0, 0.0, 0.0, 0.0, 0.0]])
print(H_gps, H_gps.shape)
H_gpsvel = np.matrix([[0.0, 0.0, 1.0, 0.0, 0.0]])
print(H_gpsvel, H_gpsvel.shape)
# sonar
sigma_sonar = sigma_sonar_true # sonar noise
R_sonar = np.matrix([[sigma_sonar**2]])
print(R_sonar, R_sonar.shape)
# baro
sigma_baro = sigma_baro_true # sonar noise
R_baro = np.matrix([[sigma_baro**2]])
print(R_baro, R_baro.shape)
# gps
sigma_gps = sigma_gps_true # sonar noise
R_gps = np.matrix([[sigma_gps**2]])
print(R_gps, R_gps.shape)
# gpsvel
sigma_gpsvel = sigma_gpsvel_true # sonar noise
R_gpsvel = np.matrix([[sigma_gpsvel**2]])
print(R_gpsvel, R_gpsvel.shape)
from sympy import Symbol, Matrix, latex
from sympy.interactive import printing
import sympy
printing.init_printing()
dts = Symbol('\Delta t')
s1 = Symbol('\sigma_1') # drift of accelerometer bias
s2 = Symbol('\sigma_2') # drift of barometer bias
Q = sympy.zeros(5)
Qs = Matrix([[0.5*dts**2], [0.5*dts**2], [dts], [1.0]])
Q[:4, :4] = Qs*Qs.T*s1**2
Q[4, 4] = s2**2
Q
sigma_acc_drift = 0.0001
sigma_baro_drift = 0.0001
G = np.matrix([[0.5*dt**2],
[0.5*dt**2],
[dt],
[1.0]])
Q = np.zeros([5, 5])
Q[:4, :4] = G*G.T*sigma_acc_drift**2
Q[4, 4] = sigma_baro_drift**2
print(Q, Q.shape)
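# Editorial note: Q injects process noise through the same kinematic vector
# G = [0.5*dt^2, 0.5*dt^2, dt, 1]^T scaled by sigma_acc_drift, which lets the accelerometer
# bias random-walk slowly; the separate last diagonal term does the same for the baro bias.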
I = np.eye(5)
print(I, I.shape)
u = meas_acc
print(u, u.shape)
# Re init state
# State
x[0] = 0.0
x[1] = 0.0
x[2] = 0.0
x[3] = 0.0
x[4] = 0.0
# Estimate covariance
P[0,0] = 1000.0
P[1,1] = 100.0
P[2,2] = 100.0
P[3,3] = 100.0
P[4,4] = 100.0
# Preallocation for Plotting
# estimate
zt = []
ht = []
dzt= []
zetat=[]
etat = []
# covariance
Pz = []
Ph = []
Pdz= []
Pzeta=[]
Peta=[]
# sonar off/on
sonar_off = 10000
sonar_on = 40000
for filterstep in range(m):
# ========================
# Time Update (Prediction)
# ========================
# Project the state ahead
x = A*x + B*u[filterstep]
# Project the error covariance ahead
P = A*P*A.T + Q
# ===============================
# Measurement Update (Correction)
# ===============================
# Sonar (only at the beginning, ex take off)
if filterstep%25 == 0 and (filterstep <sonar_off or filterstep>sonar_on):
# Compute the Kalman Gain
S_sonar = H_sonar*P*H_sonar.T + R_sonar
K_sonar = (P*H_sonar.T) * np.linalg.pinv(S_sonar)
# Update the estimate via z
Z_sonar = meas_sonar[filterstep//25]
y_sonar = Z_sonar - (H_sonar*x) # Innovation or Residual
x = x + (K_sonar*y_sonar)
# Update the error covariance
P = (I - (K_sonar*H_sonar))*P
# Baro
if filterstep%25 == 0:
# Compute the Kalman Gain
S_baro = H_baro*P*H_baro.T + R_baro
K_baro = (P*H_baro.T) * np.linalg.pinv(S_baro)
# Update the estimate via z
Z_baro = meas_baro[filterstep//25]
y_baro = Z_baro - (H_baro*x) # Innovation or Residual
x = x + (K_baro*y_baro)
# Update the error covariance
P = (I - (K_baro*H_baro))*P
# GPS
if filterstep%250 == 0:
# Compute the Kalman Gain
S_gps = H_gps*P*H_gps.T + R_gps
K_gps = (P*H_gps.T) * np.linalg.pinv(S_gps)
# Update the estimate via z
Z_gps = meas_gps[filterstep//250]
y_gps = Z_gps - (H_gps*x) # Innovation or Residual
x = x + (K_gps*y_gps)
# Update the error covariance
P = (I - (K_gps*H_gps))*P
# GPSvel
if filterstep%250 == 0:
# Compute the Kalman Gain
S_gpsvel = H_gpsvel*P*H_gpsvel.T + R_gpsvel
K_gpsvel = (P*H_gpsvel.T) * np.linalg.pinv(S_gpsvel)
# Update the estimate via z
Z_gpsvel = meas_gpsvel[filterstep//250]
y_gpsvel = Z_gpsvel - (H_gpsvel*x) # Innovation or Residual
x = x + (K_gpsvel*y_gpsvel)
# Update the error covariance
P = (I - (K_gpsvel*H_gpsvel))*P
# ========================
# Save states for Plotting
# ========================
zt.append(float(x[0]))
ht.append(float(x[1]))
dzt.append(float(x[2]))
zetat.append(float(x[3]))
etat.append(float(x[4]))
Pz.append(float(P[0,0]))
Ph.append(float(P[1,1]))
Pdz.append(float(P[2,2]))
Pzeta.append(float(P[3,3]))
Peta.append(float(P[4,4]))
plt.figure(figsize=(17,15))
plt.subplot(321)
plt.plot(t, zt, color='b')
plt.fill_between(t, np.array(zt) - 10* np.array(Pz), np.array(zt) + 10*np.array(Pz), alpha=0.2, color='b')
plt.plot(t, alt_true, 'g')
plt.plot(t_meas_baro, meas_baro, '.r')
plt.plot(t_meas_gps, meas_gps, 'ok')
plt.plot([t[sonar_off], t[sonar_off]], [-1000, 1000], '--k')
plt.plot([t[sonar_on], t[sonar_on]], [-1000, 1000], '--k')
#plt.ylim([1.7, 2.3])
plt.ylim([405 - 10 * amplitude, 405 + 5 * amplitude])
plt.legend(['estimate', 'true altitude', 'baro reading', 'gps reading', 'sonar switched off/on'], loc='lower right')
plt.title('Altitude')
plt.subplot(322)
plt.plot(t, ht, color='b')
plt.fill_between(t, np.array(ht) - 10* np.array(Ph), np.array(ht) + 10*np.array(Ph), alpha=0.2, color='b')
plt.plot(t, height_true, 'g')
plt.plot(t_meas_sonar, meas_sonar, '.r')
plt.plot([t[sonar_off], t[sonar_off]], [-1000, 1000], '--k')
plt.plot([t[sonar_on], t[sonar_on]], [-1000, 1000], '--k')
#plt.ylim([1.7, 2.3])
plt.ylim([5 - 2 * amplitude, 5 + 1.5 * amplitude])
#plt.ylim([5 - 1 * amplitude, 5 + 1 * amplitude])
plt.legend(['estimate', 'true height above ground', 'sonar reading', 'sonar switched off/on'], loc='lower right')
plt.title('Height')
plt.subplot(323)
plt.plot(t, dzt, color='b')
plt.fill_between(t, np.array(dzt) - 10* np.array(Pdz), np.array(dzt) + 10*np.array(Pdz), alpha=0.2, color='b')
plt.plot(t, vel_true, 'g')
plt.plot(t_meas_gps, meas_gpsvel, 'ok')
plt.plot([t[sonar_off], t[sonar_off]], [-1000, 1000], '--k')
plt.plot([t[sonar_on], t[sonar_on]], [-1000, 1000], '--k')
#plt.ylim([1.7, 2.3])
plt.ylim([0 - 10.0 * amplitude, + 10.0 * amplitude])
plt.legend(['estimate', 'true velocity', 'gps_vel reading', 'sonar switched off/on'], loc='lower right')
plt.title('Velocity')
plt.subplot(324)
plt.plot(t, zetat, color='b')
plt.fill_between(t, np.array(zetat) - 10* np.array(Pzeta), np.array(zetat) + 10*np.array(Pzeta), alpha=0.2, color='b')
plt.plot(t, -acc_bias * np.ones_like(t), 'g')
plt.plot([t[sonar_off], t[sonar_off]], [-1000, 1000], '--k')
plt.plot([t[sonar_on], t[sonar_on]], [-1000, 1000], '--k')
plt.ylim([-acc_bias-0.2, -acc_bias+0.2])
# plt.ylim([0 - 2.0 * amplitude, + 2.0 * amplitude])
plt.legend(['estimate', 'true bias', 'sonar switched off/on'])
plt.title('Acc bias')
plt.subplot(325)
plt.plot(t, etat, color='b')
plt.fill_between(t, np.array(etat) - 10* np.array(Peta), np.array(etat) + 10*np.array(Peta), alpha=0.2, color='b')
plt.plot(t, baro_bias * np.ones_like(t), 'g')
plt.plot([t[sonar_off], t[sonar_off]], [-1000, 1000], '--k')
plt.plot([t[sonar_on], t[sonar_on]], [-1000, 1000], '--k')
plt.ylim([baro_bias-10.0, baro_bias+10.0])
# plt.ylim([0 - 2.0 * amplitude, + 2.0 * amplitude])
plt.legend(['estimate', 'true bias', 'sonar switched off/on'])
plt.title('Baro bias')
plt.subplot(326)
plt.plot(t, Pz)
plt.plot(t, Ph)
plt.plot(t, Pdz)
plt.ylim([0, 1.0])
plt.plot([t[sonar_off], t[sonar_off]], [-1000, 1000], '--k')
plt.plot([t[sonar_on], t[sonar_on]], [-1000, 1000], '--k')
plt.legend(['Altitude', 'Height', 'Velocity', 'sonar switched off/on'])
plt.title('Incertitudes')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: II) MEASUREMENTS
Step2: Baro
Step3: GPS
Step4: GPS velocity
Step5: Acceleration
Step6: III) PROBLEM FORMULATION
Step7: Initial uncertainty $P_0$
Step8: Dynamic matrix $A$
Step9: Disturbance Control Matrix $B$
Step10: Measurement Matrix $H$
Step11: Measurement noise covariance $R$
Step12: Process noise covariance $Q$
Step13: Identity Matrix
Step14: Input
Step15: V) TEST
Step16: VI) PLOT
|
112 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
import pprint
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
def get_mixer_data(data_size=100_000, random_seed=42):
# We need to fix the random seed
# to make colab runs repeatable.
rng = np.random.RandomState(random_seed)
country = rng.randint(200, size=[data_size, 1]) / 200.
bananas = rng.randint(24, size=[data_size, 1]) / 24.
cookbooks = rng.randint(6, size=[data_size, 1]) / 6.
x = np.concatenate([country, bananas, cookbooks], axis=1)
# # Create 1st-order terms.
y = 0.1 * country + 0.4 * bananas + 0.7 * cookbooks
# Create 2nd-order cross terms.
y += 0.1 * country * bananas + 3.1 * bananas * cookbooks + (
0.1 * cookbooks * cookbooks)
return x, y
x, y = get_mixer_data()
num_train = 90000
train_x = x[:num_train]
train_y = y[:num_train]
eval_x = x[num_train:]
eval_y = y[num_train:]
class Model(tfrs.Model):
def __init__(self, model):
super().__init__()
self._model = model
self._logit_layer = tf.keras.layers.Dense(1)
self.task = tfrs.tasks.Ranking(
loss=tf.keras.losses.MeanSquaredError(),
metrics=[
tf.keras.metrics.RootMeanSquaredError("RMSE")
]
)
def call(self, x):
x = self._model(x)
return self._logit_layer(x)
def compute_loss(self, features, training=False):
x, labels = features
scores = self(x)
return self.task(
labels=labels,
predictions=scores,
)
crossnet = Model(tfrs.layers.dcn.Cross())
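# Editorial note: up to implementation details, tfrs.layers.dcn.Cross computes
# y = x0 * (W @ x + b) + x, so a single layer applied to x = x0 already contains all
# pairwise products x_i * x_j -- which is why it recovers the 2nd-order ground truth above
# with very few parameters. The low-rank variant used later factorises W ~= U @ V.T
# (rank = projection_dim) to cut parameters. Tiny numpy illustration (demo names only):
x0_demo = np.array([0.1, 0.2, 0.3])
W_demo, b_demo = np.ones((3, 3)), np.zeros(3)
print(x0_demo * (W_demo @ x0_demo + b_demo) + x0_demo)  # each entry mixes x_i * x_j terms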
deepnet = Model(
tf.keras.Sequential([
tf.keras.layers.Dense(512, activation="relu"),
tf.keras.layers.Dense(256, activation="relu"),
tf.keras.layers.Dense(128, activation="relu")
])
)
train_data = tf.data.Dataset.from_tensor_slices((train_x, train_y)).batch(1000)
eval_data = tf.data.Dataset.from_tensor_slices((eval_x, eval_y)).batch(1000)
epochs = 100
learning_rate = 0.4
crossnet.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate))
crossnet.fit(train_data, epochs=epochs, verbose=False)
deepnet.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate))
deepnet.fit(train_data, epochs=epochs, verbose=False)
crossnet_result = crossnet.evaluate(eval_data, return_dict=True, verbose=False)
print(f"CrossNet(1 layer) RMSE is {crossnet_result['RMSE']:.4f} "
f"using {crossnet.count_params()} parameters.")
deepnet_result = deepnet.evaluate(eval_data, return_dict=True, verbose=False)
print(f"DeepNet(large) RMSE is {deepnet_result['RMSE']:.4f} "
f"using {deepnet.count_params()} parameters.")
mat = crossnet._model._dense.kernel
features = ["country", "purchased_bananas", "purchased_cookbooks"]
plt.figure(figsize=(9,9))
im = plt.matshow(np.abs(mat.numpy()), cmap=plt.cm.Blues)
ax = plt.gca()
divider = make_axes_locatable(plt.gca())
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(im, cax=cax)
cax.tick_params(labelsize=10)
_ = ax.set_xticklabels([''] + features, rotation=45, fontsize=10)
_ = ax.set_yticklabels([''] + features, fontsize=10)
ratings = tfds.load("movie_lens/100k-ratings", split="train")
ratings = ratings.map(lambda x: {
"movie_id": x["movie_id"],
"user_id": x["user_id"],
"user_rating": x["user_rating"],
"user_gender": int(x["user_gender"]),
"user_zip_code": x["user_zip_code"],
"user_occupation_text": x["user_occupation_text"],
"bucketized_user_age": int(x["bucketized_user_age"]),
})
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
feature_names = ["movie_id", "user_id", "user_gender", "user_zip_code",
"user_occupation_text", "bucketized_user_age"]
vocabularies = {}
for feature_name in feature_names:
vocab = ratings.batch(1_000_000).map(lambda x: x[feature_name])
vocabularies[feature_name] = np.unique(np.concatenate(list(vocab)))
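# Editorial note: each entry of `vocabularies` is now the sorted array of unique values for
# that feature; the embedding tables below are sized len(vocabulary) + 1 so the lookup layers
# keep a slot for out-of-vocabulary values.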
class DCN(tfrs.Model):
def __init__(self, use_cross_layer, deep_layer_sizes, projection_dim=None):
super().__init__()
self.embedding_dimension = 32
str_features = ["movie_id", "user_id", "user_zip_code",
"user_occupation_text"]
int_features = ["user_gender", "bucketized_user_age"]
self._all_features = str_features + int_features
self._embeddings = {}
# Compute embeddings for string features.
for feature_name in str_features:
vocabulary = vocabularies[feature_name]
self._embeddings[feature_name] = tf.keras.Sequential(
[tf.keras.layers.StringLookup(
vocabulary=vocabulary, mask_token=None),
tf.keras.layers.Embedding(len(vocabulary) + 1,
self.embedding_dimension)
])
# Compute embeddings for int features.
for feature_name in int_features:
vocabulary = vocabularies[feature_name]
self._embeddings[feature_name] = tf.keras.Sequential(
[tf.keras.layers.IntegerLookup(
vocabulary=vocabulary, mask_value=None),
tf.keras.layers.Embedding(len(vocabulary) + 1,
self.embedding_dimension)
])
if use_cross_layer:
self._cross_layer = tfrs.layers.dcn.Cross(
projection_dim=projection_dim,
kernel_initializer="glorot_uniform")
else:
self._cross_layer = None
self._deep_layers = [tf.keras.layers.Dense(layer_size, activation="relu")
for layer_size in deep_layer_sizes]
self._logit_layer = tf.keras.layers.Dense(1)
self.task = tfrs.tasks.Ranking(
loss=tf.keras.losses.MeanSquaredError(),
metrics=[tf.keras.metrics.RootMeanSquaredError("RMSE")]
)
def call(self, features):
# Concatenate embeddings
embeddings = []
for feature_name in self._all_features:
embedding_fn = self._embeddings[feature_name]
embeddings.append(embedding_fn(features[feature_name]))
x = tf.concat(embeddings, axis=1)
# Build Cross Network
if self._cross_layer is not None:
x = self._cross_layer(x)
# Build Deep Network
for deep_layer in self._deep_layers:
x = deep_layer(x)
return self._logit_layer(x)
def compute_loss(self, features, training=False):
labels = features.pop("user_rating")
scores = self(features)
return self.task(
labels=labels,
predictions=scores,
)
cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()
def run_models(use_cross_layer, deep_layer_sizes, projection_dim=None, num_runs=5):
models = []
rmses = []
for i in range(num_runs):
model = DCN(use_cross_layer=use_cross_layer,
deep_layer_sizes=deep_layer_sizes,
projection_dim=projection_dim)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate))
models.append(model)
model.fit(cached_train, epochs=epochs, verbose=False)
metrics = model.evaluate(cached_test, return_dict=True)
rmses.append(metrics["RMSE"])
mean, stdv = np.average(rmses), np.std(rmses)
return {"model": models, "mean": mean, "stdv": stdv}
epochs = 8
learning_rate = 0.01
dcn_result = run_models(use_cross_layer=True,
deep_layer_sizes=[192, 192])
dcn_lr_result = run_models(use_cross_layer=True,
projection_dim=20,
deep_layer_sizes=[192, 192])
dnn_result = run_models(use_cross_layer=False,
deep_layer_sizes=[192, 192, 192])
print("DCN RMSE mean: {:.4f}, stdv: {:.4f}".format(
dcn_result["mean"], dcn_result["stdv"]))
print("DCN (low-rank) RMSE mean: {:.4f}, stdv: {:.4f}".format(
dcn_lr_result["mean"], dcn_lr_result["stdv"]))
print("DNN RMSE mean: {:.4f}, stdv: {:.4f}".format(
dnn_result["mean"], dnn_result["stdv"]))
model = dcn_result["model"][0]
mat = model._cross_layer._dense.kernel
features = model._all_features
block_norm = np.ones([len(features), len(features)])
dim = model.embedding_dimension
# Compute the norms of the blocks.
for i in range(len(features)):
for j in range(len(features)):
block = mat[i * dim:(i + 1) * dim,
j * dim:(j + 1) * dim]
block_norm[i,j] = np.linalg.norm(block, ord="fro")
plt.figure(figsize=(9,9))
im = plt.matshow(block_norm, cmap=plt.cm.Blues)
ax = plt.gca()
divider = make_axes_locatable(plt.gca())
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(im, cax=cax)
cax.tick_params(labelsize=10)
_ = ax.set_xticklabels([""] + features, rotation=45, ha="left", fontsize=10)
_ = ax.set_yticklabels([""] + features, fontsize=10)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Deep & Cross Network (DCN)
Step2: Toy Example
Step3: Let's generate the data that follows the distribution, and split the data into 90% for training and 10% for testing.
Step4: Model construction
Step5: Then, we specify the cross network (with 1 cross layer of size 3) and the ReLU-based DNN (with layer sizes [512, 256, 128])
Step6: Model training
Step7: Then, we define the number of epochs as well as the learning rate.
Step8: Alright, everything is ready now and let's compile and train the models. You could set verbose=True if you want to see how the model progresses.
Step9: Model evaluation
Step10: We see that the cross network achieved an RMSE orders of magnitude lower than the ReLU-based DNN, with orders of magnitude fewer parameters. This suggests the efficiency of a cross network in learning feature crosses.
Step11: Darker colours represent stronger learned interactions - in this case, it's clear that the model learned that purchasing bananas and cookbooks together is important.
Step12: Next, we randomly split the data into 80% for training and 20% for testing.
Step13: Then, we create vocabulary for each feature.
Step14: Model construction
Step15: Model training
Step16: Let's define a function that runs a model multiple times and returns the model's RMSE mean and standard deviation out of multiple runs.
Step17: We set some hyper-parameters for the models. Note that these hyper-parameters are set globally for all the models for demonstration purposes. If you want to obtain the best performance for each model, or conduct a fair comparison among models, we'd suggest you fine-tune the hyper-parameters. Remember that the model architecture and optimization schemes are intertwined.
Step18: DCN (stacked). We first train a DCN model with a stacked structure, that is, the inputs are fed to a cross network followed by a deep network.
Step19: Low-rank DCN. To reduce the training and serving cost, we leverage low-rank techniques to approximate the DCN weight matrices. The rank is passed in through argument projection_dim; a smaller projection_dim results in a lower cost. Note that projection_dim needs to be smaller than (input size)/2 to reduce the cost. In practice, we've observed using low-rank DCN with rank (input size)/4 consistently preserved the accuracy of a full-rank DCN.
Step20: DNN. We train a same-sized DNN model as a reference.
Step21: We evaluate the model on test data and report the mean and standard deviation out of 5 runs.
Step22: We see that DCN achieved better performance than a same-sized DNN with ReLU layers. Moreover, the low-rank DCN was able to reduce parameters while maintaining the accuracy.
|
113 | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
vocab = set(text)
vocab_to_int = {c:i for i, c in enumerate(vocab)}
int_to_vocab = {i:c for i, c in enumerate(vocab)}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
dict_punctuation = {
'.':'||Period||',
',':'||Comma||',
'"':'||Quotation_Mark||',
';':'||Semicolon||',
'!':'||Exclamation_Mark||',
'?':'||Question_Mark||',
'(':'||Left_Parenthesis||',
')':'||Right_Parenthesis||',
'--':'||Dash||',
'\n':'||Return||'
}
return dict_punctuation
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name = 'input')
targets = tf.placeholder(tf.int32, [None, None], name = 'targets')
learning_rate = tf.placeholder(tf.float32, name = 'learning_rate')
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
lstm_layers = 1
keep_prob = 1
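# Editorial note: with keep_prob = 1 the DropoutWrapper below is effectively a no-op and
# lstm_layers = 1 gives a single-layer LSTM; lower keep_prob / raise lstm_layers to
# regularise or deepen the RNN.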
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
cell_state = cell.zero_state(batch_size, tf.float32)
cell_state = tf.identity(cell_state, name = 'initial_state')
return cell, cell_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name = 'final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, rnn_size)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
batch_count = len(int_text)//(batch_size * seq_length)
counter = (batch_size * seq_length)
final = []
row = []
for i in range(batch_count):
x = int_text[i * counter : (i + 1) * counter]
x = np.reshape(x, (batch_size, seq_length))
y = int_text[(i * counter) + 1 : ((i + 1) * counter) + 1]
y = np.reshape(y, (batch_size, seq_length))
row = np.array([x,y])
final.append(row)
return np.array(final)
# test = get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3)
# print(test)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
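# Illustrative sanity check (not one of the graded cells): 15 ids with batch_size=2 and
# seq_length=3 fit two full batches, so the result has shape (2, 2, 2, 3), i.e.
# (n_batches, input/target, batch_size, seq_length).
example_batches = get_batches(list(range(1, 16)), 2, 3)
print(example_batches.shape)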
# Number of Epochs
num_epochs = 10
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 100
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.1
# Show stats for every n number of batches
show_every_n_batches = 64
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
return loaded_graph.get_tensor_by_name('input:0'), loaded_graph.get_tensor_by_name('initial_state:0'), loaded_graph.get_tensor_by_name('final_state:0'), loaded_graph.get_tensor_by_name('probs:0')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
return int_to_vocab.get(np.argmax(probabilities))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
114 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
np.argmax(np.bincount([0, 0, 1],
weights=[0.2, 0.2, 0.6]))
ex = np.array([[0.9, 0.1],
[0.8, 0.2],
[0.4, 0.6]])
p = np.average(ex,
axis=0,
weights=[0.2, 0.2, 0.6])
p
np.argmax(p)
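# Editorial note: the weighted average above gives p = [0.2*0.9 + 0.2*0.8 + 0.6*0.4,
# 0.2*0.1 + 0.2*0.2 + 0.6*0.6] = [0.58, 0.42], so the soft (probability-based) vote picks
# class 0 even though the weighted hard vote over the labels [0, 0, 1] picked class 1.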
from sklearn.base import BaseEstimator
from sklearn.base import ClassifierMixin
from sklearn.preprocessing import LabelEncoder
from sklearn.externals import six
from sklearn.base import clone
from sklearn.pipeline import _name_estimators
import numpy as np
import operator
class MajorityVoteClassifier(BaseEstimator,
ClassifierMixin):
A majority vote ensemble classifier
Parameters
----------
classifiers : array-like, shape = [n_classifiers]
Different classifiers for the ensemble
vote : str, {'classlabel', 'probability'} (default='classlabel')
If 'classlabel' the prediction is based on the argmax of
class labels. Else if 'probability', the argmax of
the sum of probabilities is used to predict the class label
(recommended for calibrated classifiers).
weights : array-like, shape = [n_classifiers], optional (default=None)
If a list of `int` or `float` values are provided, the classifiers
are weighted by importance; Uses uniform weights if `weights=None`.
def __init__(self, classifiers, vote='classlabel', weights=None):
self.classifiers = classifiers
self.named_classifiers = {key: value for key, value
in _name_estimators(classifiers)}
self.vote = vote
self.weights = weights
def fit(self, X, y):
Fit classifiers.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Matrix of training samples.
y : array-like, shape = [n_samples]
Vector of target class labels.
Returns
-------
self : object
if self.vote not in ('probability', 'classlabel'):
raise ValueError("vote must be 'probability' or 'classlabel'"
"; got (vote=%r)"
% self.vote)
if self.weights and len(self.weights) != len(self.classifiers):
raise ValueError('Number of classifiers and weights must be equal'
'; got %d weights, %d classifiers'
% (len(self.weights), len(self.classifiers)))
# Use LabelEncoder to ensure class labels start with 0, which
# is important for np.argmax call in self.predict
self.lablenc_ = LabelEncoder()
self.lablenc_.fit(y)
self.classes_ = self.lablenc_.classes_
self.classifiers_ = []
for clf in self.classifiers:
fitted_clf = clone(clf).fit(X, self.lablenc_.transform(y))
self.classifiers_.append(fitted_clf)
return self
def predict(self, X):
Predict class labels for X.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Matrix of training samples.
Returns
----------
maj_vote : array-like, shape = [n_samples]
Predicted class labels.
if self.vote == 'probability':
maj_vote = np.argmax(self.predict_proba(X), axis=1)
else: # 'classlabel' vote
# Collect results from clf.predict calls
predictions = np.asarray([clf.predict(X)
for clf in self.classifiers_]).T
maj_vote = np.apply_along_axis(
lambda x:
np.argmax(np.bincount(x,
weights=self.weights)),
axis=1,
arr=predictions)
maj_vote = self.lablenc_.inverse_transform(maj_vote)
return maj_vote
def predict_proba(self, X):
Predict class probabilities for X.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
Returns
----------
avg_proba : array-like, shape = [n_samples, n_classes]
Weighted average probability for each class per sample.
probas = np.asarray([clf.predict_proba(X)
for clf in self.classifiers_])
avg_proba = np.average(probas, axis=0, weights=self.weights)
return avg_proba
def get_params(self, deep=True):
Get classifier parameter names for GridSearch
if not deep:
return super(MajorityVoteClassifier, self).get_params(deep=False)
else:
out = self.named_classifiers.copy()
for name, step in six.iteritems(self.named_classifiers):
for key, value in six.iteritems(step.get_params(deep=True)):
out['%s__%s' % (name, key)] = value
return out
import os
from sklearn.tree import DecisionTreeClassifier, export_graphviz
import pandas as pd
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn import cross_validation, metrics
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from time import time
from sklearn.pipeline import Pipeline
from sklearn.metrics import roc_auc_score , classification_report
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.metrics import precision_score, recall_score, accuracy_score, classification_report
cols = ['age',
'sex',
'cp',
'trestbps',
'chol',
'fbs',
'restecg',
'thalach',
'exang',
'oldpeak',
'slope',
'ca',
'thal',
'num']
import pandas as pd
# read .csv from provided dataset
csv_filename="processed.cleveland.data"
# Seperator = ' ' i.e a single space
df=pd.read_csv(csv_filename,sep=',',names=cols)
df.head()
count0 = 0
for z in df['num']:
if z == 0:
count0 = count0 + 1
print (count0)
for v in df['num']:
if v != 0 :
df['num'].replace(v,1,inplace=True)
count0 = 0
for z in df['num']:
if z == 0:
count0 = count0 + 1
print (count0)
count0 = 0
for z in df['num']:
if z != 0:
count0 = count0 + 1
print (count0)
df.head()
df.replace("?",np.NaN,inplace=True)
df.dropna(axis=0, inplace=True, how='any')
df = df.reset_index(drop=True)
df.head()
features = df.columns[:-1]
features
X = df[features]
y = df['num']
y.unique()
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
X_train, X_test, y_train, y_test =\
train_test_split(X, y,
test_size=0.3,
random_state=1)
X_train.shape, y_train.shape
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier
# Build a classification task using 3 informative features
# Build a forest and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250,
random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
for f in range(5):
print("%d. feature %d - %s (%f)" % (f + 1, indices[f], features[indices[f]] ,importances[indices[f]]))
best_features = []
for i in indices[:5]:
best_features.append(features[i])
# Plot the top 5 feature importances of the forest
plt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
plt.title("Feature importances")
plt.bar(range(5), importances[indices][:5],
color="r", yerr=std[indices][:5], align="center")
plt.xticks(range(5), best_features)
plt.xlim([-1, 5])
plt.show()
from time import time
t0=time()
print ("DecisionTree")
dt = DecisionTreeClassifier(min_samples_split=20,random_state=99)
# dt = DecisionTreeClassifier(min_samples_split=20,max_depth=5,random_state=99)
clf_dt=dt.fit(X_train,y_train)
print ("Acurracy: ", clf_dt.score(X_test,y_test))
t1=time()
print ("time elapsed: ", t1-t0)
tt0=time()
print ("cross result========")
scores = cross_validation.cross_val_score(dt, X, y, cv=5)
print (scores)
print (scores.mean())
tt1=time()
print ("time elapsed: ", tt1-tt0)
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
pipeline = Pipeline([
('clf', DecisionTreeClassifier(criterion='entropy'))
])
parameters = {
'clf__max_depth': (15, 20 , 25),
'clf__min_samples_leaf': (3, 5, 10)
}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='f1')
grid_search.fit(X_train, y_train)
print ('Best score: %0.3f' % grid_search.best_score_)
print ('Best parameters set:')
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print ('\t%s: %r' % (param_name, best_parameters[param_name]))
predictions = grid_search.predict(X_test)
print (classification_report(y_test, predictions))
t4=time()
print ("NaiveBayes")
nb = BernoulliNB()
clf_nb=nb.fit(X_train,y_train)
print ("Acurracy: ", clf_nb.score(X_test,y_test))
t5=time()
print ("time elapsed: ", t5-t4)
tt4=time()
print ("cross result========")
scores = cross_validation.cross_val_score(nb, X, y, cv=5)
print (scores)
print (scores.mean())
tt5=time()
print ("time elapsed: ", tt5-tt4)
t6=time()
print ("KNN")
knn = KNeighborsClassifier()
clf_knn=knn.fit(X_train, y_train)
print ("Acurracy: ", clf_knn.score(X_test,y_test) )
t7=time()
print ("time elapsed: ", t7-t6)
tt6=time()
print ("cross result========")
scores = cross_validation.cross_val_score(knn, X, y, cv=5)
print (scores)
print (scores.mean())
tt7=time()
print ("time elapsed: ", tt7-tt6)
t7=time()
print ("SVM")
svc = SVC()
clf_svc=svc.fit(X_train, y_train)
print ("Acurracy: ", clf_svc.score(X_test,y_test) )
t8=time()
print ("time elapsed: ", t8-t7)
tt7=time()
print ("cross result========")
scores = cross_validation.cross_val_score(svc, X,y, cv=5)
print (scores)
print (scores.mean())
tt8=time()
print ("time elapsed: ", tt7-tt6)
from sklearn.cross_validation import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
import numpy as np
clf2 = DecisionTreeClassifier(max_depth=1,
criterion='entropy',
random_state=0)
clf3 = KNeighborsClassifier(n_neighbors=1,
p=2,
metric='minkowski')
pipe3 = Pipeline([['sc', StandardScaler()],
['clf', clf3]])
clf_labels = ['Decision Tree', 'KNN']
print('10-fold cross validation:\n')
for clf, label in zip([clf2, pipe3], clf_labels):
scores = cross_val_score(estimator=clf,
X=X_train,
y=y_train,
cv=10,
scoring='roc_auc')
print("ROC AUC: %0.2f (+/- %0.2f) [%s]"
% (scores.mean(), scores.std(), label))
# Majority Rule (hard) Voting
mv_clf = MajorityVoteClassifier(
classifiers=[clf2, pipe3])
clf_labels += ['Majority Voting']
all_clf = [clf2, pipe3, mv_clf]
for clf, label in zip(all_clf, clf_labels):
scores = cross_val_score(estimator=clf,
X=X_train,
y=y_train,
cv=10,
scoring='roc_auc')
print("ROC AUC: %0.2f (+/- %0.2f) [%s]"
% (scores.mean(), scores.std(), label))
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
colors = ['black', 'orange', 'blue', 'green']
linestyles = [':', '--', '-.', '-']
for clf, label, clr, ls \
in zip(all_clf,
clf_labels, colors, linestyles):
# assuming the label of the positive class is 1
y_pred = clf.fit(X_train,
y_train).predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_true=y_test,
y_score=y_pred)
roc_auc = auc(x=fpr, y=tpr)
plt.plot(fpr, tpr,
color=clr,
linestyle=ls,
label='%s (auc = %0.2f)' % (label, roc_auc))
plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1],
linestyle='--',
color='gray',
linewidth=2)
plt.xlim([-0.1, 1.1])
plt.ylim([-0.1, 1.1])
plt.grid()
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.tight_layout()
# plt.savefig('./figures/roc.png', dpi=300)
plt.show()
mv_clf.get_params()
from sklearn.grid_search import GridSearchCV
params = {'decisiontreeclassifier__max_depth': [1,2],
'pipeline__clf__n_neighbors': [5,15,20]}
grid = GridSearchCV(estimator=mv_clf,
param_grid=params,
cv=10,
scoring='roc_auc')
grid.fit(X_train, y_train)
for params, mean_score, scores in grid.grid_scores_:
print("%0.3f+/-%0.2f %r"
% (mean_score, scores.std() / 2, params))
print('Best parameters: %s' % grid.best_params_)
print('Accuracy: %.2f' % grid.best_score_)
X = df[features]
y = df['num']
X.shape , y.shape
from sklearn.preprocessing import LabelEncoder
from sklearn.cross_validation import train_test_split
le = LabelEncoder()
y = le.fit_transform(y)
X_train, X_test, y_train, y_test =\
train_test_split(X, y,
test_size=0.40,
random_state=1)
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(criterion='entropy',
max_depth=None)
bag = BaggingClassifier(base_estimator=tree,
n_estimators=500,
max_samples=1.0,
max_features=1.0,
bootstrap=True,
bootstrap_features=False,
n_jobs=1,
random_state=1)
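# Editorial note: with max_samples=1.0 and bootstrap=True each of the 500 trees is grown on a
# bootstrap sample as large as the training set, using all features (max_features=1.0,
# bootstrap_features=False) -- i.e. plain bagging of unpruned decision trees.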
from sklearn.metrics import accuracy_score
tree = tree.fit(X_train, y_train)
y_train_pred = tree.predict(X_train)
y_test_pred = tree.predict(X_test)
tree_train = accuracy_score(y_train, y_train_pred)
tree_test = accuracy_score(y_test, y_test_pred)
print('Decision tree train/test accuracies %.3f/%.3f'
% (tree_train, tree_test))
bag = bag.fit(X_train, y_train)
y_train_pred = bag.predict(X_train)
y_test_pred = bag.predict(X_test)
bag_train = accuracy_score(y_train, y_train_pred)
bag_test = accuracy_score(y_test, y_test_pred)
print('Bagging train/test accuracies %.3f/%.3f'
% (bag_train, bag_test))
from sklearn.ensemble import AdaBoostClassifier
tree = DecisionTreeClassifier(criterion='entropy',
max_depth=1)
ada = AdaBoostClassifier(base_estimator=tree,
n_estimators=500,
learning_rate=0.1,
random_state=0)
tree = tree.fit(X_train, y_train)
y_train_pred = tree.predict(X_train)
y_test_pred = tree.predict(X_test)
tree_train = accuracy_score(y_train, y_train_pred)
tree_test = accuracy_score(y_test, y_test_pred)
print('Decision tree train/test accuracies %.3f/%.3f'
% (tree_train, tree_test))
ada = ada.fit(X_train, y_train)
y_train_pred = ada.predict(X_train)
y_test_pred = ada.predict(X_test)
ada_train = accuracy_score(y_train, y_train_pred)
ada_test = accuracy_score(y_test, y_test_pred)
print('AdaBoost train/test accuracies %.3f/%.3f'
% (ada_train, ada_test))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Certain classifiers in scikit-learn can also return the probability of a predicted class label via the predict_proba method. Using the predicted class probabilities instead of the class labels for majority voting can be useful if the classifiers in our ensemble are well calibrated.
Step7: Putting everything together, let's now implement a MajorityVoteClassifier
Step8: A lot of comments are added to the code to better understand the individual parts. However, before we implement the remaining methods, let's take a quick break and discuss some of the code that may look confusing at first. We used the parent classes BaseEstimator and ClassifierMixin to get some base functionality for free, including the methods get_params and set_params to set and return the classifier's parameters as well as the score method to calculate the prediction accuracy, respectively. Also note that we imported six to make the MajorityVoteClassifier compatible with Python 2.7.
Step9: Thus we have our dataset. But we want our task to be a binary classification task, i.e. we would like to classify whether the patient has a heart disease or not. However, our target variable 'num' contains 5 values: 0, 1, 2, 3, 4. We will simply attempt to distinguish presence (values 1, 2, 3, 4) from absence (value 0). We can clean our target variable values accordingly
Step10: Our data contains 6 rows with missing values. These values are represented by "?". So first we replace these "?" with NaN and then drop all rows which contain NaNs. We can simply achieve this by doing the following
Step11: Now we can move on to classification of our data.
Step12: Our target variable y has 2 unique values 0 and 1. 0 means the patient doesn't have a heart disease; 1 means unfortunately he/she does.
Step13: Feature importances with forests of trees
Step14: Decision Tree accuracy and time elapsed calculation
Step15: Tuning our hyperparameters using GridSearch
Step16: Naive Bayes accuracy and time elapsed calculation
Step17: KNN accuracy and time elapsed calculation
Step18: SVM accuracy and time elapsed calculation
Step19: Using the training dataset, we now will train two different classifiers— a decision tree classifier, and a k-nearest neighbors classifier—and look at their individual performances via a 10-fold cross-validation
Step20: As you can see, the accuracies of our individual classifiers are almost the same and are on the high side. Now let's move on to the more exciting part and combine the individual classifiers for majority rule voting in our MajorityVoteClassifier
Step21: As we can see, the performance of the MajorityVotingClassifier has substantially improved over the individual classifiers in the 10-fold cross-validation evaluation.
Step22: As we can see in the resulting ROC, the ensemble classifier also performs well on
Step23: Based on the values returned by the get_params method, we now know how to access the individual classifier's attributes. Let's now tune the decision tree depth via a grid search for demonstration purposes. The code is as follows
Step24: After the grid search has completed, we can print the different hyperparameter
Step25: As we can see, we get the best cross-validation results when we choose a higher n_neighbors (n = 20) whereas the tree depth does not seem to affect the performance at all, suggesting that a decision stump is sufficient to separate the data. To remind ourselves that it is a bad practice to use the test dataset more than once for model evaluation, we are not going to estimate the generalization performance of the tuned hyperparameters in this section. We will move on swiftly to an alternative approach for ensemble learning
Step26: Next we encode the class labels into binary format and split the dataset into
Step27: A BaggingClassifier algorithm is already implemented in scikit-learn, which we can import from the ensemble submodule. Here, we will use an unpruned decision tree as the base classifier and create an ensemble of 500 decision trees fitted on different bootstrap samples of the training dataset
Step28: Next we will calculate the accuracy score of the prediction on the training and test dataset to compare the performance of the bagging classifier to the performance of a single unpruned decision tree. Based on the accuracy values, the unpruned decision tree predicts all class labels of the training samples correctly; however, the substantially lower test accuracy indicates high variance (overfitting) of the model
Step29: Although the training accuracies of the decision tree and bagging classifier are similar on the training set (both 1.0), we can see that the bagging classifier has a slightly better generalization performance as estimated on the test set.
Step30: As we can see, the decision tree stump tends to underfit the training data, in contrast with the unpruned decision tree that we saw in the previous section.
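An illustrative aside for Step1 above: a minimal, hedged sketch (synthetic data and classifiers chosen only for illustration, not the heart-disease dataset used in this notebook) of how averaging predict_proba outputs gives a soft-voting prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf_a = LogisticRegression().fit(X, y)
clf_b = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# average the per-class probabilities, then take the most probable class
proba_avg = np.mean([clf_a.predict_proba(X), clf_b.predict_proba(X)], axis=0)
soft_vote = np.argmax(proba_avg, axis=1)
print("soft-vote training accuracy:", (soft_vote == y).mean())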
|
115 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
#Physical Constants (SI units)
G=6.67e-11 #Universal Gravitational constant in m^3 per kg per s^2
AU=1.5e11 #Astronomical Unit in meters = Distance between sun and earth
daysec=24.0*60*60 #seconds in a day
#####run specific constants. Change as needed#####
#Masses in kg
Ma=6.0e24 #always set as smaller mass
Mb=2.0e30 #always set as larger mass
#Time settings
t=0.0 #Starting time
dt=.01*daysec #Time set for simulation
tend=300*daysec #Time where simulation ends
#Initial conditions (position [m] and velocities [m/s] in x,y,z coordinates)
#For Ma
xa=1.0*AU
ya=0.0
vxa=0.0
vya=30000.0
#For Mb
xb=0.0
yb=0.0
vxb=0.0
vyb=0.0
#Function to compute the force between the two objects
def Fg(Ma,Mb,G,xa,xb,ya,yb):
#Compute rx and ry between Ma and Mb
rx=xb-xa
ry=yb-ya#Write it in
#compute r^3, remembering r=sqrt(rx^2+ry^2)
r3=np.sqrt(rx**2+ry**2)**3 #Write in r^3 using the equation above. Make use of np.sqrt()
#Compute the force in Newtons. Use the equations above as a Guide!
fx=-G*Ma*Mb*rx/r3 #Write it in
fy=-G*Ma*Mb*ry/r3 #Write it in
return fx,fy #What do we return?
def simulate(Ma,Mb,G,xa,ya,vxa,vya,xb,yb,vxb,vyb):
t=0
#Run a loop for the simulation. Keep track of Ma and Mb positions and velocities
#Initialize vectors (otherwise there is nothing to append to!)
xaAr=np.array([])
yaAr=np.array([])
vxaAr=np.array([])
vyaAr=np.array([])
xbAr=np.array([])#Write it in for Particle B
ybAr=np.array([])#Write it in for Particle B
vxbAr=np.array([])
vybAr=np.array([])
#using while loop method with appending. Can also be done with for loops
while t<tend: #Write the end condition here.
#Compute current force on Ma and Mb. Ma recieves the opposite force of Mb
fx,fy=Fg(Ma,Mb,G,xa,xb,ya,yb)
#Update the velocities and positions of the particles
vxa=vxa-fx*dt/Ma
vya=vya-fy*dt/Ma#Write it in for y
vxb=vxb+fx*dt/Mb#Write it in for x
vyb=vyb+fy*dt/Mb
xa=xa+vxa*dt
ya=ya+vya*dt#Write it in for y
xb=xb+vxb*dt#Write it in for x
yb=yb+vyb*dt
#Save data to lists
xaAr=np.append(xaAr,xa)
yaAr=np.append(yaAr,ya)
xbAr=np.append(xbAr,xb)#How will we append it here?
ybAr=np.append(ybAr,yb)
#update the time by one time step, dt
t=t+dt
return(xaAr,yaAr,xbAr,ybAr)
#####Reminder of specific constants. Change as needed#####
#Masses in kg
Ma=6.0e24 #always set as smaller mass
Mb=2.0e30 #always set as larger mass
#Time settings
t=0.0 #Starting time
dt=.01*daysec #Time set for simulation
tend=300*daysec #Time where simulation ends
#Initial conditions (position [m] and velocities [m/s] in x,y,z coordinates)
#For Ma
xa=1.0*AU
ya=0.0
vxa=0.0
vya=30000.0
#For Mb
xb=0.0
yb=0.0
vxb=0.0
vyb=0.0
#Do simulation with these parameters
xaAr,yaAr,xbAr,ybAr = simulate(Ma,Mb,G,xa,ya,vxa,vya,xb,yb,vxb,vyb)#Insert the variable for y position of B particle)
from IPython.display import Image
Image("Earth-Sun-averageResult.jpg")
plt.figure()
plt.plot(xaAr/AU,yaAr/AU)
plt.plot(xbAr/AU,ybAr/AU)#Add positions for B particle)
plt.show()
#Mass distribution parameters
Mave=7.0e24 #The average asteroid mass
Msigma=1.0e24 #The standard deviation of asteroid masses
Size=3 #The number of asteroids we wish to simulate
#Draw 3 masses from normally distributed asteroid mass distribution
MassAr = Msigma * np.random.randn(Size) + Mave #Add your normal a.k.a. Gaussian distribution function, noting that the input to your numpy random number generator function will be: (Size)
plt.figure()
for mass in MassAr:#What array should we loop over?:
xaAr,yaAr,xbAr,ybAr=simulate(mass,Mb,G,xa,ya,vxa,vya,xb,yb,vxb,vyb)
plt.plot(xaAr/AU,yaAr/AU,label='Mass = %.2e'%mass) #Provide labels for each asteroid mass so we can generate a legend.
#Pro tip: The percent sign replaces '%.2e' in the string with the variable formatted the way we want!
plt.legend()
plt.show()
#draw 5 normally distributed mass values using the above parameters:
Size=5
MassAr = Msigma * np.random.randn(Size) + Mave
plt.figure()
for mass in MassAr:
xaAr,yaAr,xbAr,ybAr=simulate(mass,Mb,G,xa,ya,vxa,vya,xb,yb,vxb,vyb)
plt.plot(xaAr/AU,yaAr/AU,label='Mass = %.2e'%mass)
plt.legend()
plt.show()
#Draw 3 velocities from normally distributed asteroid mass distribution
Size = 3
Dimensions = 2
Vave=20000 #The average asteroid velocity in m
Vsigma=6000 #The standard deviation of asteroid velocities in m
#You can make normal arrays with different dimensions! See: https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.randn.html
VelAr = Vsigma * np.random.randn(Size,Dimensions) + Vave #a 2D array
for v in VelAr:
xaAr,yaAr,xbAr,ybAr=simulate(mass,Mb,G,xa,ya,v[0],v[1],xb,yb,vxb,vyb)
plt.plot(xaAr/AU,yaAr/AU,label='Velocity of Ma: vx = %.2e, vy = %.2e'%(v[0],v[1]))
plt.legend()
plt.show()
from IPython.display import Image
Image(filename="fig_example.jpg")
SMALL_SIZE = 12
MEDIUM_SIZE = 15
BIGGER_SIZE = 20
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=BIGGER_SIZE) # fontsize of the figure title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the x number labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the y numer labels
plt.rc('legend', fontsize=MEDIUM_SIZE) # legend fontsize
colors=['black','blue','orange']
markers=['x','*','+']
styles=['--','-',':']
plt.figure(figsize=(8,6))
dt=10*daysec #Increase time set for simulation to better show markers individually
for mass,color,mrk,sty in zip(MassAr,colors,markers,styles):
xaAr,yaAr,xbAr,ybAr=simulate(mass,Mb,G,xa,ya,vxa,vya,xb,yb,vxb,vyb)
plt.plot(xaAr/AU,yaAr/AU,label='Mass = %.2e'%mass, color=color, marker=mrk,linestyle=sty,linewidth=mass/Mave) #weighting width of lines by mass
plt.legend()
plt.title('Asteroid Trajectories')
plt.xlabel('x position (m)')
plt.ylabel('y position (m)')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we will define the physical constants of our system, which will also establish the unit system we have chosen. We'll use SI units here. Below, I've already created the constants. Make sure you understand what they are before moving on.
Step2: Next, we will need parameters for the simulation. These are known as initial condititons. For a 2 body gravitation problem, we'll need to know the masses of the two objects, the starting posistions of the two objects, and the starting velocities of the two objects.
Step3: It will be nice to create a function for the force between Ma and Mb. Below is the physics for the force of Ma on Mb. How the physics works here is not important for the moment. Right now, I want to make sure you can translate the math shown into a python function. (I'll show a picture of the physics behind this math for those interested.)
Step4: Now that we have our force function, we will make a new function which does the whole simulation for a set of initial conditions. We call this function 'simulate' and it will take all the initial conditions as inputs. It will loop over each time step and call the force function to find the new positions for the asteroids at each time step.
Step5: Now we will call our simulate function with the initial conditions we defined earlier! We will take the output of simulate and store the x and y positions of the two particles.
Step6: Now for the fun part (or not so fun part if your simulation has an issue), plot your results! This is something well covered in previous lectures. Show me a plot of (xa,ya) and (xb,yb). Does it look sort of familiar? Hopefully you get something like the below image (in units of AU).
Step7: Challenge #1
Step8: We now wish to draw a random sample of asteroid masses from this distribution (Hint
Step9: Now let's loop over our random asteroid sample, run simulate and plot the results, for each one!
Step10: Going further
Step11: Challenge #2
Step12: Additionally, publications won't always be printed in color, and not all readers have the ability to distinguish colors or text size in the same way, so differences in style improve accessibility as well.
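A small illustrative sketch for the sampling hint in Step8 above; the mass parameters match the ones defined in the notebook, and the two calls below are simply two equivalent ways of drawing from the same normal distribution.
import numpy as np
Mave, Msigma, Size = 7.0e24, 1.0e24, 3
draw_a = Msigma * np.random.randn(Size) + Mave   # scale-and-shift standard normals
draw_b = np.random.normal(Mave, Msigma, Size)    # same distribution, drawn directly
print(draw_a)
print(draw_b)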
|
116 | <ASSISTANT_TASK:>
Python Code:
print("Hello world!")
# aligned with the opening delimiter
foo = moja_dluga_funkcja(zmienna_jeden, zmienna_dwa,
zmienna_trzy, zmienna_cztery)
# extra indentation to distinguish the arguments from the function body
def moja_dluga_funkcja(
zmienna_jeden, zmienna_dwa, zmienna_trzy,
zmienna_cztery):
print(zmienna_jeden)
foo = moja_dluga_funkcja(
zmienna_jeden, zmienna_dwa,
zmienna_trzy, zmienna_cztery)
# in this case the indentation does not have to be 4 spaces
foo = moja_dluga_funkcja(
zmienna_jeden, zmienna_dwa,
zmienna_trzy, zmienna_cztery)
moja_lista = [
1, 2, 3,
4, 5, 6,
]
wynik = funkcja_przyjmujaca_argumenty(
'a', 'b', 'c',
'd', 'e', 'f',
)
# aligning with the first character of the line that opens the multiline construct is also allowed
moja_lista = [
1, 2, 3,
4, 5, 6,
]
wynik = funkcja_przyjmujaca_argumenty(
'a', 'b', 'c',
'd', 'e', 'f',
)
przychod = (zarobki_brutto
+ zwrot_z_podatku
+ (dywidendy - podatek_od_dywidend)
- ubezpieczenie_samochodu
- kredyt_studencki)
# Good:
import os
import sys
# Bad:
import sys, os
# Good:
from subprocess import Popen, PIPE
"""Example module.

Something happens here.
"""
from __future__ import jakiś_moduł
__all__ = ['a', 'b', 'c']
__version__ = '0.1'
__author__ = 'Andrzej Krawczyk'
import os
import sys
# Good:
def average(count, length=5):
return foo(c=count, l=length)
# Bad:
def complex(count, length = 5):
return foo(c = count, l = length)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Running the application
|
117 | <ASSISTANT_TASK:>
Python Code:
%pylab inline
import pandas as pd
import matplotlib.pylab as plt
import seaborn as sns
import statsmodels.api as sm
# read the data and inspect
titanic = pd.read_csv('titanic-data.csv')
print titanic.info()
titanic.head()
# drop those columns we are not interested in.
titanic.drop(["Name", "Ticket", "Cabin", "Embarked", "Fare"], axis=1, inplace=True)
# helpers ----------------------------
def value_in_range(series, min, max):
assert pd.to_numeric(series, errors="raise").all() # make sure those are numbers
return min <= series.min() and series.max() <= max
# ------------------------------------
# Sex can either be male or female
assert titanic["Sex"].isin(["male", "female"]).all() == True
# Survived
assert titanic["Survived"].isin([0,1]).all() == True
# Pclass should be either 1, 2, or 3 and
assert titanic["Pclass"].isin([1, 2, 3]).all() == True
# Age should be sensible, say between 0 to 100
AgeSeries = titanic[titanic["Age"].isnull() == False]["Age"]
assert value_in_range(AgeSeries, 0, 100) == True
# using seaborn's pointplot to visually explore the relationship between individual categorical varibles
sns.set_style("whitegrid")
g = sns.PairGrid(titanic, x_vars=["Sex", "Pclass", "SibSp", "Parch"], y_vars=["Survived"], size=4)
g.map(sns.pointplot, ci=99)
g.axes[0,0].set_ylabel("survival rate")
g.fig.suptitle("Point Plots")
# ploting the kernal distribution for age
figure = plt.figure()
ax_top = figure.add_subplot(211)
ax_top.set_xlim(0,85)
ax_top.set_xlabel("Age")
ax_top.set_ylabel("Proportional of Population")
ax_top.set_title("Kernal Density Estimate for Age grouped by survival")
ax_bottom = figure.add_subplot(212)
ax_bottom.set_xlim(0,85)
ax_bottom.set_title("Boxplot for Age distribution grouped by survival")
x = titanic[titanic["Survived"] == 1]
y = titanic[titanic["Survived"] == 0]
_ = sns.kdeplot(x["Age"].dropna(),
label="survived == True",
cut= True, shade=True,
ax=ax_top)
_ = sns.kdeplot(y["Age"].dropna(),
label="survived == False",
cut=True, shade=True,
ax=ax_top)
_ = sns.boxplot(x="Age",
y="Survived",
data=titanic.dropna(subset = ["Age"]),
orient="h",
ax=ax_bottom)
plt.tight_layout()
# Drop data points if contain NA in any feature.
titanic_dropna = titanic.dropna(subset=["Survived", "Age", "Sex", "Pclass", "SibSp", "Parch"])
# convert "Sex" to numberic representation, which is required for regressions.
titanic_dropna["Sex"] = titanic_dropna["Sex"].apply(lambda x: {"female": 0, "male": 1}[x])
dep = titanic_dropna["Survived"]
indep = titanic_dropna[["Sex", "Pclass", "SibSp", "Parch", "Age"]]
print(sm.Logit(dep, sm.add_constant(indep)).fit().get_margeff().summary())
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Name and Embarked are dropped from the dataset because passenger name and embarking location shouldn't have any meaningful correlation with the chance of surviving. Arguably, the embarking locations might give some indication of passengers' socio-economic background. However, I will be in favor of using "Pclass", because it is specifically mentioned in the special notes in the data source to be a proxy for socio-economic status.
Step2: <hr>
Step3: Sex
Step4: Age
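A hedged, self-contained sketch (toy rows, not the actual titanic-data.csv) of the two preprocessing moves used above: mapping 'Sex' to a numeric code and dropping rows with a missing 'Age'.
import numpy as np
import pandas as pd
toy = pd.DataFrame({"Sex": ["male", "female", "female"],
                    "Age": [22.0, np.nan, 30.0],
                    "Survived": [0, 1, 1]})
toy["Sex"] = toy["Sex"].map({"female": 0, "male": 1})  # same encoding as the apply() above
toy = toy.dropna(subset=["Age"])                       # keep only rows with a known age
print(toy)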
|
118 | <ASSISTANT_TASK:>
Python Code:
# TODO 1: Install TF.Text TensorFlow library
!pip install -q "tensorflow-text==2.8.*"
import tensorflow as tf
import tensorflow_text as text
hypotheses = tf.ragged.constant([['captain', 'of', 'the', 'delta', 'flight'],
['the', '1990', 'transcript']])
references = tf.ragged.constant([['delta', 'air', 'lines', 'flight'],
['this', 'concludes', 'the', 'transcript']])
result = text.metrics.rouge_l(hypotheses, references)
print('F-Measure: %s' % result.f_measure)
print('P-Measure: %s' % result.p_measure)
print('R-Measure: %s' % result.r_measure)
# Compute ROUGE-L with alpha=0
result = text.metrics.rouge_l(hypotheses, references, alpha=0)
print('F-Measure (alpha=0): %s' % result.f_measure)
print('P-Measure (alpha=0): %s' % result.p_measure)
print('R-Measure (alpha=0): %s' % result.r_measure)
# TODO 2: Compute ROUGE-L with alpha=1
result = text.metrics.rouge_l(hypotheses, references, alpha=1)
print('F-Measure (alpha=1): %s' % result.f_measure)
print('P-Measure (alpha=1): %s' % result.p_measure)
print('R-Measure (alpha=1): %s' % result.r_measure)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Please ignore any incompatibility warnings and errors.
Step2: ROUGE-L
Step3: The hypotheses and references are expected to be tf.RaggedTensors of tokens. Tokens are required instead of raw sentences because no single tokenization strategy fits all tasks.
Step4: ROUGE-L has an additional hyperparameter, alpha, which determines the weight of the harmonic mean used for computing the F-Measure. Values closer to 0 treat Recall as more important and values closer to 1 treat Precision as more important. alpha defaults to .5, which corresponds to equal weight for Precision and Recall.
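A hedged sketch of one way to turn raw sentences into the token RaggedTensors that rouge_l expects; the whitespace tokenizer is just one possible choice, and the resulting scores depend on the tokenization used.
import tensorflow_text as text
tokenizer = text.WhitespaceTokenizer()
hyp = tokenizer.tokenize(['captain of the delta flight', 'the 1990 transcript'])
ref = tokenizer.tokenize(['delta air lines flight', 'this concludes the transcript'])
for a in (0.0, 0.5, 1.0):   # recall-weighted, balanced, precision-weighted
    result = text.metrics.rouge_l(hyp, ref, alpha=a)
    print(a, result.f_measure.numpy())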
|
119 | <ASSISTANT_TASK:>
Python Code:
from learntools.core import binder
binder.bind(globals())
from learntools.data_cleaning.ex5 import *
print("Setup Complete")
# modules we'll use
import pandas as pd
import numpy as np
# helpful modules
import fuzzywuzzy
from fuzzywuzzy import process
import chardet
# read in all our data
professors = pd.read_csv("../input/pakistan-intellectual-capital/pakistan_intellectual_capital.csv")
# set seed for reproducibility
np.random.seed(0)
# convert to lower case
professors['Country'] = professors['Country'].str.lower()
# remove trailing white spaces
professors['Country'] = professors['Country'].str.strip()
# get the top 10 closest matches to "south korea"
countries = professors['Country'].unique()
matches = fuzzywuzzy.process.extract("south korea", countries, limit=10, scorer=fuzzywuzzy.fuzz.token_sort_ratio)
def replace_matches_in_column(df, column, string_to_match, min_ratio = 47):
# get a list of unique strings
strings = df[column].unique()
# get the top 10 closest matches to our input string
matches = fuzzywuzzy.process.extract(string_to_match, strings,
limit=10, scorer=fuzzywuzzy.fuzz.token_sort_ratio)
# only keep matches whose ratio is at least min_ratio
close_matches = [matches[0] for matches in matches if matches[1] >= min_ratio]
# get the rows of all the close matches in our dataframe
rows_with_matches = df[column].isin(close_matches)
# replace all rows with close matches with the input matches
df.loc[rows_with_matches, column] = string_to_match
# let us know the function's done
print("All done!")
replace_matches_in_column(df=professors, column='Country', string_to_match="south korea")
countries = professors['Country'].unique()
# TODO: Your code here
#%%RM_IF(PROD)%%
unis = professors['Graduated from'].unique()
# sort them alphabetically and then take a closer look
unis.sort()
unis
# Check your answer (Run this code cell to receive credit!)
q1.check()
# Line below will give you a hint
#_COMMENT_IF(PROD)_
q1.hint()
# TODO: Your code here
____
# Check your answer
q2.check()
#%%RM_IF(PROD)%%
q2.assert_check_failed()
#%%RM_IF(PROD)%%
professors['Graduated from'] = professors['Graduated from'].str.strip()
q2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q2.hint()
#_COMMENT_IF(PROD)_
q2.solution()
# get all the unique values in the 'City' column
countries = professors['Country'].unique()
# sort them alphabetically and then take a closer look
countries.sort()
countries
# TODO: Your code here!
____
# Check your answer
q3.check()
#%%RM_IF(PROD)%%
q3.assert_check_failed()
#%%RM_IF(PROD)%%
matches = fuzzywuzzy.process.extract("usa", countries, limit=10, scorer=fuzzywuzzy.fuzz.token_sort_ratio)
replace_matches_in_column(df=professors, column='Country', string_to_match="usa", min_ratio=70)
#q3.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q3.hint()
#_COMMENT_IF(PROD)_
q3.solution()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get our environment set up
Step2: Next, we'll redo all of the work that we did in the tutorial.
Step3: 1) Examine another column
Step4: Do you notice any inconsistencies in the data? Can any of the inconsistencies in the data be fixed by removing white spaces at the beginning and end of cells?
Step5: 2) Do some text pre-processing
Step6: 3) Continue working with countries
Step7: Take another look at the "Country" column and see if there's any more data cleaning we need to do.
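A small illustrative sketch (made-up strings) of the scores that the token_sort_ratio scorer produces; these are the numbers that min_ratio is compared against in replace_matches_in_column.
from fuzzywuzzy import fuzz, process
choices = ['usa', 'usofa', 'united states', 'france']
print(fuzz.token_sort_ratio('usa', 'usofa'))            # single pairwise score, 0-100
print(process.extract('usa', choices, limit=3,
                      scorer=fuzz.token_sort_ratio))    # ranked (string, score) pairs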
|
120 | <ASSISTANT_TASK:>
Python Code:
import torch
import numpy as np
torch.__version__
arr = np.array([1,2,3,4,5])
print(arr)
print(arr.dtype)
print(type(arr))
x = torch.from_numpy(arr)
# Equivalent to x = torch.as_tensor(arr)
print(x)
# Print the type of data held by the tensor
print(x.dtype)
# Print the tensor object type
print(type(x))
print(x.type()) # this is more specific!
arr2 = np.arange(0.,12.).reshape(4,3)
print(arr2)
x2 = torch.from_numpy(arr2)
print(x2)
print(x2.type())
# Using torch.from_numpy()
arr = np.arange(0,5)
t = torch.from_numpy(arr)
print(t)
arr[2]=77
print(t)
# Using torch.tensor()
arr = np.arange(0,5)
t = torch.tensor(arr)
print(t)
arr[2]=77
print(t)
data = np.array([1,2,3])
a = torch.Tensor(data) # Equivalent to cc = torch.FloatTensor(data)
print(a, a.type())
b = torch.tensor(data)
print(b, b.type())
c = torch.tensor(data, dtype=torch.long)
print(c, c.type())
x = torch.empty(4, 3)
print(x)
x = torch.zeros(4, 3, dtype=torch.int64)
print(x)
x = torch.arange(0,18,2).reshape(3,3)
print(x)
x = torch.linspace(0,18,12).reshape(3,4)
print(x)
x = torch.tensor([1, 2, 3, 4])
print(x)
print(x.dtype)
print(x.type())
x = torch.FloatTensor([5,6,7])
print(x)
print(x.dtype)
print(x.type())
x = torch.tensor([8,9,-3], dtype=torch.int)
print(x)
print(x.dtype)
print(x.type())
print('Old:', x.type())
x = x.type(torch.int64)
print('New:', x.type())
x = torch.rand(4, 3)
print(x)
x = torch.randn(4, 3)
print(x)
x = torch.randint(0, 5, (4, 3))
print(x)
x = torch.zeros(2,5)
print(x)
x2 = torch.randn_like(x)
print(x2)
x3 = torch.ones_like(x2)
print(x3)
torch.manual_seed(42)
x = torch.rand(2, 3)
print(x)
torch.manual_seed(42)
x = torch.rand(2, 3)
print(x)
x.shape
x.size() # equivalent to x.shape
x.device
x.layout
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Confirm you're using PyTorch version 1.1.0
Step2: Converting NumPy arrays to PyTorch tensors
Step3: Here <tt>torch.DoubleTensor</tt> refers to 64-bit floating point data.
Step4: Class constructors
Step5: Creating tensors from scratch
Step6: Initialized tensors with <tt>.zeros()</tt> and <tt>.ones()</tt>
Step7: Tensors from ranges
Step8: Tensors from data
Step9: Alternatively you can set the type by the tensor method used.
Step10: You can also pass the dtype in as an argument. For a list of dtypes visit https
Step11: Changing the dtype of existing tensors
Step12: Random number tensors
Step13: Random number tensors that follow the input size
Step14: The same syntax can be used with<br>
Step15: Setting the random seed
Step16: Tensor attributes
Step17: PyTorch supports use of multiple <a href='https
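The sentence above is cut off, but x.device in the code refers to tensor placement; below is a minimal sketch of moving a tensor between devices with the standard torch API (it falls back to the CPU when no GPU is present).
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
x = torch.rand(2, 3).to(device)   # copies to the GPU if one is available
print(x.device)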
|
121 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import string
from pyspark import SparkContext
sc = SparkContext('local[*]')
ulysses = sc.textFile('data/Ulysses.txt')
ulysses.take(10)
num_lines = sc.accumulator(0)
def tokenize(line):
table = dict.fromkeys(map(ord, string.punctuation))
return line.translate(table).lower().strip().split()
def tokenize_count(line):
global num_lines
if line:
num_lines += 1
return tokenize(line)
counter = ulysses.flatMap(lambda line: tokenize_count(line)).countByValue()
counter['circle']
num_lines.value
from itertools import count
table = dict(zip(string.ascii_letters, count()))
def weight_first(line, table):
words = tokenize(line)
return sum(table.get(word[0], 0) for word in words if word.isalpha())
def weight_last(line, table):
words = tokenize(line)
return sum(table.get(word[-1], 0) for word in words if word.isalpha())
ulysses.map(lambda line: weight_first(line, table)).sum()
ulysses.map(lambda line: weight_last(line, table)).sum()
table_bc = sc.broadcast(table)
def weight_first_bc(line, table):
words = tokenize(line)
return sum(table.value.get(word[0], 0) for word in words if word.isalpha())
def weight_last_bc(line, table):
words = tokenize(line)
return sum(table.value.get(word[-1], 0) for word in words if word.isalpha())
ulysses.map(lambda line: weight_first_bc(line, table_bc)).sum()
ulysses.map(lambda line: weight_last_bc(line, table_bc)).sum()
def fake_data(n, val):
users = list(map(''.join, np.random.choice(list(string.ascii_lowercase), (n,2))))
comments = [val]*n
return tuple(zip(users, comments))
data = fake_data(10000, 'a')
list(data)[:10]
rdd = sc.parallelize(data).reduceByKey(lambda x, y: x+y)
new_data = fake_data(1000, 'b')
list(new_data)[:10]
rdd_new = sc.parallelize(new_data).reduceByKey(lambda x, y: x+y).cache()
rdd_updated = rdd.join(rdd_new)
rdd_updated.take(10)
rdd2 = sc.parallelize(data).reduceByKey(lambda x, y: x+y)
rdd2 = rdd2.partitionBy(10).cache()
rdd2_updated = rdd2.join(rdd_new)
rdd2_updated.take(10)
%%file foo.cpp
#include <iostream>
#include <sstream>
#include <string>
#include <numeric>
#include <vector>
using namespace std;
double sum_squares(double x, double y) {
return x + y*y;
};
int main() {
string s;
while (cin) {
getline(cin, s);
stringstream stream(s);
vector<double> v;
while(1) {
double u;
stream >> u;
if(!stream)
break;
v.push_back(u);
}
if (v.size()) {
double x = accumulate(v.begin(), v.end(), 0.0, sum_squares);
cout << x << endl;
}
}
}
! g++ foo.cpp -o foo
xs = np.random.random((10, 3))
np.savetxt('numbers.txt', xs)
%%bash
./foo < numbers.txt
%%bash
cat numbers.txt | ./foo
!head numbers.txt
rdd = sc.textFile('numbers.txt')
from pyspark import SparkFiles
def prepare(line):
"""Each line contains numbers separated by a space."""
return ' '.join(line.split()) + '\n'
# pipe data to external function
func = './foo'
sc.addFile(func)
ss = rdd.map(lambda s: prepare(s)).pipe(SparkFiles.get(func))
np.array(ss.collect(), dtype='float')
np.sum(xs**2, 1)
%load_ext version_information
%version_information pyspark, numpy
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Resources
Step2: Event counting
Step3: Broadcast Variables
Step4: The dictionary table is sent out twice to worker nodes, one for each call
Step5: Converting to use broadcast variables is simple and more efficient
Step6: table_bc is sent to nodes only once.
Step7: The Spark Shuffle and Partitioning
Step8: Using partitionBy
Step9: Piping to External Programs
Step10: Feed data via re-direction
Step12: Feed data via piping
Step13: Version
|
122 | <ASSISTANT_TASK:>
Python Code:
pd.read_
"../class2/"
"data/Fatality.csv"
##Some code to run at the beginning of the file, to be able to show images in the notebook
##Don't worry about this cell
#Print the plots in this screen
%matplotlib inline
#Be able to plot images saved in the hard drive
from IPython.display import Image
#Make the notebook wider
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
#Using the symbol "#" you write comments
#printing something to the screen is easy:
print("Hello World")
#Now click on the play button in the toolbar above (or click Ctrl + Enter)
Image("./images/dashboard_files_tab.png",width=500)
Image("./images/dashboard_files_tab_new.png",width=200)
Image("./images/dashboard_files_tab_btns.png",width=400)
Image("./images/edit_mode.png")
Image("./images/command_mode.png")
Image("./images/menubar_toolbar.png")
#Let's say that a = 5, and ask jupyter with help with a. We'll see more on this later.
#Select this cell and run it (Ctrl + Enter)
a = 5.3
a?
## HOW TO IMPORT PACKAGES AND READ A CSV (we'll learn this in one hour)
#Standard mode
import pandas
spreadsheet = pandas.read_csv("data/class1_test_csv.csv")
#Standard mode with packages that have long names
import pandas as pd
spreadsheet = pd.read_csv("data/class1_test_csv.csv")
#Standard mode when you only want to import one function
from pandas import read_csv
spreadsheet = read_csv("data/class1_test_csv.csv")
#Import everything, DO NOT USE! It's against the Zen of Python (https://www.python.org/dev/peps/pep-0020/)
from pandas import *
spreadsheet = read_csv("data/class1_test_csv.csv")
#Let's install the package pandas, which is used to plot
!pip install pandas
print(type(3))
print(type(3.5))
print(type("I'm a string"))
print(type(False))
print(type(None))
##Using python as a calculator
print(5+2) #5+2
print(5*2) #5x2
print(5/2) #5/2
print(5**2) #5^2, 3 to the power of two
print(5%2) #This is called modulo, and gives you the remainder when you divide 5 by 2 (5/2 = 5*2 + 1)
##We can also "assign" the number to a "variable".
#The variable name can be whatever you want, but cannot start with a number and CANNOT spaces.
#Please use variable names that describe what they represent
#"grades_hw_1" is much better than "a14"
var1 = 5
var2 = 2
print(var1+var2) #5+2
print(var1*var2) #5x2
print(var1/var2) #5/2
print(var1**var2) #5^2, 3 to the power of two
print(var1%var2) #This is called modulo, and gives you the remainder when you divide 5 by 2 (5/2 = 5*2 + 1)
Image(url="https://upload.wikimedia.org/wikipedia/commons/c/c4/Utf8webgrowth.svg")
v = 5.321233258340857891
print("The value was {}".format(v))
print("eggs")
print("eggs" + "and" + "bacon") #concatenating strings
print("eggs" + " and " + "bacon") #concatenating strings with spaces
print("eggs and bacon".upper()) #upper case lower() for lower case
##String formatting. Each element inside format() is added in the place of each {}
print("{} {} and {} = diabetes".format(5,"sausages","bacon")) #used to format strings.
##Checking if string is contained
print("bacon" in "eggs and bacon") #checks if the string "bacon" is part of the string "eggs and bacon"
## We can also use variables
var1 = "eggs"
var2 = "bacon"
print(var1)
print(var1 + "and" + var2) #concatenating strings
print(var1 + " and " + var2) #concatenating strings with spaces
var_combined = var1 + " and " + var2
print(var_combined.upper()) #upper case lower() for lower case
##String formatting. Each element inside format() is added in the place of each {}
print("{} {} and {} = diabetes".format(5,var1,var2)) #used to format strings.
##Checking if string is contained
print("bacon" in var_combined) #checks if the string "bacon" is part of the string "eggs and bacon"
var_combined
#lower and upper case are different characters
print("bacon" in var_combined)
print("bacon in var_combined: ", "bacon" in var_combined)
print("bacon == var1: ","bacon" == var1) ##look at the == symol
print("bacon == var2: ", "bacon" == var2)
print("3 > 5: ", 3 > 5)
print("3 < 5: ", 3 < 5)
## OPERATIONS ON DATA TYPES
#Tells the computer that b = 3
b = 3
#Tells the computer that b = 5
b = 5
#Asks if b is equal to 3
print(b)
#The computer prints a (3)
a = 3
print(a)
#The computer doesn't print anything
a = 3
print()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. About python
Step2: 1.1 Jupyter notebook
Step3: NEW NOTEBOOK
Step4: BUTTONS TO REMOVE AND RENAME
Step5: CELLS IN JUPYTER NOTEBOOKS
Step6: COMMAND MODE
Step7: RUN PYTHON
Step8: WHY JUPYTER NOTEBOOKS
Step9: 1.2 Python packages
Step10: To install new packages you can use pip. For example run the code cell below
Step11: 1.3 Python
Step12: 2.1.1.1 Numbers
Step13: 2.1.1.2 Strings
Step14: 2.1.1.3 Booleans (True/False)
Step15: 2.* How Python reads your code
Step16: Tell the computer exactly what you want to do on what variable
|
123 | <ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import xarray as xr
import datetime
import numpy as np
from dask.distributed import LocalCluster, Client
import s3fs
import cartopy.crs as ccrs
import boto3
import matplotlib.pyplot as plt
bucket = 'era5-pds'
#Make sure you provide / in the end
prefix = 'zarr/2008/01/data/'
client = boto3.client('s3')
result = client.list_objects(Bucket=bucket, Prefix=prefix, Delimiter='/')
for o in result.get('CommonPrefixes'):
print (o.get('Prefix'))
client = Client()
client
fs = s3fs.S3FileSystem(anon=False)
def inc_mon(indate):
if indate.month < 12:
return datetime.datetime(indate.year, indate.month+1, 1)
else:
return datetime.datetime(indate.year+1, 1, 1)
def gen_d_range(start, end):
rr = []
while start <= end:
rr.append(start)
start = inc_mon(start)
return rr
def get_z(dtime,var):
f_zarr = 'era5-pds/zarr/{year}/{month:02d}/data/{var}.zarr/'.format(year=dtime.year, month=dtime.month,var=var)
return xr.open_zarr(s3fs.S3Map(f_zarr, s3=fs))
def gen_zarr_range(start, end,var):
return [get_z(tt,var) for tt in gen_d_range(start, end)]
%%time
tmp_a = gen_zarr_range(datetime.datetime(1979,1,1), datetime.datetime(2020,3,31),'air_temperature_at_2_metres')
tmp_all = xr.concat(tmp_a, dim='time0')
tmp = tmp_all.air_temperature_at_2_metres.sel(lon=slice(110,160),lat=slice(-10,-45)) - 272.15
sea_data = gen_zarr_range(datetime.datetime(2018,1,1), datetime.datetime(2018,1,1),'sea_surface_temperature')
sea_data_all = xr.concat(sea_data, dim='time0').sea_surface_temperature.sel(lon=slice(110,160),lat=slice(-10,-45))
sea_data_all0 = sea_data_all[0].values
mask = np.isnan(sea_data_all0)
tmp_masked = tmp.where(mask)
tmp_mean = tmp_masked.mean('time0').compute()
ax = plt.axes(projection=ccrs.Orthographic(130, -20))
tmp_mean.plot.contourf(ax=ax, transform=ccrs.PlateCarree())
ax.set_global()
ax.coastlines();
plt.draw()
yearly_tmp_AU = tmp_masked.groupby('time0.year').mean('time0').mean(dim=['lon','lat'])
f, ax = plt.subplots(1, 1)
yearly_tmp_AU.plot.line();
plt.draw()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we look into the era5-pds bucket zarr folder to find out what variables are available. Assuming that all the variables are available for all the years, we look into a random year-month data.
Step2: Here we define some functions to read in zarr data.
Step3: This is where we read in the data. We need to define the time range and variable name. In this example, we also choose to select only the area over Australia.
Step4: Here we read in another variable, this time only for a single month, as we want to use it only for masking.
Step5: We decided to use sea surface temperature data for making a sea-land mask.
Step6: Mask out the data over the sea. To find out average temperatures over the land, it is important to mask out data over the ocean.
Step7: Now we plot the all time (1980-2019) average temperature over Australia. This time we decided to use only xarray plotting tools.
Step8: Now we are finding out the yearly average temperature over the Australian land area.
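An illustrative toy sketch of the masking idea from Steps 5-6 (synthetic values, not ERA5 data): .where() keeps cells where the condition is True and fills the rest with NaN, so a field that is NaN over land can mask another field.
import numpy as np
import xarray as xr
sst = xr.DataArray([[np.nan, 290.0], [291.0, np.nan]], dims=('lat', 'lon'))  # NaN marks land
t2m = xr.DataArray([[300.0, 299.0], [298.0, 301.0]], dims=('lat', 'lon'))
land_only = t2m.where(np.isnan(sst))   # keep land cells only
print(land_only.mean().item())         # average over land, ocean cells ignored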
|
124 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set_style('white')
from scipy.interpolate import griddata
# YOUR CODE HERE
x = np.hstack((np.arange(-5, 6), np.full(10, 5), np.arange(-5, 5), np.full(9, -5), [0]))
y = np.hstack((np.full(11, 5), np.arange(-5, 5), np.full(10, -5), np.arange(-4, 5), [0]))
f = np.hstack((np.zeros(40), [1]))
print(x)
print(y)
print(f)
plt.scatter(x, y);
assert x.shape==(41,)
assert y.shape==(41,)
assert f.shape==(41,)
assert np.count_nonzero(f)==1
# YOUR CODE HERE
xnew = np.linspace(-5, 5, 100)
ynew = np.linspace(-5, 5, 100)
Xnew, Ynew = np.meshgrid(xnew, ynew)
Fnew = griddata((x, y), f, (Xnew, Ynew), method="cubic")
print(Fnew)
assert xnew.shape==(100,)
assert ynew.shape==(100,)
assert Xnew.shape==(100,100)
assert Ynew.shape==(100,100)
assert Fnew.shape==(100,100)
# YOUR CODE HERE
plt.contourf(Xnew, Ynew, Fnew, cmap="gnuplot2", levels=np.linspace(0, 1, 50))
plt.xlabel("X")
plt.ylabel("Y")
plt.colorbar(ticks=[0, 0.2, 0.4, 0.6, 0.8, 1])
assert True # leave this to grade the plot
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sparse 2d interpolation
Step2: The following plot should show the points on the boundary and the single point in the interior
Step3: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain
Step4: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
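A small supplementary sketch comparing the interpolation methods griddata accepts, on a handful of arbitrary points (values chosen only for illustration):
import numpy as np
from scipy.interpolate import griddata
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
vals = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
query = np.array([[0.25, 0.25], [0.75, 0.5]])
for method in ('nearest', 'linear', 'cubic'):
    print(method, griddata(pts, vals, query, method=method))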
|
125 | <ASSISTANT_TASK:>
Python Code:
%%bigquery
SELECT
bqutil.fn.median(ARRAY_AGG(TIMESTAMP_DIFF(a.creation_date, q.creation_date, SECOND))) AS time_to_answer
FROM `bigquery-public-data.stackoverflow.posts_questions` q
JOIN `bigquery-public-data.stackoverflow.posts_answers` a
ON q.accepted_answer_id = a.id
%%bigquery
WITH benchmark_eval AS (
SELECT
2120 - TIMESTAMP_DIFF(a.creation_date, q.creation_date, SECOND) AS error
FROM `bigquery-public-data.stackoverflow.posts_questions` q
JOIN `bigquery-public-data.stackoverflow.posts_answers` a
ON q.accepted_answer_id = a.id
)
SELECT
AVG(ABS(error)) AS mean_absolute_error
FROM
benchmark_eval
%%bigquery
SELECT
AVG(IF(a.last_edit_date IS NULL, 0, 1)) AS prob_edited
FROM `bigquery-public-data.stackoverflow.posts_questions` q
JOIN `bigquery-public-data.stackoverflow.posts_answers` a
ON q.accepted_answer_id = a.id
%%bigquery
SELECT
COUNTIF(ENDS_WITH(u.location, 'France')) / COUNT(u.location) AS from_france,
COUNTIF(ENDS_WITH(u.location, 'India')) / COUNT(u.location) AS from_india
FROM `bigquery-public-data.stackoverflow.posts_questions` q
JOIN `bigquery-public-data.stackoverflow.posts_answers` a
ON q.accepted_answer_id = a.id
JOIN `bigquery-public-data.stackoverflow.users` u
ON u.id = a.owner_user_id
%%bigquery
With trips AS (
SELECT
total_amount,
ST_Distance(ST_GeogPoint(pickup_longitude, pickup_latitude),
ST_GeogPoint(dropoff_longitude, dropoff_latitude))/1000 AS dist
FROM `bigquery-public-data.new_york.tlc_yellow_trips_2015`
WHERE pickup_latitude BETWEEN 35 and 45
AND dropoff_latitude BETWEEN 35 and 45
AND pickup_longitude BETWEEN -80 and -70
AND dropoff_longitude BETWEEN -80 and -70
AND total_amount IS NOT NULL
)
SELECT AVG(total_amount)/AVG(dist)
FROM trips
%%bigquery
CREATE TEMPORARY FUNCTION is_peak_hour(start_date TIMESTAMP) aS
(EXTRACT(DAYOFWEEK FROM start_date) BETWEEN 2 AND 6 -- weekday
AND (
EXTRACT(HOUR FROM start_date) BETWEEN 6 AND 10
OR
EXTRACT(HOUR FROM start_date) BETWEEN 15 AND 18))
;
SELECT
start_station_name,
is_peak_hour(start_date) AS is_peak,
AVG(duration) AS predicted_duration,
FROM `bigquery-public-data.london_bicycles.cycle_hire`
GROUP BY 1, 2
ORDER BY predicted_duration DESC
LIMIT 10
%%bigquery
CREATE TEMPORARY FUNCTION is_peak_hour(start_date TIMESTAMP) aS
(EXTRACT(DAYOFWEEK FROM start_date) BETWEEN 2 AND 6 -- weekday
AND (
EXTRACT(HOUR FROM start_date) BETWEEN 6 AND 10
OR
EXTRACT(HOUR FROM start_date) BETWEEN 15 AND 18))
;
WITH benchmark AS (
SELECT
start_station_name,
is_peak_hour(start_date) AS is_peak,
AVG(duration) AS predicted_duration,
FROM `bigquery-public-data.london_bicycles.cycle_hire`
GROUP BY 1, 2
)
SELECT
SQRT( SUM( (duration - predicted_duration)*(duration - predicted_duration)) / COUNT(duration) ) AS rmse
FROM `bigquery-public-data.london_bicycles.cycle_hire` c
JOIN benchmark b
ON c.start_station_name = b.start_station_name AND is_peak_hour(c.start_date) = b.is_peak
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Find the error metric of always predicting that it will take 2120 seconds to get an answer. This is the baseline metric against which to report model performance.
Step2: 2. Classification on poorly understood features
Step3: Problem
Step4: 3. Regression with one good numeric feature
Step5: 4. Regression with one or two important features
Step6: Now, use this benchmark to compute the overall RMSE, so that you can compare with the model.
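A hedged NumPy aside (synthetic skewed data, not the Stack Overflow tables) on why a median-style constant such as the 2120 s above is the natural benchmark for mean absolute error, while the mean is the natural constant for RMSE:
import numpy as np
rng = np.random.default_rng(0)
y = rng.lognormal(mean=7.0, sigma=1.0, size=10_000)   # skewed, like time-to-answer
for label, c in [('median', np.median(y)), ('mean', np.mean(y))]:
    mae = np.mean(np.abs(y - c))
    rmse = np.sqrt(np.mean((y - c) ** 2))
    print(label, 'constant -> MAE %.1f, RMSE %.1f' % (mae, rmse))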
|
126 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
from tqdm import tqdm_notebook
#import tqdm
import magpy as mp
%matplotlib inline
def e_anisotropy(moments, anisotropy_axes, V, K, particle_id):
cos_t = np.sum(moments[particle_id, :]*anisotropy_axes[particle_id, :])
return -K*V*cos_t**2
def e_dipole(moments, positions, Ms, V, particle_id):
mu_0 = mp.core.get_mu0()
mask = np.ones(moments.shape[0], dtype=bool)
mask[particle_id] = False
rs = positions[mask]-positions[particle_id, :]
mod_rs = np.linalg.norm(rs, axis=1)
rs[:, 0] = rs[:, 0] / mod_rs
rs[:, 1] = rs[:, 1] / mod_rs
rs[:, 2] = rs[:, 2] / mod_rs
m1_m2 = np.sum(moments[particle_id, :]*moments[mask], axis=1)
m1_r = np.sum(moments[particle_id, :]*rs, axis=1)
m2_r = np.sum(moments[mask]*rs, axis=1)
numer = (V**2)*(Ms**2)*mu_0*(3*m1_r*m2_r - m1_m2)
denom = 4*np.pi*np.power(mod_rs, 3)
return -np.sum(numer/denom)
def e_total(moments, positions, anisotropy_axes, Ms, V, K, particle_id):
return (
e_dipole(moments, positions, Ms, V, particle_id)
+ e_anisotropy(moments, anisotropy_axes, V, K, particle_id)
)
def sphere_point():
theta = 2*np.pi*np.random.rand()
phi = np.arccos(1-2*np.random.rand())
return np.array([np.sin(phi)*np.cos(theta), np.sin(phi)*np.sin(theta), np.cos(phi)])
def MH(positions, ani_axis, spins, Neq, Nsamps, SampRate, Ms, V, K, T, seed=42):
np.random.seed(seed)
k_b = mp.core.get_KB()
test = np.copy(spins)
Ntot = Neq+Nsamps*SampRate
Out = np.zeros([spins.shape[0], spins.shape[1], Nsamps])
ns = 0
for n in tqdm_notebook(range(Ntot)):
# pick a random spin
i = int(np.random.rand(1)*positions.shape[0])
# pick a random dir
test[i, :] = sphere_point()
dE = e_total(test, positions, ani_axis, Ms, V, K, i) - \
e_total(moments, positions, ani_axis, Ms, V, K, i)
if(np.random.rand(1) < np.exp(-dE/(k_b*T))):
spins[i, :] = test[i, :]
else:
test[i, :] = spins[i, :]
if (n >= Neq and (n-Neq)%SampRate == 0):
Out[:, :, ns] = np.copy(spins)
ns += 1
return Out
N = 2 # Two particles
T = 330 # temperature
K = 1e5 # anisotropy strength
R = 9e-9 # distance between two particles
r = 7e-9 # radius of the particles
V = 4./3 * np.pi * r**3 # volume of particle
Ms = 4e5 # saturation magnetisation
# particle 1 particle 2
positions = np.array([[0., 0., 0.], [0., 0., R]])
moments = np.array([sphere_point(), sphere_point()])
anisotropy_axes = np.array([[0., 0., 1.], [0., 0., 1.]])
output = MH(positions, anisotropy_axes, moments, 100000, 600000, 20, Ms, V, K, T, 0)
thetas = np.arccos(output[:, 2, :])
plt.hist(thetas[0], bins=50, normed=True)
plt.title('Magnetisation angle histogram (MCMC)')
plt.xlabel('Magnetisation angle $\\theta$ rads')
plt.ylabel('Probability $p(\\theta)$');
# additionally we must specify damping
alpha = 0.1
# We build a model of the two particles
base_model = mp.Model(
anisotropy=[K,K],
anisotropy_axis=anisotropy_axes,
damping=alpha,
location=positions,
magnetisation=Ms,
magnetisation_direction=moments,
radius=[r, r],
temperature=T
)
# Create an ensemble of 50,000 identical models
ensemble = mp.EnsembleModel(50000, base_model)
res = ensemble.simulate(end_time=1e-9, time_step=1e-12,
max_samples=500, random_state=1002,
n_jobs=-1, implicit_solve=True,
interactions=True)
m_z0 = np.array([state['z'][0] for state in res.final_state()])/Ms
m_z1 = np.array([state['z'][1] for state in res.final_state()])/Ms
theta0 = np.arccos(m_z0)
theta1 = np.arccos(m_z1)
plt.hist(theta0, bins=50, alpha=0.5, normed=True, label='magpy')
plt.hist(thetas[0], bins=50, alpha=0.5, normed=True, label='MCMC')
plt.legend();
plt.xlabel('Magnetisation angle $\\theta$ (rads)')
plt.ylabel('Probability $p(\\theta)$');
fg, axs = plt.subplots(ncols=2, figsize=(11,4), sharey=True)
histdat = axs[0].hist2d(theta0, theta1, bins=16, normed=True)
axs[1].hist2d(thetas[0], thetas[1], bins=histdat[1], normed=True);
for ax, title in zip(axs, ['Magpy', 'MCMC']):
ax.set_xlabel('Magnetisation angle $\\theta_0$')
ax.set_ylabel('Magnetisation angle $\\theta_1$')
ax.set_title(title)
fg.colorbar(histdat[3], ax=axs.tolist());
from scipy.stats import gaussian_kde
kde = gaussian_kde(thetas)
tgrid_x = np.linspace(theta0.min(), theta0.max(), 16)
tgrid_y = np.linspace(theta1.min(), theta1.max(), 16)
tgrid_x, tgrid_y = np.meshgrid(tgrid_x, tgrid_y)
Z = np.reshape(kde(np.vstack([tgrid_x.ravel(), tgrid_y.ravel()])).T, tgrid_x.shape)
fg, ax = plt.subplots(figsize=(9,5))
hist = ax.hist2d(theta0, theta1, bins=16, normed=True)
contour = ax.contour(tgrid_x, tgrid_y, Z, cmap='hot_r')
fg.colorbar(contour, label='MCMC')
fg.colorbar(hist[3], label='Magpy')
ax.set_xlabel('Magnetisation angle $\\theta_0$')
ax.set_ylabel('Magnetisation angle $\\theta_1$');
res_noi = ensemble.simulate(end_time=1e-9, time_step=1e-12,
max_samples=500, random_state=1002,
n_jobs=-1, implicit_solve=True,
interactions=False)
m_z0 = np.array([state['z'][0] for state in res_noi.final_state()])/Ms
m_z1 = np.array([state['z'][1] for state in res_noi.final_state()])/Ms
theta0_noi = np.arccos(m_z0)
theta1_noi = np.arccos(m_z1)
plt.hist(theta0, bins=50, normed=True, alpha=0.4, label='Magpy')
plt.hist(theta0_noi, bins=50, normed=True, alpha=0.4, label='Magpy (no inter.)');
plt.hist(thetas[0], bins=50, histtype='step', lw=2, normed=True, alpha=0.4, label='MCMC')
plt.legend();
plt.xlabel('Magnetisation angle $\\theta_0$ rads')
plt.ylabel('Probability $p(\\theta_0)$');
plt.title('Comparison of $\\theta_0$ distrubition');
fg, ax = plt.subplots(figsize=(9,5))
hist = ax.hist2d(theta0_noi, theta1_noi, bins=16, normed=True)
contour = ax.contour(tgrid_x, tgrid_y, Z, cmap='hot_r')
fg.colorbar(contour, label='MCMC')
fg.colorbar(hist[3], label='Magpy')
ax.set_xlabel('Magnetisation angle $\\theta_0$')
ax.set_ylabel('Magnetisation angle $\\theta_1$');
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Metropolis MCMC
Step2: Dipolar interaction energy
Step3: Total energy
Step4: The Monte-Carlo algorithm
Step5: Parameter set up
Step6: Run the MCMC sampler!
Step7: Magpy - Dynamical Simulation
Step8: Simulate the ensemble!
Step9: Compute the final state
Step10: Compare results
Step11: The results look to be a good match!
Step12: Alternatively compare using a kernel density function
Step13: Sanity check
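An illustrative aside on the accept/reject rule from Step4: a minimal Metropolis sampler for a one-dimensional double-well energy (a generic textbook example, not the magpy dipole model), using the same exp(-dE/kT) acceptance as MH() above.
import numpy as np
def energy(x):
    return (x**2 - 1.0)**2            # double well with minima at x = -1 and x = +1
kT = 0.3
x = 0.0
samples = []
rng = np.random.default_rng(1)
for _ in range(20000):
    x_new = x + rng.normal(0.0, 0.5)          # propose a random move
    dE = energy(x_new) - energy(x)
    if rng.random() < np.exp(-dE / kT):       # Metropolis acceptance rule
        x = x_new
    samples.append(x)
print(np.histogram(samples, bins=5, range=(-2, 2))[0])   # samples pile up near the two wells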
|
127 | <ASSISTANT_TASK:>
Python Code:
from sympy import symbols, solve, expand
c, p, E, r_c, u_c, v_c, r_p, gamma = symbols('c p E r_c u_c v_c r_p gamma', real=True)
f1 = (1/2)*r_c*c**2+(1/4)*u_c*c**4+(1/6)*v_c*c**6-E*p+(1/2)*r_p*p**2-gamma*c*p
pmin = solve(f1.diff(c),p)[0]
pmin
E_cp = solve(f1.diff(p),E)[0]
E_cp
expand(E_cp.subs(p,pmin))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: $\dfrac{\partial f_{1}(c,p)}{\partial p} = 0 = -E + r_{p}p - \gamma c$
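For reference, a worked check using only the $f_{1}$ defined in the code: $\dfrac{\partial f_{1}}{\partial c}=r_{c}c+u_{c}c^{3}+v_{c}c^{5}-\gamma p=0$ gives $p_{\min}=\dfrac{r_{c}c+u_{c}c^{3}+v_{c}c^{5}}{\gamma}$, and substituting $p_{\min}$ into $E=r_{p}p-\gamma c$ gives $E(c)=\dfrac{r_{p}}{\gamma}\left(r_{c}c+u_{c}c^{3}+v_{c}c^{5}\right)-\gamma c$, which is the expression that expand(E_cp.subs(p, pmin)) returns.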
|
128 | <ASSISTANT_TASK:>
Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 2
sample_id = 500
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
# TODO: Implement Function
max = np.max(x)
min = np.min(x)
scale = max - min
return (x - min) / scale
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_normalize(normalize)
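# Note: np.max/np.min above are taken over the whole input batch, so the scale depends on
# the batch contents. CIFAR-10 pixels are uint8 in [0, 255], so dividing by 255.0 is a
# common fixed-scale alternative that maps every image to [0, 1] independently of the batch.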
from sklearn.preprocessing import LabelBinarizer
label_count = 10
one_hot_encoder = LabelBinarizer()
one_hot_encoder.fit(list(range(label_count)))
def one_hot_encode(y):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# TODO: Implement Function
one_hots = one_hot_encoder.transform(y)
return one_hots
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_one_hot_encode(one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
shape_with_batch = (None,) + image_shape
#print(shape_with_batch)
return tf.placeholder(tf.float32, shape=shape_with_batch, name='x')
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=(None, n_classes), name='y')
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, name="keep_prob")
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
import re
#weights = {}
#biases = {}
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# TODO: Implement Function
x_size = x_tensor.get_shape().as_list()
x_base_name = re.sub(':.*', '', x_tensor.name)
input_channels = x_size[3]
conv_weights = tf.Variable(tf.random_normal(conv_ksize + (input_channels, conv_num_outputs)),
name="weights_for_" + x_base_name)
conv_biases = tf.Variable(tf.zeros(conv_num_outputs), name="biases_for_" + x_base_name)
conv = tf.nn.conv2d(x_tensor, conv_weights, (1,) + conv_strides + (1,), 'SAME')
conv = tf.add(conv, conv_biases)
conv = tf.nn.relu(conv)
conv = tf.nn.max_pool(conv, (1,) + pool_ksize + (1,), (1,) + pool_strides + (1,), 'SAME')
return conv
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
return tf.contrib.layers.flatten(x_tensor)
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
return tf.layers.dense(x_tensor, num_outputs, activation=tf.nn.relu)
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
return tf.layers.dense(x_tensor, num_outputs)
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_output(output)
label_count = 10
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
model = conv2d_maxpool(x, 32, (3, 3), (2, 2), (2, 2), (2, 2))
#model = tf.nn.dropout(model, keep_prob)
model = conv2d_maxpool(model, 64, (2, 2), (1, 1), (2, 2), (1, 1))
#model = tf.nn.dropout(model, keep_prob)
model = conv2d_maxpool(model, 64, (1, 1), (1, 1), (1, 1), (1, 1))
#model = tf.nn.dropout(model, keep_prob)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
model = flatten(model)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
model = fully_conn(model, 512)
model = tf.nn.dropout(model, keep_prob)
model = fully_conn(model, 256)
model = tf.nn.dropout(model, keep_prob)
model = fully_conn(model, 128)
model = tf.nn.dropout(model, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
model = output(model, label_count)
# TODO: return output
return model
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch,
keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss, acc = session.run([cost, accuracy], feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.})
valid_loss, valid_acc = session.run([cost, accuracy], feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.})
print("Batch Loss= " + \
"{:.6f}".format(loss) + \
", Training Accuracy= " + \
"{:.5f}".format(acc) + \
", Validation Loss= " + \
"{:.6f}".format(valid_loss) + \
", Validation Accuracy= " + \
"{:.5f}".format(valid_acc))
# TODO: Tune Parameters
epochs = 1
batch_size = 128
keep_probability = 0.7
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
129 | <ASSISTANT_TASK:>
Python Code:
from text import *
from utils import open_data
from notebook import psource
psource(UnigramWordModel, NgramWordModel, UnigramCharModel, NgramCharModel)
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P1 = UnigramWordModel(wordseq)
P2 = NgramWordModel(2, wordseq)
print(P1.top(5))
print(P2.top(5))
print(P1['an'])
print(P2[('i', 'was')])
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P3 = NgramWordModel(3, wordseq)
print("Conditional Probabilities Table:", P3.cond_prob[('i', 'was')].dictionary, '\n')
print("Conditional Probability of 'once' give 'i was':", P3.cond_prob[('i', 'was')]['once'], '\n')
print("Next word after 'i was':", P3.cond_prob[('i', 'was')].sample())
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P1 = UnigramCharModel(wordseq)
P2 = NgramCharModel(2, wordseq)
print(P1.top(5))
print(P2.top(5))
print(P1['z'])
print(P2[('g', 'h')])
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P1 = UnigramWordModel(wordseq)
P2 = NgramWordModel(2, wordseq)
P3 = NgramWordModel(3, wordseq)
print(P1.samples(10))
print(P2.samples(10))
print(P3.samples(10))
data = open_data("EN-text/flatland.txt").read()
data += open_data("EN-text/sense.txt").read()
wordseq = words(data)
P3 = NgramWordModel(3, wordseq)
P4 = NgramWordModel(4, wordseq)
P5 = NgramWordModel(5, wordseq)
P7 = NgramWordModel(7, wordseq)
print(P3.samples(15))
print(P4.samples(15))
print(P5.samples(15))
print(P7.samples(15))
psource(viterbi_segment)
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P = UnigramWordModel(wordseq)
text = "itiseasytoreadwordswithoutspaces"
s, p = viterbi_segment(text,P)
print("Sequence of words is:",s)
print("Probability of sequence is:",p)
psource(IRSystem)
psource(UnixConsultant)
uc = UnixConsultant()
q = uc.query("how do I remove a file")
top_score, top_doc = q[0][0], q[0][1]
print(top_score, uc.documents[top_doc].url)
q = uc.query("how do I delete a file")
top_score, top_doc = q[0][0], q[0][1]
print(top_score, uc.documents[top_doc].url)
plaintext = "ABCDWXYZ"
ciphertext = shift_encode(plaintext, 3)
print(ciphertext)
print(bigrams('this is a sentence'))
%psource ShiftDecoder
plaintext = "This is a secret message"
ciphertext = shift_encode(plaintext, 13)
print('The code is', '"' + ciphertext + '"')
flatland = open_data("EN-text/flatland.txt").read()
decoder = ShiftDecoder(flatland)
decoded_message = decoder.decode(ciphertext)
print('The decoded message is', '"' + decoded_message + '"')
psource(PermutationDecoder)
ciphertexts = ['ahed world', 'ahed woxld']
pd = PermutationDecoder(canonicalize(flatland))
for ctext in ciphertexts:
print('"{}" decodes to "{}"'.format(ctext, pd.decode(ctext)))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CONTENTS
Step2: Next we build our models. The text file we will use to build them is Flatland, by Edwin A. Abbott. We will load it from here. In that directory you can find other text files we might get to use here.
Step3: We see that the most used word in Flatland is 'the', with 2081 occurrences, while the most used sequence is 'of the' with 368 occurrences. Also, the probability of 'an' is approximately 0.003, while for 'i was' it is close to 0.001. Note that the strings used as keys are all lowercase. For the unigram model, the keys are single strings, while for n-gram models we have n-tuples of strings.
Step4: First we print all the possible words that come after 'i was' and the times they have appeared in the model. Next we print the probability of 'once' appearing after 'i was', and finally we pick a word to proceed after 'i was'. Note that the word is picked according to its probability of appearing (high appearance count means higher chance to get picked).
Step5: The most common letter is 'e', appearing more than 19000 times, and the most common sequence is "_t". That is, a space followed by a 't'. Note that even though we do not count spaces for word models or unigram character models, we do count them for n-gram char models.
Step6: For the unigram model, we mostly get gibberish, since each word is picked according to its frequency of appearance in the text, without taking into consideration preceding words. As we increase n though, we start to get samples that have some semblance of coherence and are a little reminiscent of normal English. As we increase our data, these samples will get better.
Step7: Notice how the samples start to become more and more reasonable as we add more data and increase the n parameter. We still have a long way to go before realistic text generation, but at the same time we can see that with enough data even rudimentary algorithms can output something almost passable.
Step8: The function takes as input a string and a text model, and returns the most probable sequence of words, together with the probability of that sequence.
Step9: The algorithm correctly retrieved the words from the string. It also gave us the probability of this sequence, which is small, but still the most probable segmentation of the string.
Step10: The stopwords argument signifies words in the queries that should not be accounted for in documents. Usually they are very common words that do not add any significant information for a document's relevancy.
Step11: The class creates an IR System with the stopwords "how do i the a of". We could add more words to exclude, but the queries we will test will generally be in that format, so it is convenient. After the initialization of the system, we get the manual files and start indexing them.
Step12: We asked how to remove a file and the top result was the rm (the Unix command for remove) manual. This is exactly what we wanted! Let's try another query
Step13: Even though we are basically asking for the same thing, we got a different top result. The diff command shows the differences between two files. So the system failed us and presented us with an irrelevant document. Why is that? Unfortunately our IR system considers each word independently. "Remove" and "delete" have similar meanings, but since they are different words our system will not make the connection. So the diff manual, which mentions the word delete a lot, gets the nod ahead of other manuals, while the rm one isn't in the result set since it doesn't use the word at all.
Step14: Decoding a Caesar cipher
Step15: We use CountingProbDist to get the probability distribution of bigrams. The Latin alphabet consists of only 26 letters, which limits the total number of possible substitutions to 26. We reverse the shift encoding for a given n and check how probable the result is using the bigram distribution. We try all 26 values of n, i.e. from n = 0 to n = 25, and use the value of n which gives the most probable plaintext. (A minimal, self-contained sketch of this brute-force search follows right after this list.)
Step16: Example
Step17: Permutation Decoder
Step18: Each state/node in the graph is represented as a letter-to-letter map. If there is no mapping for a letter, it means the letter is unchanged in the permutation. These maps are stored as dictionaries. Each dictionary is a 'potential' permutation. We use the word 'potential' because not every dictionary necessarily represents a valid permutation, since a permutation cannot have repeating elements. For example the dictionary {'A'
|
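A minimal, self-contained sketch of the brute-force shift decoding described in Step15 above (an editorial illustration, not part of the aima-python notebook): every possible shift n is undone and each candidate plaintext is scored with character-bigram counts gathered from a small sample corpus. The corpus string and the Counter-based scoring are assumptions standing in for CountingProbDist.
from collections import Counter

def caesar_shift(text, n):
    # shift each letter forward by n positions; non-letters pass through unchanged
    return ''.join(chr((ord(c) - ord('a') + n) % 26 + ord('a')) if c.isalpha() else c
                   for c in text.lower())

def bigram_counts(text):
    # character-bigram counts over letters and spaces
    cleaned = ''.join(c for c in text.lower() if c.isalpha() or c == ' ')
    return Counter(zip(cleaned, cleaned[1:]))

def bigram_score(candidate, counts):
    # higher is better: total corpus count of every bigram appearing in the candidate
    return sum(counts[bg] for bg in zip(candidate, candidate[1:]))

def brute_force_shift_decode(ciphertext, corpus):
    counts = bigram_counts(corpus)
    # try all 26 possible shifts (n = 0 .. 25) and keep the most plausible plaintext
    candidates = [caesar_shift(ciphertext, 26 - n) for n in range(26)]
    return max(candidates, key=lambda cand: bigram_score(cand, counts))

corpus = "this is a small illustrative english corpus about secret messages and plain text"
secret = caesar_shift("this is a secret message", 13)
print(secret, "->", brute_force_shift_decode(secret, corpus))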
130 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
x = np.random.randn(100)
y = np.random.randn(100)
plt.scatter(x, y, marker='*', color='red');
x = np.random.randn(100)
plt.hist(x, bins=5);
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Scatter plots
Step2: Histogram
|
131 | <ASSISTANT_TASK:>
Python Code:
for i in range(3):
a = i * 7 #0, 7, 14
b = i + 2 #2, 3, 4
c = a * b # 0, 21, 56
#If the range values were something like 3017 or 5033, you couldn't tell what the values are; here is an easy way to find out what a, b and c are.
name = "KiPyo Kim"
age = 29
from IPython import embed
embed()
for i in range(3):
a = i * 7
b = i + 2
c = a * b
embed()
from IPython import embed; embed() #when debugging, this one-liner is usually bound to an editor shortcut and used a lot.
for i in range(100):
from IPython import embed; embed()
#doing this traps you in what is effectively an endless loop of embed() prompts; the only way out is to kill the kernel...
def fibo(n):
if n <= 0:
return 0
if n == 1:
return 1
embed()
return fibo(n-1) + fibo(n-2)
fibo(5)
class Student():
def __init__(self, name, age):
self.name = name
self.age = age
def introduce(self):
from IPython import embed; embed()
return (self.name, self.age)
kkp = Student("kipoy", 29)
kkp.introduce()
def fibo(n):
if n <= 0:
return 0
if n == 1:
return 1
print("fibo({n}) = fibo({n_1}) + fibo({n-2})".format(n=n, n_1=n-1, n_2=n-2))
return fibo(n-1) + fibo(n-2)
fibo(3)
fibo(10)
import logging
# logging levels -> debug, info, warning, error, critical...
logging.warning("hello world")
name #the output that follows below is a Traceback.
if True
print("hello world")
ab
def error(a, b):
a = b + 1
print(a)
error
2 + "김기표"
{}["somtthing"]
{}.append()
with open("./not_exist_file", "r") as f:
pass
NameError
Exception?
def append_string_to_hello(string):
return "hello, " + string
append_string_to_hello("world")
append_string_to_hello(13)
"hello, " + 3
#str + int: the implicit approach.
#It fails because Python does not do implicit type conversion (Ruby, by contrast, converts types freely).
#High flexibility alone does not make a language good; just use each language according to its own characteristics.
"hello" + str(3)
# str + str(int), explicit(명시적)한 방법이다.
awesome_list = ["world", "hello", "python", 5678, "fastcampus"]
for awesome in awesome_list:
# 예외 처리가 가능한 장소 [1] => 밑에 있는 케이스에서만 예외처리가 가능
print(append_string_to_hello(awesome))
def append_string_to_hello(string):
# place where exception handling is possible (2) => handles exceptions for every call of the function
# so we'll do it in place (2).
return "hello, " + string
def append_string_to_hello(string):
# place where exception handling is possible
# exception handling => try:/except: (always done this way)
try:
return "hello, " + string
except TypeError as err:
print("오류" * 40)
append_string_to_hello(123)
def append_string_to_hello(string):
try:
return "hello, " + string
except TypeError as err:
#TypeError is a class; that is, each individual error is an error object created from such a class.
print(err)
append_string_to_hello(123)
def append_string_to_hello(string):
try:
return "hello, " + string
except TypeError as err:
return err
append_string_to_hello(123)
def append_string_to_hello(string):
try:
return "hello, " + string
except Exception as err: #catching Exception (like a bare except:) handles every error; commonly used
return err
append_string_to_hello(123)
def append_string_to_hello(string):
try:
return "hello, " + string
except (TypeError, AttributeError) as err: #multiple exception types go in a tuple; this form is used sometimes, but not often
return err
append_string_to_hello(123)
def append_string_to_hello(string):
try:
return "hello, " + string
except TypeError as err:
#you could also hook something in here to get notified (e.g. send an SMS alert).
print("send_sms")
raise
append_string_to_hello(123)
def append_string_to_hello(string):
try:
return "hello, " + string
except TypeError as err:
print("send_sms")
raise
except AttributeError as err:
pass
append_string_to_hello(123)
def append_string_to_hello(string):
try:
return "hello, " + string
except TypeError as err:
print("send_sms")
print(err)
# raise
except AttributeError as err:
pass
finally: #runs whether or not an exception occurred
print("어쨌던 끝남")
append_string_to_hello(123)
awesome_list = ["world", "hello", "python", 5678, "fastcampus"]
for awesome in awesome_list:
print(append_string_to_hello(awesome))
def fibo(x):
if x < 0:
# err = FibonacciShouldNotHaveNegativeNumberError()
# raise err
#generally written as the single line below instead of the two commented-out lines above.
raise FibonacciShouldNotHaveNegativeNumberError()
if x == 0:
return 0
if x == 1:
return 1
return fibo(x-1) + fibo(x-2)
#Exception Class
class FibonacciShouldNotHaveNegativeNumberError(Exception):
def __init__(self):
pass
fibo(-1)
raise FibonacciShouldNotHaveNegativeNumberError()
# other (built-in) errors even come with a hint about what to fix.
"hello, " + 5678
# the yellow part of the error output is that fix hint
# where can we customize it? in def __str__(self): right here!
class FibonacciShouldNotHaveNegativeNumberError(Exception):
def __init__(self):
pass
def __str__(self):
return "피보나치 수열은 index 값으로 양수를 받아야 합니다"
raise FibonacciShouldNotHaveNegativeNumberError()
def fibo(x):
if x < 0:
raise FibonacciShouldNotHaveNegativeNumberError(x)
if x == 0:
return 0
if x == 1:
return 1
return fibo(x-1) + fibo(x-2)
class FibonacciShouldNotHaveNegativeNumberError(Exception):
def __init__(self, n):
self.n = n
def __str__(self):
return "이 함수는 index 값으로 양수를 받아야 합니다. (입력받은 값: {n})".format(n=self.n) #에러메세지 요구사항
fibo(-13)
def Factorial(n):
if n == 1:
return 1
return n * Factorial(n-1)
Factorial(5)
def Factorial(n):
if n < 0:
raise FactorialShouldGetPositiveIndexError(n)
if n == 1:
return 1
return n * Factorial(n-1)
class FactorialShouldGetPositiveIndexError(Exception):
def __init__(self, n):
self.n = n
def __str__(self):
return "factorial function should get positive index. (input: {n})".format(n=self.n)
Factorial(-3)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Debugging? de+bug+ing => catching bugs
Step2: What happens if you force-stop the embedded shell from outside (e.g. with the dd shortcut) instead of leaving it with exit()
Step3: Using print is intuitive to read, but it is a poor approach because it affects the program's output
Step4: When run with python from the command line, the WARNING shows up immediately. (A minimal logging sketch follows right after this list.)
Step5: Error
Step6: There are two kinds of errors.
Step7: Getting an Exception is a good thing: it means a class, function, or object is being used in a way its author did not intend.
Step8: Let's build our own exception class.
Step9: Inheritance order
Step10: Including exception handling,
|
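A small, self-contained sketch of the point made in Step3/Step4 above (an editorial illustration, not part of the original notebook): unlike print-debugging, the standard-library logging module lets diagnostic output stay in the code, and raising the level later silences it without touching the program's results. Only standard-library calls are used; the fibo function mirrors the notebook's example.
import logging

# configure once at program start; DEBUG makes every level visible,
# and switching to WARNING later silences the debug/info chatter without deleting any lines
logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")

def fibo(n):
    logging.debug("fibo(%d) called", n)  # diagnostic output that does not touch return values
    if n <= 0:
        return 0
    if n == 1:
        return 1
    return fibo(n - 1) + fibo(n - 2)

logging.info("result: %d", fibo(5))
logging.warning("hello world")  # same call as in the notebook above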
132 | <ASSISTANT_TASK:>
Python Code:
import xarray as xr
from matplotlib import pyplot as plt
%matplotlib inline
from oocgcm.oceanmodels.nemo import grids
#- Parameter
coordfile = '/Users/lesommer/data/NATL60/NATL60-I/NATL60_coordinates_v4.nc'
maskfile = '/Users/lesommer/data/NATL60/NATL60-I/NATL60_v4.1_cdf_byte_mask.nc'
filenatl60 = '/Users/lesommer/data/NATL60/NATL60-MJM155-S/1d/2008/NATL60-MJM155_y2008m01.1d_BUOYANCYFLX.nc'
#chunks = (3454,5422)
chunks = (1727,2711)
xr_chunks = {'x': chunks[-1], 'y': chunks[-2]}
xr_chunks_t = {'x': chunks[-1], 'y': chunks[-2],'time_counter':1}
#- creating the grid object
grd = grids.nemo_2d_grid(nemo_coordinate_file=coordfile,nemo_byte_mask_file=maskfile,chunks=xr_chunks)
#- defining a 2D xarray
sig0 = xr.open_dataset(filenatl60,chunks=xr_chunks,lock=False)['vosigma0'][20]
#- compute the horizontal gradient
g = grd.horizontal_gradient(sig0)
%time gx = g.x_component.load()
#- defining a 2D xarray
sig0 = xr.open_dataset(filenatl60,chunks=xr_chunks_t,lock=False)['vosigma0']
#- compute the horizontal gradient
g = grd.horizontal_gradient(sig0)
%time gx = g.x_component[20].load()
lap = grd.horizontal_laplacian(sig0)
%time l = lap[20].load()
#- plot the horizontal gradient
plt.figure(figsize=(15,10))
l.plot.pcolormesh(vmin=-1e-15,vmax=1e-15)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: with a 2D input xarray dataarray at a given time
Step2: with a 2D + time input xarray dataarray
Step3: Compute a laplacian
Step4: plotting the output array
|
133 | <ASSISTANT_TASK:>
Python Code:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.torch_imports import *
from fastai.transforms import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
from fastai.conv_learner import *
PATH = "data/dogbreeds/"
sz = 224
arch = resnext101_64
bs = 64
label_csv = f'{PATH}labels.csv'
n = len(list(open(label_csv)))-1
val_idxs = get_cv_idxs(n)
val_idxs, n, len(val_idxs)
!ls {PATH}
label_df = pd.read_csv(label_csv)
label_df.head()
# use Pandas to create pivot table which shows how many of each label:
label_df.pivot_table(index='breed', aggfunc=len).sort_values('id', ascending=False)
tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_csv(PATH, folder='train', csv_fname=f'{PATH}labels.csv',
test_name='test', val_idxs=val_idxs, suffix='.jpg',
tfms=tfms, bs=bs)
fn = PATH + data.trn_ds.fnames[0]; fn
img = PIL.Image.open(fn); img
img.size
size_d = {k: PIL.Image.open(PATH + k).size for k in data.trn_ds.fnames}
row_sz, col_sz = list(zip(*size_d.values()))
row_sz = np.array(row_sz); col_sz = np.array(col_sz)
row_sz[:5]
plt.hist(row_sz);
plt.hist(row_sz[row_sz < 1000])
plt.hist(col_sz);
plt.hist(col_sz[col_sz < 1000])
len(data.trn_ds), len(data.test_ds)
len(data.classes), data.classes[:5]
def get_data(sz, bs):
tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_csv(PATH, 'train', f'{PATH}labels.csv', test_name='test',
num_workers=4, val_idxs=val_idxs, suffix='.jpg',
tfms=tfms, bs=bs)
return data if sz > 300 else data.resize(340, 'tmp')
data = get_data(sz, bs)
learn = ConvLearner.pretrained(arch, data, precompute=True) # GTX870M;bs=64;sz=224;MEM:2431/3017
learn.fit(1e-2, 5)
from sklearn import metrics
# data = get_data(sz, bs)
learn = ConvLearner.pretrained(arch, data, precompute=True, ps=0.5)
learn.fit(1e-2, 2)
lrf = learn.find_lr()
learn.sched.plot()
# turn precompute off then use dataug
learn.precompute = False
learn.fit(1e-2, 5, cycle_len=1)
learn.save('224_pre')
learn.load('224_pre')
learn.set_data(get_data(299, bs=32))
learn.freeze() # just making all but last layer already frozen
learn.fit(1e-2, 3, cycle_len=1) # precompute is off so DataAugmentation is back on
learn.fit(1e-2, 3, cycle_len=1, cycle_mult=2)
log_preds, y = learn.TTA()
probs = np.exp(log_preds)
accuracy(log_preds, y), metrics.log_loss(y, probs)
learn.save('299_pre')
# learn.load('299_pre')
learn.fit(1e-2, 1, cycle_len=2)
learn.save('299_pre')
log_preds, y = learn.TTA()
probs = np.exp(log_preds)
accuracy(log_preds, y), metrics.log_loss(y, probs)
SUBM = f'{PATH}subm/'
os.makedirs(SUBM, exist_ok=True)
# `df` is expected to hold an id column plus per-class test-set probabilities (built in an earlier, omitted cell)
df.to_csv(f'{SUBM}subm.gz', compression='gzip', index=False)
FileLink(f'{SUBM}subm.gz')
fn = data.val_ds.fnames[0]
fn
Image.open(PATH+fn).resize((150,150))
trn_tfms, val_tfms = tfms_from_model(arch, sz)
learn = ConvLearner.pretrained(arch, data)
learn.load('299_pre')
# ds = FilesIndexArrayDataset([fn], np.array([0]), val_tfms, PATH)
# dl = DataLoader(ds)
# preds = learn.predict_dl(dl)
# np.argmax(preds)
im = trn_tfms(Image.open(PATH+fn))
preds = to_np(learn.model(V(T(im[None]).cuda())))
np.argmax(preds)
trn_tfms, val_tfms = tfms_from_model(arch, sz)
im = val_tfms(Image.open(PATH+fn)) # or could apply trn_tfms(.)
preds = learn.predict_array(im[None]) # index into image as[None] to create minibatch of 1 img
np.argmax(preds)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Initial Exploration
Step2: 3. Initial Model
Step3: 3.1 Precompute
Step4: 3.2 Augment
Step5: 3.3 Increase Size
Step6: 6. Individual Prediction
|
134 | <ASSISTANT_TASK:>
Python Code:
# using Tensorflow 2
%tensorflow_version 2.x
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf
print("Tensorflow version: " + tf.__version__)
#@title Display utilities [RUN ME]
from enum import IntEnum
import numpy as np
class Waveforms(IntEnum):
SINE1 = 0
SINE2 = 1
SINE3 = 2
SINE4 = 3
def create_time_series(waveform, datalen):
# Generates a sequence of length datalen
# There are four available waveforms in the Waveforms enum
# good waveforms
frequencies = [(0.2, 0.15), (0.35, 0.3), (0.6, 0.55), (0.4, 0.25)]
freq1, freq2 = frequencies[waveform]
noise = [np.random.random()*0.2 for i in range(datalen)]
x1 = np.sin(np.arange(0,datalen) * freq1) + noise
x2 = np.sin(np.arange(0,datalen) * freq2) + noise
x = x1 + x2
return x.astype(np.float32)
from matplotlib import transforms as plttrans
plt.rcParams['figure.figsize']=(16.8,6.0)
plt.rcParams['axes.grid']=True
plt.rcParams['axes.linewidth']=0
plt.rcParams['grid.color']='#DDDDDD'
plt.rcParams['axes.facecolor']='white'
plt.rcParams['xtick.major.size']=0
plt.rcParams['ytick.major.size']=0
def picture_this_1(data, datalen):
plt.subplot(211)
plt.plot(data[datalen-512:datalen+512])
plt.axvspan(0, 512, color='black', alpha=0.06)
plt.axvspan(512, 1024, color='grey', alpha=0.04)
plt.subplot(212)
plt.plot(data[3*datalen-512:3*datalen+512])
plt.axvspan(0, 512, color='grey', alpha=0.04)
plt.axvspan(512, 1024, color='black', alpha=0.06)
plt.show()
def picture_this_2(data, batchsize, seqlen):
samples = np.reshape(data, [-1, batchsize, seqlen])
rndsample = samples[np.random.choice(samples.shape[0], 8, replace=False)]
print("Tensor shape of a batch of training sequences: " + str(rndsample[0].shape))
print("Random excerpt:")
subplot = 241
for i in range(8):
plt.subplot(subplot)
plt.plot(rndsample[i, 0]) # first sequence in random batch
subplot += 1
plt.show()
def picture_this_3(predictions, evaldata, evallabels, seqlen):
subplot = 241
for i in range(8):
plt.subplot(subplot)
#k = int(np.random.rand() * evaldata.shape[0])
l0, = plt.plot(evaldata[i, 1:], label="data")
plt.plot([seqlen-2, seqlen-1], evallabels[i, -2:], ":")
l1, = plt.plot([seqlen-1], [predictions[i]], "o", label='Predicted')
l2, = plt.plot([seqlen-1], [evallabels[i][-1]], "o", label='Ground Truth')
if i==0:
plt.legend(handles=[l0, l1, l2])
subplot += 1
plt.show()
def histogram_helper(data, title, last_label=None):
labels = ['RND', 'LAST', 'LAST2', 'LINEAR', 'DNN', 'CNN', 'RNN', 'RNN_N']
colors = ['#4285f4', '#34a853', '#fbbc05', '#ea4334',
'#4285f4', '#34a853', '#fbbc05', '#ea4334',
'#4285f4', '#34a853', '#fbbc05', '#ea4334']
fig = plt.figure(figsize=(7,4))
plt.xticks(rotation='40')
ymax = data[1]*1.3
plt.ylim(0, ymax)
plt.title(title, pad="20")
# remove data points where data is None
filtered = filter(lambda tup: tup[1] is not None, zip(labels, data, colors))
# split back into lists
labels, data, colors = map(list, zip(*filtered))
# replace last label is appropriate
if last_label is not None:
labels[-1] = last_label
# histogram plot
plt.bar(labels, data, color=colors)
# add values on histogram bars
for i, (_, v, color) in enumerate(zip(labels, data, colors)):
plt.gca().text(i-0.3, min(v, ymax)+0.02, "{0:.4f}".format(v), color=color, fontweight="bold")
plt.show()
def picture_this_hist_yours(data):
histogram_helper(data, 'RMSE: your model vs. other approaches',
last_label='Yours')
def picture_this_hist_all(data):
histogram_helper(data, 'RMSE: final comparison')
DATA_SEQ_LEN = 1024*128
data = np.concatenate([create_time_series(waveform, DATA_SEQ_LEN) for waveform in Waveforms]) # 4 different wave forms
picture_this_1(data, DATA_SEQ_LEN)
DATA_LEN = DATA_SEQ_LEN * 4 # since we concatenated 4 sequences
RNN_CELLSIZE = 32 # size of the RNN cells
SEQLEN = 16 # unrolled sequence length
BATCHSIZE = 32 # mini-batch size
LAST_N = SEQLEN//2 # loss computed on the last N elements of the sequence in the advanced RNN model
picture_this_2(data, BATCHSIZE, SEQLEN) # execute multiple times to see different sample sequences
# training to predict the same sequence shifted by one (next value)
labeldata = np.roll(data, -1)
# cut data into sequences
traindata = np.reshape(data, [-1, SEQLEN])
labeldata = np.reshape(labeldata, [-1, SEQLEN])
# make an evaluation dataset by cutting the sequences differently
evaldata = np.roll(data, -SEQLEN//2)
evallabels = np.roll(evaldata, -1)
evaldata = np.reshape(evaldata, [-1, SEQLEN])
evallabels = np.reshape(evallabels, [-1, SEQLEN])
def get_training_dataset(last_n=1):
dataset = tf.data.Dataset.from_tensor_slices(
(
traindata, # features
labeldata[:,-last_n:SEQLEN] # targets: the last element or last n elements in the shifted sequence
)
)
# Dataset API used here to put the dataset into shape
dataset = dataset.repeat()
dataset = dataset.shuffle(DATA_LEN//SEQLEN) # shuffling is important ! (Number of sequences in shuffle buffer: all of them)
dataset = dataset.batch(BATCHSIZE, drop_remainder = True)
return dataset
def get_evaluation_dataset(last_n=1):
dataset = tf.data.Dataset.from_tensor_slices(
(
evaldata, # features
evallabels[:,-last_n:SEQLEN] # targets: the last element or last n elements in the shifted sequence
)
)
# Dataset API used here to put the dataset into shape
dataset = dataset.batch(evaldata.shape[0], drop_remainder = True) # just one batch with everything
return dataset
train_ds = get_training_dataset()
for features, labels in train_ds.take(10):
print("input_shape:", features.numpy().shape, ", shape of labels:", labels.numpy().shape)
# this is how to create a Keras model from neural network layers
def compile_keras_sequential_model(list_of_layers, model_name):
# a tf.keras.Sequential model is a sequence of layers
model = tf.keras.Sequential(list_of_layers, name=model_name)
# to finalize the model, specify the loss, the optimizer and metrics
model.compile(
loss = 'mean_squared_error',
optimizer = 'rmsprop',
metrics = ['RootMeanSquaredError'])
# this prints a description of the model
model.summary()
return model
#
# three very simplistic "models" that require no training. Can you beat them ?
#
# SIMPLISTIC BENCHMARK MODEL 1
predict_same_as_last_value = lambda x: x[:,-1] # shape of x is [BATCHSIZE,SEQLEN]
# SIMPLISTIC BENCHMARK MODEL 2
predict_trend_from_last_two_values = lambda x: x[:,-1] + (x[:,-1] - x[:,-2])
# SIMPLISTIC BENCHMARK MODEL 3
predict_random_value = lambda x: tf.random.uniform(tf.shape(x)[0:1], -2.0, 2.0)
def model_layers_from_lambda(lambda_fn, input_shape, output_shape):
return [tf.keras.layers.Lambda(lambda_fn, input_shape=input_shape),
tf.keras.layers.Reshape(output_shape)]
model_layers_RAND = model_layers_from_lambda(predict_random_value, input_shape=[SEQLEN,], output_shape=[1,])
model_layers_LAST = model_layers_from_lambda(predict_same_as_last_value, input_shape=[SEQLEN,], output_shape=[1,])
model_layers_LAST2 = model_layers_from_lambda(predict_trend_from_last_two_values, input_shape=[SEQLEN,], output_shape=[1,])
#
# three neural network models for comparison, in increasing order of complexity
#
# BENCHMARK MODEL 4: linear model (RMSE: 0.215 after 10 epochs)
model_layers_LINEAR = [tf.keras.layers.Dense(1, input_shape=[SEQLEN,])] # output shape [BATCHSIZE, 1]
# BENCHMARK MODEL 5: 2-layer dense model (RMSE: 0.197 after 10 epochs)
model_layers_DNN = [tf.keras.layers.Dense(SEQLEN//2, activation='relu', input_shape=[SEQLEN,]), # input shape [BATCHSIZE, SEQLEN]
tf.keras.layers.Dense(1)] # output shape [BATCHSIZE, 1]
# BENCHMARK MODEL 6: convolutional (RMSE: 0.186 after 10 epochs)
model_layers_CNN = [
tf.keras.layers.Reshape([SEQLEN, 1], input_shape=[SEQLEN,]), # [BATCHSIZE, SEQLEN, 1] is necessary for conv model
tf.keras.layers.Conv1D(filters=8, kernel_size=4, activation='relu', padding="same"), # [BATCHSIZE, SEQLEN, 8]
tf.keras.layers.Conv1D(filters=16, kernel_size=3, activation='relu', padding="same"), # [BATCHSIZE, SEQLEN, 8]
tf.keras.layers.Conv1D(filters=8, kernel_size=1, activation='relu', padding="same"), # [BATCHSIZE, SEQLEN, 8]
tf.keras.layers.MaxPooling1D(pool_size=2, strides=2), # [BATCHSIZE, SEQLEN//2, 8]
tf.keras.layers.Conv1D(filters=8, kernel_size=3, activation='relu', padding="same"), # [BATCHSIZE, SEQLEN//2, 8]
tf.keras.layers.MaxPooling1D(pool_size=2, strides=2), # [BATCHSIZE, SEQLEN//4, 8]
# mis-using a conv layer as linear regression :-)
tf.keras.layers.Conv1D(filters=1, kernel_size=SEQLEN//4, activation=None, padding="valid"), # output shape [BATCHSIZE, 1, 1]
tf.keras.layers.Reshape([1,]) ] # output shape [BATCHSIZE, 1]
# instantiate the benchmark models and train those that need training
steps_per_epoch = DATA_LEN // SEQLEN // BATCHSIZE
NB_BENCHMARK_EPOCHS = 10
model_RAND = compile_keras_sequential_model(model_layers_RAND, "RAND") # Simplistic model without parameters. It needs no training.
model_LAST = compile_keras_sequential_model(model_layers_LAST, "LAST") # Simplistic model without parameters. It needs no training.
model_LAST2 = compile_keras_sequential_model(model_layers_LAST2, "LAST2") # Simplistic model without parameters. It needs no training.
model_LINEAR = compile_keras_sequential_model(model_layers_LINEAR, "LINEAR")
model_LINEAR.fit(get_training_dataset(), steps_per_epoch=steps_per_epoch, epochs=NB_BENCHMARK_EPOCHS)
model_DNN = compile_keras_sequential_model(model_layers_DNN, "DNN")
model_DNN.fit(get_training_dataset(), steps_per_epoch=steps_per_epoch, epochs=NB_BENCHMARK_EPOCHS)
model_CNN = compile_keras_sequential_model(model_layers_CNN, "CNN")
model_CNN.fit(get_training_dataset(), steps_per_epoch=steps_per_epoch, epochs=NB_BENCHMARK_EPOCHS)
# evaluate the benchmark models
benchmark_models = [model_RAND, model_LAST, model_LAST2, model_LINEAR, model_DNN, model_CNN]
benchmark_rmses = []
for model in benchmark_models:
_, rmse = model.evaluate(get_evaluation_dataset(), steps=1)
benchmark_rmses.append(rmse)
# RNN model (RMSE: 0.164 after 10 epochs)
model_RNN = tf.keras.Sequential([
tf.keras.layers.Reshape([SEQLEN, 1], input_shape=[SEQLEN,]), # [BATCHSIZE, SEQLEN, 1] is necessary for RNN model
tf.keras.layers.GRU(RNN_CELLSIZE, return_sequences=True), # output shape [BATCHSIZE, SEQLEN, RNN_CELLSIZE]
tf.keras.layers.GRU(RNN_CELLSIZE), # keep only last output in sequence: output shape [BATCHSIZE, RNN_CELLSIZE]
tf.keras.layers.Dense(1) # output shape [BATCHSIZE, 1]
])
model_RNN.compile(
loss = 'mean_squared_error',
optimizer = 'rmsprop',
metrics = ['RootMeanSquaredError'])
model_RNN.summary()
# RNN model with loss computed on last N elements (RMSE: 0.163 after 10 epochs)
model_RNN_N = tf.keras.Sequential([
tf.keras.layers.Reshape([SEQLEN, 1], input_shape=[SEQLEN,]), # [BATCHSIZE, SEQLEN, 1] is necessary for RNN model
tf.keras.layers.GRU(RNN_CELLSIZE, return_sequences=True),
tf.keras.layers.GRU(RNN_CELLSIZE, return_sequences=True), # output shape [BATCHSIZE, SEQLEN, RNN_CELLSIZE]
tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1)), # output shape [BATCHSIZE, SEQLEN, 1]
tf.keras.layers.Lambda(lambda x: x[:,-LAST_N:SEQLEN,0]) # last N item(s) in sequence: output shape [BATCHSIZE, LAST_N]
])
model_RNN_N.compile(
loss = 'mean_squared_error',
optimizer = 'rmsprop',
metrics = ['RootMeanSquaredError'])
model_RNN_N.summary()
# You can re-execute this cell to continue training
steps_per_epoch = DATA_LEN // SEQLEN // BATCHSIZE
NB_EPOCHS = 10 # use NB_EPOCHS=1 when coding your models
# use NB_EPOCHS=10 when training for real (benchmark models were trained for 10 epochs)
model = model_RNN_N # train your model: model_RNN or model_RNN_N
train_ds = get_training_dataset(last_n=LAST_N) # use last_n=LAST_N for model_RNN_N
history = model.fit(train_ds, steps_per_epoch=steps_per_epoch, epochs=NB_EPOCHS)
plt.plot(history.history['loss'])
plt.show()
# Here "evaluating" using the training dataset
eval_ds = get_evaluation_dataset(last_n=LAST_N) # use last_n=LAST_N for model_RNN_N
loss, your_rmse = model.evaluate(eval_ds, steps=1)
# NOTE: benchmark models were trained for 10 epochs
picture_this_hist_yours(benchmark_rmses + [your_rmse])
# execute multiple times to see different sample sequences
subset = np.random.choice(DATA_LEN//SEQLEN, 8) # pick 8 eval sequences at random
predictions = model.predict(evaldata[subset], steps=1) # prediction directly from numpy array
picture_this_3(predictions[:,-1], evaldata[subset], evallabels[subset], SEQLEN)
your_RNN_layers = [
tf.keras.layers.Reshape([SEQLEN, 1], input_shape=[SEQLEN,]), # [BATCHSIZE, SEQLEN, 1] is necessary for RNN model
tf.keras.layers.GRU(RNN_CELLSIZE, return_sequences=True), # output shape [BATCHSIZE, SEQLEN, RNN_CELLSIZE]
tf.keras.layers.GRU(RNN_CELLSIZE), # keep only last output in sequence: output shape [BATCHSIZE, RNN_CELLSIZE]
tf.keras.layers.Dense(1) # output shape [BATCHSIZE, 1]
]
assert len(your_RNN_layers)>0, "the model has no layers"
your_RNN_model = compile_keras_sequential_model(your_RNN_layers, 'RNN')
your_RNN_N_layers = [
tf.keras.layers.Reshape([SEQLEN, 1], input_shape=[SEQLEN,]), # [BATCHSIZE, SEQLEN, 1] is necessary for RNN model
tf.keras.layers.GRU(RNN_CELLSIZE, return_sequences=True),
tf.keras.layers.GRU(RNN_CELLSIZE, return_sequences=True), # output shape [BATCHSIZE, SEQLEN, RNN_CELLSIZE]
tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1)), # output shape [BATCHSIZE, SEQLEN, 1]
tf.keras.layers.Lambda(lambda x: x[:,-LAST_N:SEQLEN,0]) # last N item(s) in sequence: output shape [BATCHSIZE, LAST_N]
]
assert len(your_RNN_layers)>0, "the model has no layers"
your_RNN_N_model = compile_keras_sequential_model(your_RNN_N_layers, 'RNN_N')
# train your models from scratch
your_RNN_model.fit(get_training_dataset(), steps_per_epoch=steps_per_epoch, epochs=NB_BENCHMARK_EPOCHS)
your_RNN_N_model.fit(get_training_dataset(last_n=LAST_N), steps_per_epoch=steps_per_epoch, epochs=NB_BENCHMARK_EPOCHS)
# evaluate all models
rmses = []
benchmark_models = [model_RAND, model_LAST, model_LAST2, model_LINEAR, model_DNN, model_CNN]
for model in benchmark_models:
_, rmse = model.evaluate(get_evaluation_dataset(), steps=1)
rmses.append(rmse)
_, rmse = your_RNN_model.evaluate(get_evaluation_dataset(), steps=1)
rmses.append(rmse)
_, rmse = your_RNN_N_model.evaluate(get_evaluation_dataset(last_n=LAST_N), steps=1)
rmses.append(rmse)
picture_this_hist_all(rmses)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate fake dataset
Step2: Hyperparameters
Step3: Visualize training sequences
Step4: Prepare datasets
Step5: Peek at the data
Step6: Benchmark model definitions
Step7: RNN models
Step8: Training loop
Step9: Evaluation
Step10: Predictions
Step11: <a name="benchmark"></a>
|
135 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='inputs_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name = 'inputs_z')
return inputs_real, inputs_z
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('generator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation = None)
# Leaky ReLU
h1 = tf.maximum(alpha*h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation = None)
# Leaky ReLU
h1 = tf.maximum(alpha*h1, h1)
logits = tf.layers.dense(h1, 1, activation = None)
out = tf.sigmoid(logits)
return out, logits
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_real)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Inputs
Step2: Generator network
Step3: Discriminator
Step4: Hyperparameters
Step5: Build network
Step6: Discriminator and Generator Losses
Step7: Optimizers
Step8: Training
Step9: Training loss
Step10: Generator samples from training
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
|
136 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
# Example of a picture
index = 23
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
# # ### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
# GRADED FUNCTION: sigmoid
def sigmoid(z):
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros(shape=(dim,1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot(w.T,X)+b) # compute activation
cost = (-1/m)*np.sum(Y*np.log(A)+(1-Y)*(np.log(1-A))) # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = (1/m)*np.dot(X,(A-Y).T)
db = (1/m)*np.sum(A-Y)
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w,b,X,Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w-learning_rate*dw
b = b-learning_rate*db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 training examples
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T,X)+b)
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
Y_prediction[0,i]= 1 if A[0,i]>0.5 else 0
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
w = np.array([[0.1124579],[0.23106775]])
b = -0.3
X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])
print ("predictions = " + str(predict(w, b, X)))
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = initialize_with_zeros(X_train.shape[0])
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w, b, X_test)
Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[int(d["Y_prediction_test"][0,index])].decode("utf-8") + "\" picture.")
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "my_image.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
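# Note: scipy.ndimage.imread and scipy.misc.imresize used above have been removed from
# recent SciPy releases. A hedged equivalent using Pillow instead (assumes
# `from PIL import Image`; meant only as a sketch, not part of the original assignment):
# image = np.array(Image.open(fname).resize((num_px, num_px)))
# my_image = image.reshape((1, num_px * num_px * 3)).T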
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2 - Overview of the Problem set
Step2: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Step3: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Step4: Expected Output for m_train, m_test and num_px
Step5: Expected Output
Step7: <font color='blue'>
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Expected Output
Step16: Expected Output
Step17: Run the following cell to train your model.
Step18: Expected Output
Step19: Let's also plot the cost function and the gradients.
Step20: Interpretation
Step21: Interpretation
|
137 | <ASSISTANT_TASK:>
Python Code:
# We start with the usual import statements
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import climlab
# create a new model with all default parameters (except the grid size)
mymodel = climlab.EBM_annual(num_lat = 30)
# What did we just do?
print(mymodel)
mymodel.Ts
mymodel.Ts.shape
mymodel.lat
plt.plot(mymodel.lat, mymodel.Ts)
plt.xlabel('Latitude')
plt.ylabel('Temperature (deg C)')
# The dictionary of sub-processes:
mymodel.subprocess
# Make a copy of the original temperature array
initial = mymodel.Ts.copy()
# Take a single timestep forward!
mymodel.step_forward()
# Check out the difference
print(mymodel.Ts - initial)
# How long is a single timestep?
mymodel.timestep
mymodel.param
# Each climlab model has a dictionary called diagnostics.
# Let's look at the names of the fields in this dicionary:
mymodel.diagnostics.keys()
# We can access individual fields either through standard dictionary notation:
mymodel.diagnostics['ASR']
# Or using the more interactive 'dot' notation:
mymodel.ASR
# Let's use the diagnostics to make a plot of the current state of the radiation
plt.plot(mymodel.lat, mymodel.ASR, label='ASR')
plt.plot(mymodel.lat, mymodel.OLR, label='OLR')
plt.xlabel('Latitude')
plt.ylabel('W/m2')
plt.legend()
plt.grid()
climlab.global_mean(mymodel.net_radiation)
climlab.global_mean(mymodel.Ts)
# Loop 90 times for 1 year of simulation
for n in range(90):
mymodel.step_forward()
print('Global mean temperature is %0.1f degrees C.' % climlab.global_mean(mymodel.Ts))
print('Global mean energy imbalance is %0.1f W/m2.' % climlab.global_mean(mymodel.net_radiation))
mymodel.integrate_years(1.)
print('Global mean temperature is %0.1f degrees C.' % climlab.global_mean(mymodel.Ts))
print('Global mean energy imbalance is %0.1f W/m2.' % climlab.global_mean(mymodel.net_radiation))
plt.plot(mymodel.lat, mymodel.Ts)
plt.xlabel('Latitude')
plt.ylabel('Temperature (deg C)')
plt.grid()
plt.plot(mymodel.lat_bounds, mymodel.heat_transport())
plt.xlabel('Latitude')
plt.ylabel('PW')
plt.grid()
def ebm_plot(e, return_fig=False):
templimits = -20,32
radlimits = -340, 340
htlimits = -6,6
latlimits = -90,90
lat_ticks = np.arange(-90,90,30)
fig = plt.figure(figsize=(8,12))
ax1 = fig.add_subplot(3,1,1)
ax1.plot(e.lat, e.Ts)
ax1.set_ylim(templimits)
ax1.set_ylabel('Temperature (deg C)')
ax2 = fig.add_subplot(3,1,2)
ax2.plot(e.lat, e.ASR, 'k--', label='SW' )
ax2.plot(e.lat, -e.OLR, 'r--', label='LW' )
ax2.plot(e.lat, e.net_radiation, 'c-', label='net rad' )
ax2.plot(e.lat, e.heat_transport_convergence(), 'g--', label='dyn' )
ax2.plot(e.lat, e.net_radiation.squeeze() + e.heat_transport_convergence(), 'b-', label='total' )
ax2.set_ylim(radlimits)
ax2.set_ylabel('Energy budget (W m$^{-2}$)')
ax2.legend()
ax3 = fig.add_subplot(3,1,3)
ax3.plot(e.lat_bounds, e.heat_transport() )
ax3.set_ylim(htlimits)
ax3.set_ylabel('Heat transport (PW)')
for ax in [ax1, ax2, ax3]:
ax.set_xlabel('Latitude')
ax.set_xlim(latlimits)
ax.set_xticks(lat_ticks)
ax.grid()
if return_fig:
return fig
ebm_plot(mymodel)
model2 = climlab.EBM_annual(num_lat=30)
print(model2)
# The default initial temperature
print(model2.Ts)
# Now let's change it to be 15 degrees everywhere
model2.Ts[:] = 15.
print(model2.Ts)
model2.compute_diagnostics()
ebm_plot(model2)
# Code to generate individual frames for the animation
# You need to specify a valid path for your computer, or else this won't work
# Uncomment the code below to re-create the frames from the animation
#nsteps = 90
#for n in range(nsteps):
# fig = ebm_plot(model2, return_fig=True)
#
# filepath = '/Users/br546577/Desktop/temp/ebm_animation_frames/'
# filename = filepath + 'ebm_animation' + str(n).zfill(4) + '.png'
# fig.savefig(filename)
# plt.close()
#
# model2.step_forward()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What is a climlab Process?
Step2: We have an array of temperatures in degrees Celsius. Let's see how big this array is
Step3: Every state variable exists on a spatial grid. In this case, the grid is an array of latitude points
Step4: We can, for example, plot the current temperature versus latitude
Step5: It is based on a very general concept of a model as a collection of individual, interacting processes.
Step6: So what does it do?
Step7: Looks like the temperature got a bit colder near the equator and warmer near the poles
Step8: This value is in seconds. It is actually 1/90th of a year (so, to step forward one year, we need 90 individual steps). This is a default value -- we could change it if we wanted to.
Step9: Accessing the model diagnostics
Step10: This plot shows that $ASR > OLR$ (system is gaining extra energy) across the tropics, and $ASR < OLR$ (system is losing energy) near the poles.
Step11: Running the model out to equilibrium
Step12: Since there is still a significant imbalance, we are not yet at equilibrium. We should step forward again.
Step13: We are now quite close to equilibrium. Let's make some plots
Step14: Let's make some nice plots of all the terms in the energy budget.
Step15: What if we start from a very different initial temperature?
Step16: Why is the heat transport zero everywhere?
|
138 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
USE_COLAB = False
if not USE_COLAB:
plt.rc("text", usetex=True)
import numpy as np
C = 10
alpha = -0.5
q = 0.9
num_iter = 10
sublinear = np.array([C * k**alpha for k in range(1, num_iter + 1)])
linear = np.array([C * q**k for k in range(1, num_iter + 1)])
superlinear = np.array([C * q**(k**2) for k in range(1, num_iter + 1)])
quadratic = np.array([C * q**(2**k) for k in range(1, num_iter + 1)])
plt.figure(figsize=(12,8))
plt.semilogy(np.arange(1, num_iter+1), sublinear,
label=r"Sublinear, $\alpha = -0.5$", linewidth=5)
plt.semilogy(np.arange(1, num_iter+1), superlinear, linewidth=5,
label=r"Superlinear, $q = 0.5, p=2$")
plt.semilogy(np.arange(1, num_iter+1), linear,
label=r"Linear, $q = 0.5$", linewidth=5)
plt.semilogy(np.arange(1, num_iter+1), quadratic,
label=r"Quadratic, $q = 0.5$", linewidth=5)
plt.xlabel("Number of iterations, $k$", fontsize=28)
plt.ylabel("Error rate upper bound", fontsize=28)
plt.legend(loc="best", fontsize=26)
plt.xticks(fontsize = 28)
_ = plt.yticks(fontsize = 28)
%matplotlib notebook
import matplotlib.pyplot as plt
plt.rc("text", usetex=True)
import ipywidgets as ipywidg
import numpy as np
import liboptpy.unconstr_solvers as methods
import liboptpy.step_size as ss
from tqdm import tqdm
f = lambda x: np.power(x, 2)
gradf = lambda x: 2 * x
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
def update(x0, step):
gd = methods.fo.GradientDescent(f, gradf, ss.ConstantStepSize(step))
_ = gd.solve(np.array([x0]), max_iter=10)
x_hist = gd.get_convergence()
x = np.linspace(-5, 5)
ax.clear()
ax.plot(x, f(x), color="r", label="$f(x) = x^2$")
y_hist = np.array([f(x) for x in x_hist])
x_hist = np.array(x_hist)
plt.quiver(x_hist[:-1], y_hist[:-1], x_hist[1:]-x_hist[:-1], y_hist[1:]-y_hist[:-1],
scale_units='xy', angles='xy', scale=1, width=0.005, color="green", label="Descent path")
ax.legend()
fig.canvas.draw()
step_slider = ipywidg.FloatSlider(value=0.8, min=0, max=1.2, step=0.1, description="Step")
x0_slider = ipywidg.FloatSlider(value=1.5, min=-4, max=4, step=0.1, description="Initial point")
_ = ipywidg.interact(update, x0=x0_slider, step=step_slider)
def plot_alpha(f, grad, x, h, alphas, beta1, beta2):
df = np.zeros_like(alphas)
for i, alpha in enumerate(alphas):
df[i] = f(x + alpha * h)
upper_bound = f(x) + beta1 * alphas * grad(x) * h
lower_bound = f(x) + beta2 * alphas * grad(x) * h
plt.plot(alphas, df, label=r"$f(x + \alpha h)$")
plt.plot(alphas, upper_bound, label="Upper bound")
plt.plot(alphas, lower_bound, label="Lower bound")
plt.xlabel(r"$\alpha$", fontsize=18)
plt.legend(loc="best", fontsize=18)
f = lambda x: x**2
grad = lambda x: 2 * x
beta1 = 0.1
beta2 = 0.9
x0 = 0.5
plot_alpha(f, grad, x0, -grad(x0), np.linspace(1e-3, 1.01, 10), beta1, beta2)
x_range = np.linspace(1e-10, 4)
plt.plot(x_range, x_range * np.log(x_range))
x0 = 1
f = lambda x: x * np.log(x)
grad = lambda x: np.log(x) + 1
beta1 = 0.3
beta2 = 0.7
plot_alpha(f, grad, x0, -grad(x0), np.linspace(1e-3, 0.9, 10), beta1, beta2)
def GradientDescent(f, gradf, x0, epsilon, num_iter, line_search,
disp=False, callback=None, **kwargs):
x = x0.copy()
iteration = 0
opt_arg = {"f": f, "grad_f": gradf}
for key in kwargs:
opt_arg[key] = kwargs[key]
while True:
gradient = gradf(x)
alpha = line_search(x, -gradient, **opt_arg)
x = x - alpha * gradient
if callback is not None:
callback(x)
iteration += 1
if disp:
print("Current function val =", f(x))
print("Current gradient norm = ", np.linalg.norm(gradf(x)))
if np.linalg.norm(gradf(x)) < epsilon:
break
if iteration >= num_iter:
break
res = {"x": x, "num_iter": iteration, "tol": np.linalg.norm(gradf(x))}
return res
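# A minimal backtracking (Armijo) line-search sketch that matches the
# line_search(x, h, **opt_arg) call used above. It is only an illustration (the notebook
# itself relies on ss.Backtracking from liboptpy), and the constants beta, rho and
# alpha0 are assumed defaults rather than values taken from the original.
def armijo_line_search(x, h, f, grad_f, beta=1e-4, rho=0.5, alpha0=1.0, **kwargs):
    alpha = alpha0
    # shrink the step until the sufficient-decrease (Armijo) condition holds
    while f(x + alpha * h) > f(x) + beta * alpha * grad_f(x).dot(h):
        alpha *= rho
    return alpha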
def my_f(x, A):
return 0.5 * x.dot(A.dot(x))
def my_gradf(x, A):
return A.dot(x)
plt.rc("text", usetex=True)
gammas = [0.1, 0.5, 1, 2, 3, 4, 5, 10, 20, 50, 100, 1000, 5000, 10000]
# gammas = [1]
num_iter_converg = []
for g in gammas:
A = np.array([[1, 0],
[0, g]], dtype=np.float64)
f = lambda x: my_f(x, A)
gradf = lambda x: my_gradf(x, A)
# x0 = np.random.rand(A.shape[0])
# x0 = np.sort(x0)
# x0 = x0[::-1]
x0 = np.array([g, 1], dtype=np.float64)
# print x0[1] / x0[0]
gd = methods.fo.GradientDescent(f, gradf, ss.ExactLineSearch4Quad(A))
x = gd.solve(x0, tol=1e-7, max_iter=100)
num_iter_converg.append(len(gd.get_convergence()))
plt.figure(figsize=(8, 6))
plt.loglog(gammas, num_iter_converg)
plt.xticks(fontsize = 20)
plt.yticks(fontsize = 20)
plt.xlabel(r"$\gamma$", fontsize=20)
plt.ylabel(r"Number of iterations with $\varepsilon = 10^{-7}$", fontsize=20)
import numpy as np
import cvxpy as cvx
n = 1000
m = 2000
A = np.random.rand(n, m)
x = cvx.Variable(n)
obj = cvx.Minimize(cvx.sum(-cvx.log(1 - A.T * x)) -
cvx.sum(cvx.log(1 - cvx.square(x))))
prob = cvx.Problem(obj)
prob.solve(solver="SCS", verbose=True)
x = x.value
print("Optimal value =", prob.value)
import cvxpy as cvx
print(cvx.installed_solvers())
# !pip install jax
# !pip install jaxlib
import jax.numpy as jnp
import jax
# from jax.config import config
# config.update("jax_enable_x64", True)
A = jnp.array(A)
print(A.dtype)
x0 = jnp.zeros(n)
f = lambda x: -jnp.sum(jnp.log(1 - A.T@x)) - jnp.sum(jnp.log(1 - x*x))
grad_f = lambda x: jnp.sum(A @ (jnp.diagflat(1 / (1 - A.T @ x))), \
axis=1) + 2 * x / (1 - jnp.power(x, 2))
grad_f_jax = jax.grad(f)
print(jnp.linalg.norm(grad_f(x0) - grad_f_jax(x0)))
gd = methods.fo.GradientDescent(f, grad_f_jax, ss.Backtracking("Armijo", rho=0.5, beta=0.1, init_alpha=1.))
x = gd.solve(x0, tol=1e-5, max_iter=100, disp=True)
x_conv = gd.get_convergence()
grad_conv = [jnp.linalg.norm(grad_f_jax(x)) for x in x_conv]
plt.figure(figsize=(8,6))
plt.semilogy(grad_conv, label=r"$\| f'(x_k) \|_2$")
plt.semilogy([np.linalg.norm(x - np.array(x_k)) for x_k in x_conv], label=r"$\|x_k - x^*\|_2$")
plt.semilogy([np.linalg.norm(prob.value - f(np.array(x_k))) for x_k in x_conv], label=r"$\|f(x_k) - f^*\|_2$")
plt.semilogy([np.linalg.norm(np.array(x_conv[i]) - np.array(x_conv[i+1])) for i in range(len(x_conv) - 1)], label=r"$\|x_k - x_{k+1}\|_2$")
plt.semilogy([np.linalg.norm(f(np.array(x_conv[i])) - f(np.array(x_conv[i+1]))) for i in range(len(x_conv) - 1)], label=r"$\|f(x_k) - f(x_{k+1})\|_2$")
plt.xlabel(r"Number of iteration, $k$", fontsize=20)
plt.ylabel(r"Convergence rate", fontsize=20)
plt.xticks(fontsize = 20)
plt.yticks(fontsize = 20)
plt.legend(loc="best", fontsize=20)
plt.tight_layout()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The significance of convergence theorems (B. T. Polyak, Introduction to Optimization, ch. 1, $\S$ 6)
Step2: $f(x) = x\log x$
Step3: Backtracking
Step4: Choosing the step size
Step5: With a poor initial guess, convergence for an ill-conditioned problem is very slow
Step6: Solving the problem with gradient descent
Step7: More about jax, its capabilities and peculiarities can be found, for example, here
|
139 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import io
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas
import seaborn as sns
import skimage
import skimage.color
import skimage.data
import skimage.feature
import skimage.filters
import skimage.future
import skimage.io
import skimage.morphology
import skimage.segmentation
import skimage.transform
from google.cloud import vision
from google.cloud.vision import types
# first_recon.png was captured using an iPhone 7 Plus in a room illuminated with daylight. No flash.
path_im = "first_recon.png"
os.environ["GOOGLE_APPLICATION_CREDENTIALS"]=r"C:\Users\karhohs\Downloads\bgb-jupyter-0eee48249fae.json"
im = skimage.io.imread(path_im)
skimage.io.imshow(im)
path_im = "first_recon_scale1.png"
with io.open(path_im, 'rb') as image_file:
content = image_file.read()
image = types.Image(content=content)
client = vision.ImageAnnotatorClient()
response = client.label_detection(image=image)
labels = response.label_annotations
print('Labels:')
for label in labels:
print(label.ListFields)
im_hsv = skimage.color.rgb2hsv(im)
im_hsv_dict = {}
im_hsv_dict["hue"] = im_hsv[:,:,0].flatten()
im_hsv_dict["sat"] = im_hsv[:,:,1].flatten()
im_hsv_dict["val"] = im_hsv[:,:,2].flatten()
df_hsv = pandas.DataFrame.from_dict(im_hsv_dict)
sns.set(style="ticks", color_codes=True)
# Set up the matplotlib figure
f, axes = plt.subplots(1, 3, figsize=(20, 8), sharex=True)
sns.despine(left=True)
# hue
dplot_hue = sns.distplot(df_hsv["hue"], color="b", kde=False, ax=axes[0])
p_num = len(dplot_hue.patches)
cmap_hsv = plt.get_cmap("hsv", 50)
hsv_array = cmap_hsv(range(p_num))
for ind, p in enumerate(dplot_hue.patches):
p.set_facecolor(hsv_array[ind])
p.set_alpha(1.0)
# sat
dplot_hue = sns.distplot(df_hsv["sat"], color="k", kde=False, ax=axes[1])
# val
dplot_val = sns.distplot(df_hsv["val"], color="k", kde=False, ax=axes[2])
sns.palplot(hsv_array)
im2 = im_hsv[:,:,0]
im2 = im2 < 0.3
skimage.io.imshow(im2)
im_s = im_hsv[:,:,1]
im_s = skimage.morphology.erosion(im_s, skimage.morphology.selem.disk(11))
im_edge = skimage.filters.sobel(im_s)
thresh = skimage.filters.threshold_otsu(im_edge)
im_edge = im_edge > thresh
contours = skimage.measure.find_contours(skimage.img_as_float(im_edge), 0.99)
im_contour = skimage.img_as_uint(np.zeros_like(im_s))
for ind, obj in enumerate(contours):
for xy in obj:
im_contour[xy[0].astype(int), xy[1].astype(int)] = ind + 1
props = skimage.measure.regionprops(im_contour)
contour_props = {}
contour_props["area"] = [p["area"] for p in props]
contour_props["eccentricity"] = [p["eccentricity"] for p in props]
df_contour = pandas.DataFrame.from_dict(contour_props)
sns.distplot(df_contour["eccentricity"])
df_circular = df_contour.loc[(df_contour["area"] > 1000)]
candidate_circles = df_circular.index.tolist()
candidate_contours = [contours[i] for i in candidate_circles]
sns.distplot(df_circular["area"])
fig, ax = plt.subplots()
ax.imshow(np.zeros_like(im_s))
for n, contour in enumerate(candidate_contours):
ax.plot(contour[:, 1], contour[:, 0], linewidth=2)
ax.axis('image')
ax.set_xticks([])
ax.set_yticks([])
plt.axis()
plt.show()
im_gray = skimage.color.rgb2gray(im)
im_gray_small = skimage.transform.rescale(im2,0.125)
im_edge = skimage.filters.prewitt(im_gray_small)
im_edge = skimage.morphology.dilation(im_edge)
hough_radii = np.arange(15, 40, 10)
# run the circular Hough transform on the edge image over the full range of radii
hough_res = skimage.transform.hough_circle(im_edge, hough_radii)
accums, cx, cy, radii = skimage.transform.hough_circle_peaks(hough_res, hough_radii, total_num_peaks=3)
radii
skimage.io.imshow(im_edge)
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 4))
image = skimage.color.gray2rgb(im_gray_small)
for center_y, center_x, radius in zip(cy, cx, radii):
circy, circx = skimage.draw.circle_perimeter(center_y, center_x, radius)
image[circy, circx] = (220, 20, 20)
ax.imshow(image)
plt.show()
img = im
labels1 = skimage.segmentation.slic(img, compactness=30, n_segments=400)
out1 = skimage.color.label2rgb(labels1, img, kind='avg')
g = skimage.future.graph.rag_mean_color(img, labels1, mode='similarity')
labels2 = skimage.future.graph.cut_normalized(labels1, g)
out2 = skimage.color.label2rgb(labels2, img, kind='avg')
fig, ax = plt.subplots(nrows=2, sharex=True, sharey=True, figsize=(6, 8))
ax[0].imshow(out1)
ax[1].imshow(out2)
for a in ax:
a.axis('off')
plt.tight_layout()
segments = skimage.segmentation.felzenszwalb(im, scale=500.0, sigma=3.0, min_size=5)
skimage.io.imshow(segments)
# active_contour requires an initial contour ("snake") as its second argument, so the
# original single-argument call raises a TypeError and is left commented out here:
# segments = skimage.segmentation.active_contour(im)
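# A hedged sketch of a valid call: the snake is seeded with a circle around the image
# centre, and the 0.4 radius factor plus the Gaussian smoothing are illustrative
# choices, not part of the original notebook.
s = np.linspace(0, 2 * np.pi, 400)
rows, cols = im_gray.shape
init = np.column_stack([rows / 2 + 0.4 * rows * np.sin(s),
                        cols / 2 + 0.4 * cols * np.cos(s)])
snake = skimage.segmentation.active_contour(
    skimage.filters.gaussian(im_gray, 3), init)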
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Color threshold
Step2: Based on the histogram of the hue, threshold the hue such that only the yellowish colors remain.
Step3: Add the cities back using a hough transform.
|
140 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'product': [1179160, 1066490, 1148126, 1069104, 1069105, 1160330, 1069098, 1077784, 1193369, 1179741],
'score': [0.424654, 0.424509, 0.422207, 0.420455, 0.414603, 0.168784, 0.168749, 0.168738, 0.168703, 0.168684]})
products = [1066490, 1077784]
df.loc[~df['product'].isin(products), 'score'] *= 10
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
141 | <ASSISTANT_TASK:>
Python Code:
import kfp
import kfp.gcp as gcp
import kfp.dsl as dsl
import kfp.compiler as compiler
import kfp.components as comp
import datetime
import kubernetes as k8s
# Required Parameters
PROJECT_ID='<ADD GCP PROJECT HERE>'
GCS_BUCKET='gs://<ADD STORAGE LOCATION HERE>'
# Optional Parameters, but required for running outside Kubeflow cluster
# The host for 'AI Platform Pipelines' ends with 'pipelines.googleusercontent.com'
# The host for pipeline endpoint of 'full Kubeflow deployment' ends with '/pipeline'
# Examples are:
# https://7c021d0340d296aa-dot-us-central2.pipelines.googleusercontent.com
# https://kubeflow.endpoints.kubeflow-pipeline.cloud.goog/pipeline
HOST = '<ADD HOST NAME TO TALK TO KUBEFLOW PIPELINE HERE>'
# For 'full Kubeflow deployment' on GCP, the endpoint is usually protected through IAP, therefore the following
# will be needed to access the endpoint.
CLIENT_ID = '<ADD OAuth CLIENT ID USED BY IAP HERE>'
OTHER_CLIENT_ID = '<ADD OAuth CLIENT ID USED TO OBTAIN AUTH CODES HERE>'
OTHER_CLIENT_SECRET = '<ADD OAuth CLIENT SECRET USED TO OBTAIN AUTH CODES HERE>'
# This is to ensure the proper access token is present to reach the end point for 'AI Platform Pipelines'
# If you are not working with 'AI Platform Pipelines', this step is not necessary
! gcloud auth print-access-token
# Create kfp client
in_cluster = True
try:
k8s.config.load_incluster_config()
except:
in_cluster = False
pass
if in_cluster:
client = kfp.Client()
else:
if HOST.endswith('googleusercontent.com'):
CLIENT_ID = None
OTHER_CLIENT_ID = None
OTHER_CLIENT_SECRET = None
client = kfp.Client(host=HOST,
client_id=CLIENT_ID,
other_client_id=OTHER_CLIENT_ID,
other_client_secret=OTHER_CLIENT_SECRET)
%%bash
# Create folders if they don't exist.
mkdir -p tmp/components/mnist_training
# Create the Python file that lists GCS blobs.
cat > ./tmp/components/mnist_training/app.py <<HERE
import argparse
from datetime import datetime
import tensorflow as tf
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_file', type=str, required=True, help='Name of the model file.')
parser.add_argument(
'--bucket', type=str, required=True, help='GCS bucket name.')
args = parser.parse_args()
bucket=args.bucket
model_file=args.model_file
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
print(model.summary())
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
callbacks = [
tf.keras.callbacks.TensorBoard(log_dir=bucket + '/logs/' + datetime.now().date().__str__()),
# Interrupt training if val_loss stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
]
model.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(x_test, y_test))
model.save(model_file)
from tensorflow import gfile
gcs_path = bucket + "/" + model_file
if gfile.Exists(gcs_path):
gfile.Remove(gcs_path)
gfile.Copy(model_file, gcs_path)
with open('/output.txt', 'w') as f:
f.write(gcs_path)
HERE
%%bash
# Create Dockerfile.
cat > ./tmp/components/mnist_training/Dockerfile <<EOF
FROM tensorflow/tensorflow:1.15.0-py3
WORKDIR /app
COPY . /app
EOF
IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"
GCR_IMAGE="gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}".format(
PROJECT_ID=PROJECT_ID,
IMAGE_NAME=IMAGE_NAME,
TAG=TAG
)
APP_FOLDER='./tmp/components/mnist_training/'
# In the following, for the purpose of demonstration
# Cloud Build is choosen for 'AI Platform Pipelines'
# kaniko is choosen for 'full Kubeflow deployment'
if HOST.endswith('googleusercontent.com'):
# kaniko is not pre-installed with 'AI Platform Pipelines'
import subprocess
# ! gcloud builds submit --tag ${IMAGE_NAME} ${APP_FOLDER}
cmd = ['gcloud', 'builds', 'submit', '--tag', GCR_IMAGE, APP_FOLDER]
build_log = (subprocess.run(cmd, stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))
print(build_log)
else:
if kfp.__version__ <= '0.1.36':
# kfp 0.1.36+ introduced a breaking change that makes the following code stop working
import subprocess
builder = kfp.containers._container_builder.ContainerBuilder(
gcs_staging=GCS_BUCKET + "/kfp_container_build_staging"
)
kfp.containers.build_image_from_working_dir(
image_name=GCR_IMAGE,
working_dir=APP_FOLDER,
builder=builder
)
else:
raise RuntimeError("Please build the docker image using either [Docker] or [Cloud Build]")
image_name = GCR_IMAGE
def mnist_train_op(model_file, bucket):
return dsl.ContainerOp(
name="mnist_training_container",
image='gcr.io/{}/mnist_training_kf_pipeline:latest'.format(PROJECT_ID),
command=['python', '/app/app.py'],
file_outputs={'outputs': '/output.txt'},
arguments=['--bucket', bucket, '--model_file', model_file]
)
# Define the pipeline
@dsl.pipeline(
name='Mnist pipeline',
description='A toy pipeline that performs mnist model training.'
)
def mnist_container_pipeline(
model_file: str = 'mnist_model.h5',
bucket: str = GCS_BUCKET
):
mnist_train_op(model_file=model_file, bucket=bucket).apply(gcp.use_gcp_secret('user-gcp-sa'))
pipeline_func = mnist_container_pipeline
experiment_name = 'minist_kubeflow'
arguments = {"model_file":"mnist_model.h5",
"bucket":GCS_BUCKET}
run_name = pipeline_func.__name__ + ' run'
# Submit pipeline directly from pipeline function
run_result = client.create_run_from_pipeline_func(pipeline_func,
experiment_name=experiment_name,
run_name=run_name,
arguments=arguments)
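# Optional follow-up: the same pipeline can also be compiled into a reusable package
# before submission. This is a hedged sketch using the kfp v1 compiler imported above;
# the output filename is an arbitrary choice.
pipeline_filename = pipeline_func.__name__ + '.pipeline.zip'
compiler.Compiler().compile(pipeline_func, pipeline_filename)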
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create client
Step2: Wrap an existing Docker container image using ContainerOp
Step3: Creating a Dockerfile
Step4: Build docker image
Step5: If you want to use docker to build the image
Step6: Define each component
Step7: Create your workflow as a Python function
Step8: Submit a pipeline run
|
142 | <ASSISTANT_TASK:>
Python Code:
from IPython import parallel
c=parallel.Client()
dview=c.direct_view()
dview.block=True
c.ids
import numpy as np
x=np.arange(100)
dview.scatter('x',x)
print(c[0]['x'])
print(c[1]['x'])
print(c[-1]['x'])
dview.execute('import numpy as np; y=np.sum(x)')
ys=dview.gather('y')
total=np.sum(ys)
print(total)
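# A small extra illustration of the same DirectView API: map_sync applies a function
# on the engines and gathers the results (the squaring function is just an example,
# not part of the original exercise).
squares = dview.map_sync(lambda n: n ** 2, range(8))
print(squares)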
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Check the number of cores
Step2: Simple parallel summation
Step3: Parallel sum
|
143 | <ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
# check tf version
print(tf.__version__)
a = tf.constant(2)
b = tf.constant(5)
operation = tf.add(a, b, name='cons_add')
with tf.Session() as ses:
print(ses.run(operation))
sub_operation = tf.subtract(a, b, name='cons_subtraction')
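# The subtraction op above is defined but never evaluated; a minimal sketch running it
# in a session, mirroring the addition example (TF 1.x style):
with tf.Session() as ses:
    print(ses.run(sub_operation))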
x = tf.constant([[-1.37 + 2.57j], [-3.37 + 5.33j]])
abs_function = tf.abs(x)
with tf.Session() as ses:
print(ses.run(abs_function))
pos_tensor = tf.constant([[5],[7]])
negative_function = tf.negative(pos_tensor)
with tf.Session() as ses:
print(ses.run(negative_function))
sign_tensor = tf.constant([[5]])
sign_function = tf.sign(sign_tensor)
with tf.Session() as ses:
print(ses.run(sign_function))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Config Constants
Step2: In the variable "b" we will assign a constant with the initial value of "5"
Step3: In the following variable "operation" we will define a sum by applying "add". As parameters we will use the constants defined above, "a" and "b"
Step4: link to the official documentation - add
Step5: Constants - Subtraction
Step6: link to the official documentation - subtract
Step7: official documentation
Step8: tf.negative
Step9: official documentation
Step10: tf.sign
Step11: official documentation
|
144 | <ASSISTANT_TASK:>
Python Code:
import json
series_types = ["Don't Know", "Other nonmetal", "Alkali metal",
"Alkaline earth metal", "Nobel gas", "Metalloid",
"Halogen", "Transition metal", "Post-transition metal",
"Lanthanoid", "Actinoid"]
class Element:
fields = "protons symbol long_name mass series"
repstr = ("Atom(protons={protons}, symbol='{symbol}', "
"long_name='{long_name}', "
"mass={mass}, series='{series}')")
def __init__(self, protons: int, symbol: str,
long_name: str, mass: float, series: str):
# build self.__dict__
self.protons = protons
self.symbol = symbol
self.long_name = long_name
self.__dict__['mass'] = mass # same idea
self.series = series
def __getitem__(self, idx): # simulates collections.namedtuple behavior
return self.__dict__[self.fields.split()[idx]]
def __repr__(self):
return self.repstr.format(**self.__dict__)
Atom = Element # synonyms
import unittest
class Test_Element(unittest.TestCase):
def test_instance(self):
lithium = Atom(3, "Li", "Lithium", 6.941, "Alkali metal")
self.assertEqual(lithium.protons, 3, "Houston, we have a problem")
a = Test_Element() # the test suite
suite = unittest.TestLoader().loadTestsFromModule(a) # fancy boilerplate
unittest.TextTestRunner().run(suite) # run the test suite
class ElementEncoder(json.JSONEncoder):
"""See: https://docs.python.org/3.5/library/json.html"""
def default(self, obj):
if isinstance(obj, Element): # how to encode an Element
return [obj.protons, obj.symbol, obj.long_name, obj.mass, obj.series]
return json.JSONEncoder.default(self, obj) # just do your usual
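# A short usage sketch for the custom encoder; the lithium Atom below is only an
# illustrative value, not part of the original lesson code.
lithium = Atom(3, "Li", "Lithium", 6.941, "Alkali metal")
print(json.dumps(lithium, cls=ElementEncoder))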
# Element = namedtuple("Atom", "protons abbrev long_name mass")
def load_elements():
global all_elements # <--- will be visible to entire module
try:
the_file = "/Users/kurner/Documents/classroom_labs/periodic_table.json"
f = open(the_file, "r") # <--- open the_file instead
except IOError:
print("Sorry, no such file!")
else:
the_dict = json.load(f)
f.close()
all_elements = {}
for symbol, data in the_dict.items():
all_elements[symbol] = Atom(*data) # "explode" data into 5 inputs
print("File:", the_file, 'loaded.')
load_elements() # actually do it
def print_periodic_table(sortby=1):
"""
sort all_elements by number of protons, ordered_elements local only
What about series?
Sort Order:
1. protons
2. symbol
3. series
"""
print("Selected:", sortby)
if sortby == 1:
ordered_elements = sorted(all_elements.values(), key = lambda k: k.protons)
elif sortby == 2:
ordered_elements = sorted(all_elements.values(), key = lambda k: k.symbol)
elif sortby == 3:
ordered_elements = sorted(all_elements.values(), key = lambda k: k.series)
print("PERIODIC TABLE OF THE ELEMENTS")
print("-" * 70)
print("Symbol |Long Name |Protons |Mass |Series " )
print("-" * 70)
for the_atom in ordered_elements:
print("{:6} | {:20} | {:6} | {:5.2f} | {:15}".format(the_atom.symbol,
the_atom.long_name,
the_atom.protons,
the_atom.mass,
the_atom.series))
print_periodic_table() # do it for real
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Python for STEM Teachers, Oregon Curriculum Network
Step3: graphic by Kenneth Snelson
|
145 | <ASSISTANT_TASK:>
Python Code:
%%capture
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip
!unzip Data.zip -d ../
!unzip images.zip -d ./
!unzip Extra_Material.zip -d ../
!rm Data.zip
!rm Extra_Material.zip
!rm images.zip
a_set = {1, 2, 3}
a_set
empty_set = set() # you have to use set() to create an empty set! (we will see why later)
print(empty_set)
a_set = {1, 2, 1, 1}
print(a_set)
a_set = {1, 3, 2}
print(a_set)
{1, 2, 3} == {2, 3, 1}
a_set = {1, 'a'}
print(a_set)
a_set = {1, []}
a_set = set()
a_set.add(1)
print(a_set)
a_set = set()
a_set = a_set.add(1)
print(a_set)
dir(set)
help(set.union)
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
the_union = set1.union(set2)
print(the_union)
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
set3 = {5, 6, 7, 8, 9}
the_union = set1.union(set2, set3)
print(the_union)
help(set.intersection)
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
the_intersection = set1.intersection(set2)
print(the_intersection)
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
set3 = {5, 8, 9, 10}
the_intersection = set1.intersection(set2, set3)
print(the_intersection)
a_set = set()
a_set.add(1)
a_set.add(2)
a_set[0]
nums = {3, 41, 12, 9, 74, 15}
print(len(nums)) # number of items in a set
print(max(nums)) # highest value in a set
print(min(nums)) # lowest value in a set
print(sum(nums)) # sum of all values in a set
set_a = {1, 2, 3}
set_b = {4, 5, 6}
an_element = 4
print(set_a)
#do some operations
set_a.add(an_element) # Add an_element to set_a
print(set_a)
set_a.update(set_b) # Add the elements of set_b to set_a
print(set_a)
set_a.pop() # Remove and return an arbitrary set element. How does this compare to the list method pop?
print(set_a)
set_a.remove(an_element) # Remove an_element from set_a
print(set_a)
dir(set)
set_1 = {'just', 'some', 'words'}
set_2 = {'some', 'other', 'words'}
# your code here
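# The exercise prompt itself is not included here, so the lines below are only an
# illustrative sketch of the set methods covered above:
print(set_1.union(set_2)) # all words that occur in either set
print(set_1.intersection(set_2)) # words the two sets share
print(set_1.difference(set_2)) # words that only occur in set_1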
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Chapter 7 - Sets
Step2: Curly brackets surround sets, and commas separate the elements in the set
Step3: Please note that sets are unordered. This means that it can occur that if you print a set, it looks different than how you created it
Step4: This also means that you can check if two sets are the same even if you don't know the order in which items were put in
Step5: Please note that sets can only contain immutable objects. Hence the following examples will work, since we are adding immutable objects
Step6: But the following example will result in an error, since we are trying to create a set with a mutable object
Step7: 2. How to add items to a set
Step8: 3. How to extract/inspect items in a set
Step9: You observe that there are many methods defined for sets! Here we explain the two most common methods. We start with the union method.
Step10: Python shows dots (...) for the parameters of the union method. Based on the docstring, we learn that we can provide any number of sets, and Python will return the union of them.
Step11: The intersection method has works in a similar manner as the union method, but returns a new set containing only the intersection of the sets.
Step12: Since sets are unordered, you can not use an index to extract an element from a set.
Step13: 4. Using built-in functions on sets
Step14: 5. An overview of set operations
Step15: Before diving into some exercises, you may want to the dir built-in function again to see an overview of all set methods
Step16: Exercises
|
146 | <ASSISTANT_TASK:>
Python Code:
print("Exemplo 4.11")
#Superposicao
#Analise Fonte de Tensao
#Req1 = 4 + 8 + 8 = 20
#i1 = 12/20 = 3/5 A
#Analise Fonte de Corrente
#i2 = 2*4/(4 + 8 + 8) = 8/20 = 2/5 A
#in = i1 + i2 = 1A
In = 1
#Req2 = paralelo entre Req 1 e 5
#20*5/(20 + 5) = 100/25 = 4
Rn = 4
print("Corrente In:",In,"A")
print("Resistência Rn:",Rn)
print("Problema Prático 4.11")
#Analise Vs
#i1 = 15/(3 + 3) = 15/6 A
#Analise Cs
#i2 = 4*3/(3 + 3) = 2 A
#in = i1 + i2 = 15/6 + 2 = 27/6 = 4.5
In = 4.5
#Rn = 6*6/(6 + 6) = 3
Rn = 3
print("Corrente In:",In,"A")
print("Resistência Rn:",Rn)
print("Exemplo 4.12")
#Aplica-se tensao Vo = 1V entre os terminais a-b
#Assim Rth = Rn = 5
Rn = 5
#Analise Nodal
#ix = 10/4 = 2.5 A
#i1 = 10/5 = 2 A
#in = 2ix + i1 = 5 + 2 = 7 A
In = 7
print("Corrente In:",In,"A")
print("Resistência Rn:",Rn)
print("Problema Prático 4.12")
#Aplica-se Vo = 2V entre os terminais a-b
#Assim Vx = 2V
#Analise Nodal
#V1 = tensao sobre resistor 6 = 3Vx = 6V
#i1 = 1 A
#i2 = 2/2 = 1 A
#io = i1 + i2 = 2 A
#Rn = Vo/io = 2/2 = 1
Rn = 1
#In = 10 = corrente de curto circuito
In = 10
print("Resistência Rn:",Rn)
print("Corrente In:",In,"A")
print("Exemplo 4.13")
#Req1 = 6*12/(6 + 12) = 4
#Rn = 4 + 3 + 2 = 9
Rn = 9
#Superposicao
#Fonte de Corrente
#i1 = 2*7/(7 + 2) = 14/9
#Fonte de Tensao
#Req2 = 12*5/(12 + 5) = 60/17
#Req3 = 6 + 60/17 = 162/17
#it = 12/(162/17) = 12*17/162
#i2 = it*12/(12 + 5) = 8/9
#in = i1 + i2 = 14/9 + 8/9 = 22/9
In = 22/9
P = (Rn/4)*In**2
print("Corrente In:",In,"A")
print("Potência Máxima Transferida:",P,"W")
print("Problema Prático 4.13")
import numpy as np
#Analise In
#vx = 2i1
#vx + (i1 - i2) + 3vx = 9
#i1 - i2 + 4vx = 9
#9i1 - i2 = 9
#(i2 - i1) + 4i2 = 3vx
#-i1 + 5i2 = 6i1
#-7i1 + 5i2 = 0
coef = np.matrix("9 -1;-7 5")
res = np.matrix("9;0")
I = np.linalg.inv(coef)*res
In = -I[1]
#Analise Rn
#io = 1 A
#vx = 2i1
#vx + (i1 + io) + 3vx = 0
#i1 + 4vx = -1
#i1 + 8i1 = -1
#i1 = -1/9
#vx = -2/9
#Vab = 4io + (io + i1) + 3vx
#Vab = 4 + 1 -1/9 -6/9 = 38/9
#Rn = Vab/io = 38/9
Rn = 38/9
P = (Rn/4) * In**2
print("Resistencia Rl para potência Maxima:",Rn)
print("Potencia maxima:",float(P),"W")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Practice Problem 4.11
Step2: Example 4.12
Step3: Practice Problem 4.12
Step4: Maximum Power Transfer
Step5: Practice Problem 4.13
|
147 | <ASSISTANT_TASK:>
Python Code:
# load json twitter data
twitter_json = r'data/twitter_01_20_17_to_3-2-18.json'
# read the json data into a pandas dataframe
tweet_data = pd.read_json(twitter_json)
# set column 'created_at' to the index
tweet_data.set_index('created_at', drop=True, inplace= True)
# convert timestamp index to a datetime index
tweet_data.index = pd.to_datetime(tweet_data.index)
# function to identify hash tags
def hash_tag(text):
return re.findall(r'(#[^\s]+)', text)
# function to identify @mentions
def at_tag(text):
return re.findall(r'(@[A-Za-z_]+)[^s]', text)
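# Quick illustrative check of the two helpers on a made-up string (not real tweet data):
sample = "great #rally tonight with @FoxNews "
print(hash_tag(sample)) # expected: ['#rally']
print(at_tag(sample)) # expected: ['@FoxNews']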
# tokenize all the tweet's text
tweet_data['text_tokenized'] = tweet_data['text'].apply(lambda x: word_tokenize(x.lower()))
# apply hash tag function to text column
tweet_data['hash_tags'] = tweet_data['text'].apply(lambda x: hash_tag(x))
# apply at_tag function to text column
tweet_data['@_tags'] = tweet_data['text'].apply(lambda x: at_tag(x))
# pickle data
tweet_pickle_path = r'data/twitter_01_20_17_to_3-2-18.pickle'
tweet_data.to_pickle(tweet_pickle_path)
# Define the 2017 and 2018 url that contains all of the Executive Office of the President's published documents
executive_office_url_2017 = r'https://www.federalregister.gov/index/2017/executive-office-of-the-president'
executive_office_url_2018 = r'https://www.federalregister.gov/index/2018/executive-office-of-the-president'
# scrape all urls for pdf documents published in 2017 and 2018 by the U.S.A. Executive Office
pdf_urls= []
for url in [executive_office_url_2017,executive_office_url_2018]:
response = requests.get(url)
pattern = re.compile(r'https:.*\.pdf')
pdfs = re.findall(pattern, response.text)
pdf_urls.append(pdfs)
# writes all of the pdfs to the data folder
start = 'data/'
end = '.pdf'
num = 0
for i in range(0,(len(pdf_urls))):
for url in pdf_urls[i]:
ver = str(num)
pdf_path = start + ver + end
r = requests.get(url)
file = open(pdf_path, 'wb')
file.write(r.content)
file.close()
num = num + 1
# function to convert pdf to text from stack overflow (https://stackoverflow.com/questions/26494211/extracting-text-from-a-pdf-file-using-pdfminer-in-python/44476759#44476759)
def convert_pdf_to_txt(path):
rsrcmgr = PDFResourceManager()
retstr = io.StringIO()
codec = 'utf-8'
laparams = LAParams()
device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams)
fp = open(path, 'rb')
interpreter = PDFPageInterpreter(rsrcmgr, device)
password = ""
maxpages = 0
caching = True
pagenos = set()
for page in PDFPage.get_pages(fp, pagenos, maxpages=maxpages,
password=password,
caching=caching,
check_extractable=True):
interpreter.process_page(page)
text = retstr.getvalue()
fp.close()
device.close()
retstr.close()
return text
# finds the first time the name of a day appears in the txt, and returns that name
def find_day(word_generator):
day_list = ['Monday,', 'Tuesday,', 'Wednesday,', 'Thursday,', 'Friday,', 'Saturday,', 'Sunday,']
day_name_dict = {'Mon':'Monday,', 'Tue':'Tuesday,','Wed':'Wednesday,','Thu':'Thursday,','Fri':'Friday,','Sat':'Saturday,','Sun':'Sunday,'}
day_name = []
for val in word_generator:
if val in day_list:
num_position = txt.index(val)
day_name.append(txt[num_position] + txt[num_position + 1] + txt[num_position +2])
break
return day_name_dict[day_name[0]]
# takes text and returns the first date in the document
def extract_date(txt):
word_generator = (word for word in txt.split())
day_name = find_day(word_generator)
txt_start = int(txt.index(day_name))
txt_end = txt_start + 40
date_txt = txt[txt_start:txt_end].replace('\n','')
cleaned_txt = re.findall('.* \d{4}', date_txt)
date_list = cleaned_txt[0].split()
clean_date_list = map(lambda x:x.strip(","), date_list)
clean_date_string = ", ".join(clean_date_list)
date_obj = datetime.strptime(clean_date_string, '%A, %B, %d, %Y')
return date_obj
start_path = r'data/'
end_path = '.pdf'
data_dict = defaultdict(list)
for i in range(0,270):
file_path = start_path + str(i) + end_path
txt = convert_pdf_to_txt(file_path)
date_obj = extract_date(txt)
data_dict[date_obj].append(txt)
tuple_lst = []
for k, v in data_dict.items():
if v != None:
for text in v:
tuple_lst.append((k, text))
# create dataframe from list of tuples
fed_reg_dataframe = pd.DataFrame.from_records(tuple_lst, columns=['date','str_text'], index = 'date')
# tokenize all the pdf text
fed_reg_dataframe['token_text'] = fed_reg_dataframe['str_text'].apply(lambda x: word_tokenize(x.lower()))
# final dataframe
final_df = fed_reg_dataframe[fed_reg_dataframe.index > '2017-01-20']
# pickle final data
fed_reg_data = r'data/fed_reg_data.pickle'
final_df.to_pickle(fed_reg_data)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using Pandas I will read the twitter json file, convert it to a dataframe, set the index to 'created at' as datetime objects, then write it to a csv
Step2: The next step is to add columns with tokenized text and identify twitter specific puncutiations like hashtags and @ mentions
Step3: Scrape Data from the Federal Register
Step4: Create dataframe with the date the pdf was published and the text of each pdf
Step5: Create a dictionary using DefaultDict where the date of publication is the key, and the text of the pdf is the value.
Step6: Create a list of tuples, where the date is the first entry and the text of a pdf is the second entry, skipping over any values of None
Step7: Pickle the dataframe, so that you only need to process the text once
|
148 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
data = pd.read_csv('https://s3-ap-southeast-1.amazonaws.com/intro-to-ml-minhdh/EcommercePurchases.csv')
data.head()
data.shape
data["Purchase Price"].mean()
data["Purchase Price"].max()
data["Purchase Price"].min()
data[data['Language'] == 'en'].count()[0]
data[data['Job'] == 'Lawyer'].count()[0]
data['AM or PM'].value_counts()
data['Job'].value_counts().head()
data['Purchase Price'][data['Lot'] == '90 WT']
data['Email'][data['Credit Card'] == 4926535242672853]
data2 = data[data['Purchase Price'] > 95]
data2[data2['CC Provider'] == 'American Express'].count()[0]
data[data['CC Exp Date'].str.contains('/25')].shape[0]
# The original line `data[data['Email'].split('@')]` fails because .split is a string
# method, not a Series method; the likely intent (top 5 email providers/hosts) can be
# computed with .apply instead:
data['Email'].apply(lambda x: x.split('@')[1]).value_counts().head()
sns.distplot(data['Purchase Price'])
sns.countplot(data['Language'])
# Feel free to plot more graphs to dive deeper into the dataset.
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Check the head of the DataFrame.
Step2: How many rows and columns are there?
Step3: What is the average Purchase Price?
Step4: What were the highest and lowest purchase prices?
Step5: How many people have English 'en' as their Language of choice on the website?
Step6: How many people have the job title of "Lawyer" ?
Step7: How many people made the purchase during the AM and how many people made the purchase during PM ?
Step8: What are the 5 most common Job Titles?
Step9: Someone made a purchase that came from Lot
Step10: What is the email of the person with the following Credit Card Number
Step11: How many people have American Express as their Credit Card Provider and made a purchase above $95 ?
Step12: Hard
Step13: Hard
Step14: Data Visualization
Step15: Implement countplot on Language
|
149 | <ASSISTANT_TASK:>
Python Code:
import toytree
import itertools
import numpy as np
t0 = toytree.rtree.unittree(10, seed=0)
t1 = toytree.rtree.unittree(10, seed=1)
toytree.mtree([t0, t1]).draw(ts='p', height=200);
t0.draw(
ts='p',
node_colors="lightgrey",
edge_widths=3,
edge_colors=t0.get_edge_values_mapped(
{11: 'red', 3: 'pink', 4: 'blue', 18: 'aqua', 12: 'black'},
),
);
# focal node
nidx = 12
# get all tips as a set
fullset = set(i for i in t0.get_tip_labels())
# get tips from each child of a given node
down0 = set(t0.idx_dict[nidx].children[0].get_leaf_names())
down1 = set(t0.idx_dict[nidx].children[1].get_leaf_names())
up0 = set(t0.idx_dict[nidx].up.get_leaf_names()) - down0 - down1
up1 = fullset - down0 - down1 - up0
print(down0)
print(down1)
print(up0)
print(up1)
set(itertools.product(down0, down1, up0, up1))
def get_quartets(ttre):
# store all quartets in this SET
qset = set([])
# get a SET with all tips in the tree
fullset = set(ttre.get_tip_labels())
# get a SET of the descendants from each internal node
for node in ttre.idx_dict.values():
# skip leaf nodes
if not node.is_leaf():
children = set(node.get_leaf_names())
prod = itertools.product(
itertools.combinations(children, 2),
itertools.combinations(fullset - children, 2),
)
quartets = set([tuple(itertools.chain(*i)) for i in prod])
qset = qset.union(quartets)
# order tups in sets
sorted_set = set()
for qs in qset:
if np.argmin(qs) > 1:
tup = tuple(sorted(qs[2:]) + sorted(qs[:2]))
sorted_set.add(tup)
else:
tup = tuple(sorted(qs[:2]) + sorted(qs[2:]))
sorted_set.add(tup)
return sorted_set
get_quartets(t1)
q0 = get_quartets(t0)
q1 = get_quartets(t1)
# quartets that are in one tree but not the other
q0.symmetric_difference(q1)
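# A hedged follow-up: normalising the size of the symmetric difference by the total
# number of distinct quartets gives a simple quartet-distance style score between the
# two trees (this particular metric is an illustration, not part of the original).
shared = q0 & q1
distance = len(q0 ^ q1) / len(q0 | q1)
print(len(shared), "shared quartets; normalised difference =", round(distance, 3))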
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: get two random trees
Step2: Plan for counting quartets (Illustrated below)
Step3: Example to sample tips from each quartet edge
Step4: Example to get all quartet sets from sampled tips
Step5: Combine into a function
Step6: Compare quartet sets
|
150 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
pd?
pd.Categorical
cdr = pd.read_csv('data/CDR_data.csv')
cdr.head()
cdr.info()
len(cdr)
cdr.CallTimestamp = pd.to_datetime(cdr.CallTimestamp)
cdr.Duration = pd.to_timedelta(cdr.Duration)
cdr.info()
cdr.Duration.mean()
phone_owners = pd.read_excel("data/phoneowners.xlsx")
phone_owners.head()
cdr_names = pd.merge(cdr, phone_owners, left_on='In', right_on='number')
cdr_names[['In', 'number', 'name']].head()
ist_john_doe = cdr_names.name == "John Doe"
ist_john_doe.head()
john_doe = cdr_names[ist_john_doe]
john_doe.head()
towers = pd.read_csv("data/towers.csv", index_col=0)
towers.head()
john_doe_towers = john_doe.join(towers, on='TowerID')
john_doe_towers.head()
import matplotlib.pyplot as plt
import matplotlib
# tells Jupyter to render plot output directly inside the notebook
%matplotlib inline
# makes the plots look nicer
matplotlib.style.use('ggplot')
ax = john_doe_towers.plot.scatter(
x='TowerLon',
y='TowerLat',
alpha=0.1,
title='Call locations',
s = 100)
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters = 2)
data = pd.concat([john_doe_towers.TowerLon, john_doe_towers.TowerLat], axis = 1)
labels = kmeans.fit_predict(data)
centroids = kmeans.cluster_centers_
ax.scatter(x = centroids[:, 0], y = centroids[:, 1], c = 'r', marker = 'x', s = 100)
ax.figure
#showandtell() # TODO: Comment this line out when you're ready to proceed
centroids
joined['CallTimestamp'] = pd.to_datetime(joined['CallTimestamp'])
weekdays = joined['CallTimestamp'].dt.dayofweek.isin(Workweek) & joined['CallTimestamp'].dt.hour.isin(range(8,18))
dfweekdays = joined[weekdays]
dfweekdays.head()
targetname = 'John Doe'
user1 = joined[joined['Name'] == targetname]
#user1 = user1[weekdays]
user1 = user1.reset_index(drop=True)
user1.head()
#
# INFO: The locations map above should be too "busy" to really wrap your head around. This is where domain expertise comes into play.
# Your intuition tells you that people are likely to behave differently on weekends:
#
# On Weekends:
# 1. People probably don't go into work
# 2. They probably sleep in late on Saturday
# 3. They probably run a bunch of random errands, since they couldn't during the week
# 4. They should be home, at least during the very late hours, e.g. 1-4 AM
#
# On Weekdays:
# 1. People probably are at work during normal working hours
# 2. They probably are at home in the early morning and during the late night
# 3. They probably spend time commuting between work and home everyday
#
# TODO: Add more filters to the user1 slice you created. Add bitwise logic so that you're only examining records that came in on
# weekends (sat/sun).
#
# .. your code here ..
user1['DOW'] = user1.CallTimestamp.dt.strftime("%a")
user1 = user1[(user1.DOW == 'Sat') | (user1.DOW == 'Sun')]
user1.head()
#
# TODO: Further filter it down for calls that are came in either before 6AM OR after 10pm (22:00:00). You can use < and > to compare
# the string times, just make sure you code them as military time strings, eg: "06:00:00", "22:00:00":
# https://en.wikipedia.org/wiki/24-hour_clock
#
# You might also want to review the Data Manipulation section for this. Once you have your filtered slice, print out its length:
#
# .. your code here ..
user1 = user1[(user1.CallTimestamp < "06:00:00") | (user1.CallTimestamp > "22:00:00")]
user1.head()
#
# INFO: Visualize the dataframe with a scatter plot as a sanity check. Since you're familiar with maps, you know well that your
# X-Coordinate should be Longitude, and your Y coordinate should be the tower Latitude. Check the dataset headers for proper column
# feature names.
# https://en.wikipedia.org/wiki/Geographic_coordinate_system#Geographic_latitude_and_longitude
#
# At this point, you don't yet know exactly where the user is located just based off the cell phone tower position data; but
# considering the below are for Calls that arrived in the twilight hours of weekends, it's likely that wherever they are bunched up
# is probably near the caller's residence:
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(user1.TowerLon,user1.TowerLat, c='g', marker='o', alpha=0.2)
ax.set_title('Weekend Calls (<6am or >10p)')
#showandtell() # TODO: Comment this line out when you're ready to proceed
#
# TODO: Run K-Means with a K=1. There really should only be a single area of concentration. If you notice multiple areas that are
# "hot" (multiple areas the usr spends a lot of time at that are FAR apart from one another), then increase K=2, with the goal being
# that one of the centroids will sweep up the annoying outliers; and the other will zero in on the user's approximate home location.
# Or rather the location of the cell tower closest to their home.....
#
# Be sure to only feed in Lat and Lon coordinates to the KMeans algo, since none of the other data is suitable for your purposes.
# Since both Lat and Lon are (approximately) on the same scale, no feature scaling is required. Print out the centroid locations and
# add them onto your scatter plot. Use a distinguishable marker and color.
#
# Hint: Make sure you graph the CORRECT coordinates. This is part of your domain expertise.
#
# .. your code here ..
#coordinates = "" + centroids[0][1].to_string().split('.')[0] + "°" + centroids[0][1].split('.')[1][0] + centroids[0][1].split('.')[1][1] + centroids[0][0]
#str.split(' ', 1 )
centroids
difference1 = centroids[0][1] - centroids[1][1]
difference2 = centroids[0][0] - centroids[1][0]
difference1 = 0.5 * difference1
difference2 = 0.5 * difference2
coordinate1 = centroids[0][1] + difference1
coordinate2 = centroids[0][0] + difference2
coordinates = str(coordinate1) + " " + str(coordinate2)
coordinates
#-96°90'92.4672"N 96°56'57.3"W
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Interactive help
Step2: We can explore the rest of the functionality of the Pandas library by looking at the methods Pandas offers. To do this we type pd. and use Jupyter's built-in autocompletion via the Tab key to see which methods Pandas provides. If we then move down with the arrow key to e.g. Categorical, press Enter and then Shift+Tab, the signature of the corresponding functionality and an excerpt of the help documentation appear. Pressing Shift+Tab twice shows the full help text.
Step3: Loading data
Step4: Next, we use the info() method to look at what Pandas has now written into the cdr variable.
Step5: DataFrame
Step6: We also see information about the data types in the DataFrame. The first two columns, In and Out, are of type int64, i.e. integers stored in 64 bits (so they can hold very large numbers). The four other columns are of type object, which in our case means that for now we are dealing with plain character strings, i.e. text.
Step7: Now we can compute the mean of the Duration column with mean().
Step8: We see that the average call duration is about six minutes.
Step9: We now want to append the columns of this file to the cdr DataFrame above. For this we use the function pd.merge, which allows us to combine arbitrary columns from different DataFrames.
Step10: Since we now know which phone number belongs to John Doe, we can find the corresponding records and ignore all the others. To do this we select the matching records using the selector notation.
Step11: We can now put the helper variable from above into the selector and obtain a DataFrame that contains only John Doe's calls. We store the result in the variable john_doe.
Step12: In the next step we want to know where John Doe makes his calls. Thanks to the TowerId we know which cell tower John Doe was logged into when a call took place. The file data/towers.csb additionally records the geographic coordinates of each cell tower, stored per TowerId as latitude and longitude. First we load the corresponding CSV file with read_csv(). We also pass index_col=0 to use the first column of the DataFrame as the index.
Step13: To combine this data with our existing CDR DataFrame we can use the join() method this time, because we created an index on the TowerID column of the towers DataFrame. The join() method can merge the data on this basis. We only need to specify which column of the john_doe DataFrame corresponds to the index column of the other DataFrame. We store the result in the variable john_doe_towers.
Step14: Now we can hardly wait to see where the calls were made. To do this we plot the coordinates of the cell towers with the visualization library "matplotlib".
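The plotting step itself is not shown in this excerpt, so here is a minimal editor's sketch of what it might look like. It assumes the merged john_doe_towers DataFrame from Step 13 and that the tower coordinates are stored in columns named 'Lat' and 'Lon' (the real column names may differ):
%matplotlib inline
import matplotlib.pyplot as plt
# Editor's sketch: scatter the cell-tower coordinates of John Doe's calls.
ax = john_doe_towers.plot.scatter(x='Lon', y='Lat')
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
plt.show()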
|
151 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
help(tf.layers.dense)
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
img_dim = 784
inputs_ = tf.placeholder(tf.float32, shape=[None, img_dim], name='inputs')
targets_ = tf.placeholder(tf.float32, shape=[None, img_dim], name='targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, img_dim)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits)
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer().minimize(cost)
# Create the session
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Step5: Checking out the results
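As a side note (editor's sketch, not part of the original notebook), the same single-hidden-layer autoencoder can be written more compactly with tf.keras; the layer sizes follow the description above:
import tensorflow as tf
# Editor's sketch: equivalent architecture in tf.keras.
inputs = tf.keras.Input(shape=(784,))
encoded = tf.keras.layers.Dense(32, activation='relu')(inputs)       # compressed representation
decoded = tf.keras.layers.Dense(784, activation='sigmoid')(encoded)  # reconstruction in [0, 1]
autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# autoencoder.fit(x_train, x_train, epochs=20, batch_size=200)  # x_train: flattened MNIST images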
|
152 | <ASSISTANT_TASK:>
Python Code:
import datashader as ds
import datashader.transfer_functions as tf
import dask.dataframe as dd
import numpy as np
%%time
#df = dd.from_castra('data/census.castra')
df = dd.read_hdf('data/census.hdf', key='census')
#df = df.cache(cache=dict)
import warnings
warnings.filterwarnings('ignore', category=DeprecationWarning, message='.*use @default decorator instead.*')
df.tail()
USA = ((-13884029, -7453304), (2698291, 6455972))
LakeMichigan = ((-10206131, -9348029), (4975642, 5477059))
Chicago = (( -9828281, -9717659), (5096658, 5161298))
Chinatown = (( -9759210, -9754583), (5137122, 5139825))
NewYorkCity = (( -8280656, -8175066), (4940514, 4998954))
LosAngeles = ((-13195052, -13114944), (3979242, 4023720))
Houston = ((-10692703, -10539441), (3432521, 3517616))
Austin = ((-10898752, -10855820), (3525750, 3550837))
NewOrleans = ((-10059963, -10006348), (3480787, 3510555))
Atlanta = (( -9448349, -9354773), (3955797, 4007753))
x_range,y_range = USA
plot_width = int(1000)
plot_height = int(plot_width*7.0/12)
black_background = True
from IPython.core.display import HTML, display
display(HTML("<style>.container { width:100% !important; }</style>"))
def export(img,filename,fmt=".png",_return=True):
"""Given a datashader Image object, saves it to a disk file in the requested format."""
if black_background: # Optional; removes transparency to force background for exported images
img=tf.set_background(img,"black")
img.to_pil().save(filename+fmt)
return img if _return else None
def cm(base_colormap, start=0, end=1.0, reverse=not black_background):
"""Given a colormap in the form of a list, such as a Bokeh palette,
return a version of the colormap reversed if requested, and selecting
a subset (on a scale 0,1.0) of the elements in the colormap list.
For instance:
>>> cmap = ["#000000", "#969696", "#d9d9d9", "#ffffff"]
>>> cm(cmap,reverse=True)
['#ffffff', '#d9d9d9', '#969696', '#000000']
>>> cm(cmap,0.3,reverse=True)
['#d9d9d9', '#969696', '#000000']
"""
full = list(reversed(base_colormap)) if reverse else base_colormap
num = len(full)
return full[int(start*num):int(end*num)]
from datashader.colors import Greys9, Hot, viridis, inferno
%%time
cvs = ds.Canvas(plot_width, plot_height, *USA)
agg = cvs.points(df, 'meterswest', 'metersnorth')
export(tf.interpolate(agg, cmap = cm(Greys9), how='linear'),"census_gray_linear")
export(tf.interpolate(agg, cmap = cm(Greys9,0.25), how='linear'),"census_gray_linear")
export(tf.interpolate(agg, cmap = cm(Greys9,0.2), how='log'),"census_gray_log")
export(tf.interpolate(agg, cmap = cm(Greys9,0.2), how='eq_hist'),"census_gray_eq_hist")
print(cm(Hot,0.2))
export(tf.interpolate(agg, cmap = cm(Hot,0.2), how='eq_hist'),"census_ds_hot_eq_hist")
from bokeh.palettes import PuRd9
export(tf.interpolate(agg, cmap=cm(PuRd9), how='eq_hist'),"census_inferno_eq_hist")
export(tf.interpolate(agg, cmap=cm(viridis), how='eq_hist'),"census_viridis_eq_hist.png")
grays2 = cm([(i,i,i) for i in np.linspace(0,255,99)])
grays2 += ["red"]
export(tf.interpolate(agg, cmap = grays2, how='eq_hist'),"census_gray_redhot1_eq_hist")
if black_background:
color_key = {'w':'aqua', 'b':'lime', 'a':'red', 'h':'fuchsia', 'o':'yellow' }
else: color_key = {'w':'blue', 'b':'green', 'a':'red', 'h':'orange', 'o':'saddlebrown'}
def create_image(x_range, y_range, w=plot_width, h=plot_height):
cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
agg = cvs.points(df, 'meterswest', 'metersnorth', ds.count_cat('race'))
img = tf.colorize(agg, color_key, how='eq_hist')
return img
export(create_image(*USA),"Zoom 0 - USA")
export(create_image(*LakeMichigan),"Zoom 1 - Lake Michigan")
export(create_image(*Chicago),"Zoom 2 - Chicago")
export(tf.spread(create_image(*Chinatown),px=int(plot_width/400)),"Zoom 3 - Chinatown")
mask = np.array([[1, 1, 1, 1, 1],
[1, 0, 0, 0, 1],
[1, 0, 0, 0, 1],
[1, 0, 0, 0, 1],
[1, 1, 1, 1, 1]])
export(tf.spread(create_image(*Chinatown), mask=mask),"Chinatown outlines")
export(create_image(*NewYorkCity),"NYC")
export(create_image(*LosAngeles),"LosAngeles")
export(create_image(*Houston),"Houston")
export(create_image(*Atlanta),"Atlanta")
export(create_image(*NewOrleans),"NewOrleans")
export(create_image(*Austin),"Austin")
cvs = ds.Canvas(plot_width=plot_width, plot_height=plot_height)
agg = cvs.points(df, 'meterswest', 'metersnorth', ds.count_cat('race'))
export(tf.interpolate(agg.sel(race='b'), cmap=cm(Greys9,0.25), how='eq_hist'),"USA blacks")
agg2 = agg.where((agg.sel(race=['w', 'b', 'a', 'h']) > 0).all(dim='race')).fillna(0)
export(tf.colorize(agg2, color_key, how='eq_hist'),"USA all")
export(tf.colorize(agg.where(agg.sel(race='w') < agg.sel(race='b')).fillna(0), color_key, how='eq_hist'),"more_blacks")
x_range
import bokeh.plotting as bp
from bokeh.models.tiles import WMTSTileSource
bp.output_notebook()
def base_plot(tools='pan,wheel_zoom,reset',webgl=False):
p = bp.figure(tools=tools,
plot_width=int(900*1.5), plot_height=int(500*1.5),
x_range=x_range, y_range=y_range, outline_line_color=None,
min_border=0, min_border_left=0, min_border_right=0,
min_border_top=0, min_border_bottom=0, webgl=webgl)
p.axis.visible = False
p.xgrid.grid_line_color = None
p.ygrid.grid_line_color = None
p.responsive = True
return p
from datashader.callbacks import InteractiveImage
def image_callback(x_range, y_range, w, h):
cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
agg = cvs.points(df, 'meterswest', 'metersnorth', ds.count_cat('race'))
img = tf.colorize(agg, color_key, 'log')
return tf.dynspread(img,threshold=0.75, max_px=8)
p = base_plot()
url="http://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{Z}/{Y}/{X}.png"
#url="http://tile.stamen.com/toner-background/{Z}/{X}/{Y}.png"
tile_renderer = p.add_tile(WMTSTileSource(url=url))
tile_renderer.alpha=1.0 if black_background else 0.15
InteractiveImage(p, image_callback, throttle=1000)
def image_callback2(x_range, y_range, w, h):
cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
agg = cvs.points(df, 'meterswest', 'metersnorth')
img = tf.interpolate(agg, cmap = list(reversed(Greys9)))
return tf.dynspread(img,threshold=0.75, max_px=8)
p = base_plot()
#InteractiveImage(p, image_callback2, throttle=1000)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: The output of .tail() shows that there are more than 300 million datapoints (one per person), each with a location in Web Mercator format, and that the race for each datapoint has been encoded as a single character (where 'w' is white, 'b' is black, 'a' is Asian, 'h' is Hispanic, and 'o' is other (typically Native American)).
Step3: Let's also choose a background color for our results. A black background makes bright colors more vivid, and works well when later adding relatively dark satellite image backgrounds, but white backgrounds are good for examining the weakest patterns, and work well when adding vector-based maps with light colors. Try it both ways and decide for yourself!
Step6: We'll also need some utility functions
Step7: Population density
Step8: Computing this aggregate grid will take some CPU power (5 seconds on a MacBook Pro), because datashader has to iterate through the entire dataset, with hundreds of millions of points. Once the agg array has been computed, subsequent processing will now be nearly instantaneous, because there are far fewer pixels on a screen than points in the original database.
Step9: ...almost nothing. If you know what to look for, you can see hotspots (high population densities) in New York City, Los Angeles, Chicago, and a few other places. More hotspots can dimly be seen when using a white background than with a black, on most monitors, though they are very dim either way. In any case, for feeding 300 million points in, we're getting almost nothing back in terms of visualization.
Step10: The above plot reveals at least that data has been measured only within the political boundaries of the continental United States, and also that many areas in the West are so poorly populated that many pixels contained not even a single person. (In datashader images, the background color is shown for pixels that have no data at all, using the alpha channel of a PNG image, while the colormap range is shown for pixels that do have data.) Some additional population centers are now visible, at least on some monitors. But mainly what the above plot indicates is that population in the USA is extremely non-uniformly distributed, with hotspots in a few regions, and nearly all other pixels having much, much lower (but nonzero) values. Again, that's not much information to be getting out of 300 million datapoints!
Step11: Suddenly, we can see an amazing amount of structure! There are clearly meaningful patterns at nearly every location, ranging from the geographic variations in the mountainous West, to the densely spaced urban centers in New England, and the many towns stretched out along roadsides in the midwest (especially those leading to Denver, the hot spot towards the right of the Rocky Mountains).
Step12: (Histogram equalization also works for non-integer data, but in that case it will use a finite set of bins to divide the interval between the minimum and maximum values, and will thus not be able to normalize the histogram perfectly for highly non-uniform distributions.) Effectively, this transformation converts the data from raw magnitudes, which can easily span a much greater range than the dynamic range visible to the eye, to a rank-order or percentile representation, which reveals density differences at all ranges but obscures the absolute magnitudes involved. In this representation, you can clearly see the effects of geography (rivers, coastlines, and mountains) on the population density, as well as history (denser near the longest-populated areas), and even infrastructure (with many small towns located at crossroads).
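A tiny NumPy illustration of the rank-order idea behind eq_hist (editor's sketch; datashader's actual implementation is more involved):
import numpy as np
counts = np.array([0, 1, 2, 3, 10, 1000])          # wildly non-uniform pixel counts
ranks = np.searchsorted(np.sort(counts), counts)   # rank of each value in the sorted data
normalized = ranks / float(len(counts) - 1)        # rescale ranks to [0, 1]
print(normalized)  # low-end differences stay visible next to the huge outlier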
Step13: You can also import colormaps directly from matplotlib.cm or bokeh.palettes, though only Bokeh palettes will work with the cm() function that lets us switch backgrounds
Step14: So that they will work with cm, we provide Matplotlib's default viridis and inferno colormaps from within datashader
Step15: The above colormap choices are largely a matter of personal preference, though some of them are more perceptually uniform (accurately conveying distance between data values for all colors) and some have higher dynamic ranges than others (allowing more precise differences between data values to be distinguished).
Step16: The above plot now conveys nearly all the information available in the linear plot, i.e. that only a few pixels have the very highest population densities, while also conveying the structure of the data at all population density ranges via histogram equalization.
Step17: We can now aggregate the counts per race into grids, using ds.count_cat, instead of just a single grid with the total counts (via the default aggregate reducer ds.count), and then generate an image by colorizing each pixel using the aggregate information from each category for that pixel's location
Step18: The result shows that the USA is overwhelmingly white, apart from some predominantly Hispanic regions along the Southern border, some regions with high densities of blacks in the Southeast, and a few isolated areas of category "Other" in the West (primarily Native American reservation areas).
Step19: Interestingly, the racial makeup has some sharp boundaries around urban centers, as we can see if we zoom in
Step20: With sufficient zoom, it becomes clear that Chicago (like most large US cities) has both a wide diversity of racial groups, and profound geographic segregation
Step21: Eventually, we can zoom in far enough to see individual datapoints. Here we can see that the Chinatown region of Chicago has, as expected, very high numbers of Asian residents, and that other nearby regions (separated by features like roads and highways) have other races, varying in how uniformly segregated they are
Step22: Note that we've used the tf.spread function to enlarge each point to cover multiple pixels so that each point is clearly visible. Instead of the default circular spreading, you could choose shape='square' if you prefer, or any mask shape, e.g.
Step23: Other cities, for comparison
Step24: Analyzing racial data through visualization
Step25: (Compare to "census_gray_eq_hist" above.)
Step26: In the above plot, the colors still show the racial makeup of each pixel, but the pixels have been filtered so that only those with at least one datapoint from every race are shown.
Step27: Here the colors still show the predominant race in that pixel, which is black for many of these, but in Southern California it looks like there are several large neighborhoods where blacks outnumber whites but both are outnumbered by Hispanics.
Step28: And then create an interactive plot using a callback to a datashader pipeline. In this pipeline, we'll use the tf.dynspread function to automatically increase the plotted size of each datapoint, once you've zoomed in so far that datapoints no longer have nearby neighbors. We'll also add some image tiles in the background, using satellite information by default (or you can uncomment the Stamen tiles line below to use vector-based map data instead)
Step29: Note that you need a live copy of the notebook, with a running server; zooming and panning will be disabled (or only work partially, which can be confusing!) in a static exported copy.
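If only a static rendering is available, one fallback (editor's sketch) is to reuse the export and colorize helpers from the code above instead of InteractiveImage:
# Editor's sketch: static image for exported/offline copies of the notebook.
export(tf.colorize(agg, color_key, how='eq_hist'), "census_static_fallback")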
|
153 | <ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u # units
import numpy as np
logger = phoebe.logger('error')
b = phoebe.default_binary()
b.set_value('ecc', 0.2)
b.set_value('per0', 25)
b.set_value('teff@primary', 7000)
b.set_value('teff@secondary', 6000)
b.set_value('sma@binary', 7)
b.set_value('incl@binary', 80)
b.set_value('q', 0.3)
b.set_value('t0_supconj', 0.1)
b.set_value('requiv@primary', 2.0)
b.set_value('vgamma', 80)
lctimes = phoebe.linspace(0, 10, 1005)
rvtimes = phoebe.linspace(0, 10, 105)
b.add_dataset('lc', compute_times=lctimes)
b.add_dataset('rv', compute_times=rvtimes)
b.add_compute('ellc', compute='fastcompute')
b.set_value_all('ld_mode', 'lookup')
b.run_compute(compute='fastcompute')
fluxes = b.get_value('fluxes@model') + np.random.normal(size=lctimes.shape) * 0.01
fsigmas = np.ones_like(lctimes) * 0.02
rvsA = b.get_value('rvs@primary@model') + np.random.normal(size=rvtimes.shape) * 10
rvsB = b.get_value('rvs@secondary@model') + np.random.normal(size=rvtimes.shape) * 10
rvsigmas = np.ones_like(rvtimes) * 20
b = phoebe.default_binary()
b.add_dataset('lc',
compute_phases=phoebe.linspace(0,1,201),
times=lctimes,
fluxes=fluxes,
sigmas=fsigmas,
dataset='lc01')
b.add_dataset('rv',
compute_phases=phoebe.linspace(0,1,201),
times=rvtimes,
rvs={'primary': rvsA, 'secondary': rvsB},
sigmas=rvsigmas,
dataset='rv01')
b.add_compute('ellc', compute='fastcompute')
b.set_value_all('ld_mode', 'lookup')
b.set_value('sma@binary', 7+0.5)
b.set_value('incl@binary', 80+10)
b.set_value('q', 0.3+0.1)
b.set_value('t0_supconj', 0.1)
b.set_value('requiv@primary', 2.0-0.3)
b.run_compute(compute='fastcompute', model='orig_model')
_ = b.plot(x='phases', show=True)
b.add_solver('optimizer.nelder_mead', solver='nm_solver')
print(b.filter(solver='nm_solver'))
b.set_value('compute', solver='nm_solver', value='fastcompute')
b.set_value('maxiter', 1000)
b.set_value('maxfev', 1000)
print(b.get_parameter('fit_parameters'))
print(b.get_parameter('initial_values'))
b.set_value('fit_parameters', ['teff', 'requiv'])
b.get_value('fit_parameters', expand=True)
b.set_value('fit_parameters', ['q', 'vgamma', 't0_supconj'])
b.get_value('fit_parameters', expand=True)
print(b.filter(qualifier='enabled', compute='fastcompute'))
b.disable_dataset('lc01', compute='fastcompute')
print(b.filter(qualifier='enabled', compute='fastcompute'))
b.run_solver('nm_solver', solution='nm_sol')
print(b.filter(solution='nm_sol'))
print(b.adopt_solution(trial_run=True))
print(b.adopt_solution(trial_run=True, adopt_parameters=['q']))
b.run_compute(compute='fastcompute', sample_from='nm_sol', model='nm_model')
_ = b.plot(kind='rv', x='phases',
linestyle={'model': 'solid'},
color={'nm_model': 'red', 'orig_model': 'green'},
show=True)
b.run_compute(compute='phoebe01', sample_from='nm_sol', model='nm_model_phoebe')
_ = b.plot(model='nm_model*', kind='rv', x='phases',
linestyle={'model': 'solid'},
color={'nm_model': 'red', 'nm_model_phoebe': 'blue'},
show=True)
b.adopt_solution('nm_sol')
print(b.filter(qualifier=['q', 'vgamma', 't0_supconj'], context=['component', 'system']))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create fake "observations"
Step2: We'll set the initial model to be close to the correct values, but not exact. In practice, we would instead have to use a combination of manual tweaking, LC estimators, and RV estimators to get into the rough parameter space of the solution before starting to use optimizers.
Step3: Nelder-Mead Options
Step4: This adds new parameters to our bundle with the options for running nelder-mead.
Step5: Here we get to choose which set of compute-options will be used while optimizing (we'll choose the ellc compute options we called 'fastcompute' just for the sake of efficiency)
Step6: We'll also set maxiter and maxfev to smaller values (these are just passed directly to scipy.optimize.minimize).
Step7: The fit_parameters parameter takes a list of twigs (see the general concepts tutorial for a refresher). These parameters will be those that are optimized. By default, each parameter will start at its current face-value. To change the starting positions, you can either change the face-values in the bundle, or pass alternate starting positions to initial_values (as a dictionary of twig-value pairs).
Step8: fit_parameters does accept partial-twigs as well as wildcard matching. To see the full list of matched parameters for the current set value, you can pass expand=True to get_value.
Step9: For this example, let's try to fit the RV first, so we'll optimize q, vgamma, and t0_supconj.
Step10: Note that the optimizer options also contains the ability to set "priors". If set (to the label of a distribution set, these will be included in the cost function and can be used to limit the range that a parameter will be allowed to explore within the optimizer.
Step11: Interpreting the Returned Solution
Step12: And then adopt the final parameter values, via b.adopt_solution. By passing trial_run=True, we can see what changes will be made without actually changing the face-values in the bundle.
Step13: Note that by default, all fitted parameters will be adopted. But we can change this by setting adopt_parameters (in the solution) or by passing adopt_parameters directly to adopt_solution.
Step14: To see the effect of these new parameter-values on the model without adopting their values, we can also pass the solution directly to run_compute with sample_from.
Step15: Just by looking, we can see that this isn't quite perfect yet and could use some additional optimization, but is definitely a step in the right direction!
Step16: Here, for example, we see that our 'fastcompute' is ignoring the Rossiter-McLaughlin effect. In practice, since we have data in this region, this would be a cause for concern. For this example, our fake data was created using the same 'fastcompute' options... so we won't worry about it.
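A possible follow-up iteration (editor's sketch, not part of the original tutorial): re-enable the light curve and optimize a few geometric parameters. It assumes enable_dataset mirrors the disable_dataset call used in the code, and the parameter choices are illustrative only.
# Editor's sketch of a second optimizer pass (parameter choices are illustrative only).
b.enable_dataset('lc01', compute='fastcompute')
b.set_value('fit_parameters', ['incl@binary', 'requiv@primary', 'teff@primary'])
b.run_solver('nm_solver', solution='nm_sol_2')
b.run_compute(compute='fastcompute', sample_from='nm_sol_2', model='nm_model_2')
_ = b.plot(x='phases', model='nm_model_2', show=True)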
|
154 | <ASSISTANT_TASK:>
Python Code:
%%bash
pip freeze | grep tensor
!pip3 install tensorflow-hub==0.4.0
!pip3 install --upgrade tensorflow==1.13.1
import os
import tensorflow as tf
import numpy as np
import tensorflow_hub as hub
import shutil
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.13.1'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
categories_list = open("categories.txt").read().splitlines()
authors_list = open("authors.txt").read().splitlines()
content_ids_list = open("content_ids.txt").read().splitlines()
mean_months_since_epoch = 523
embedded_title_column = #TODO: use a Tensorflow Hub module to create a text embeddding column for the article "title".
# Use the module available at https://alpha.tfhub.dev/ filtering by German language.
embedded_content_column = #TODO: create an embedded categorical feature column for the article id; i.e. "content_id".
embedded_author_column = #TODO: create an embedded categorical feature column for the article "author"
category_column = #TODO: create a categorical feature column for the article "category"
months_since_epoch_boundaries = list(range(400,700,20))
months_since_epoch_bucketized = #TODO: create a bucketized numeric feature column of values for the "months since epoch"
crossed_months_since_category_column = #TODO: create a crossed feature column using the "category" and "months since epoch" values
feature_columns = [embedded_content_column,
embedded_author_column,
category_column,
embedded_title_column,
crossed_months_since_category_column]
record_defaults = [["Unknown"], ["Unknown"],["Unknown"],["Unknown"],["Unknown"],[mean_months_since_epoch],["Unknown"]]
column_keys = ["visitor_id", "content_id", "category", "title", "author", "months_since_epoch", "next_content_id"]
label_key = "next_content_id"
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column,record_defaults=record_defaults)
features = dict(zip(column_keys, columns))
label = features.pop(label_key)
return features, label
# Create list of files that match pattern
file_list = tf.gfile.Glob(filename)
# Create dataset from file list
dataset = tf.data.TextLineDataset(file_list).map(decode_csv)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
def model_fn(features, labels, mode, params):
net = tf.feature_column.input_layer(features, params['feature_columns'])
for units in params['hidden_units']:
net = tf.layers.dense(net, units=units, activation=tf.nn.relu)
# Compute logits (1 per class).
logits = tf.layers.dense(net, params['n_classes'], activation=None)
predicted_classes = tf.argmax(logits, 1)
from tensorflow.python.lib.io import file_io
with file_io.FileIO('content_ids.txt', mode='r') as ifp:
content = tf.constant([x.rstrip() for x in ifp])
predicted_class_names = tf.gather(content, predicted_classes)
if mode == tf.estimator.ModeKeys.PREDICT:
predictions = {
'class_ids': predicted_classes[:, tf.newaxis],
'class_names' : predicted_class_names[:, tf.newaxis],
'probabilities': tf.nn.softmax(logits),
'logits': logits,
}
return tf.estimator.EstimatorSpec(mode, predictions=predictions)
table = tf.contrib.lookup.index_table_from_file(vocabulary_file="content_ids.txt")
labels = table.lookup(labels)
# Compute loss.
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
# Compute evaluation metrics.
accuracy = tf.metrics.accuracy(labels=labels,
predictions=predicted_classes,
name='acc_op')
top_10_accuracy = #TODO: Compute the top_10 accuracy, using the tf.nn.in_top_k and tf.metrics.mean functions in Tensorflow
metrics = {
'accuracy': accuracy,
#TODO: Add top_10_accuracy to the metrics dictionary
}
tf.summary.scalar('accuracy', accuracy[1])
#TODO: Add the top_10_accuracy metric to the Tensorboard summary
if mode == tf.estimator.ModeKeys.EVAL:
return tf.estimator.EstimatorSpec(
mode, loss=loss, eval_metric_ops=metrics)
# Create training op.
assert mode == tf.estimator.ModeKeys.TRAIN
optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
outdir = 'content_based_model_trained'
shutil.rmtree(outdir, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
estimator = tf.estimator.Estimator(
model_fn=model_fn,
model_dir = outdir,
params={
'feature_columns': feature_columns,
'hidden_units': [200, 100, 50],
'n_classes': len(content_ids_list)
})
train_spec = tf.estimator.TrainSpec(
input_fn = read_dataset("training_set.csv", tf.estimator.ModeKeys.TRAIN),
max_steps = 200)
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset("test_set.csv", tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 30,
throttle_secs = 60)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
%%bash
head -5 training_set.csv > first_5.csv
head first_5.csv
awk -F "\"*,\"*" '{print $2}' first_5.csv > first_5_content_ids
output = #TODO: Use the predict method on our trained model to find the predictions for the examples contained in "first_5.csv".
import numpy as np
recommended_content_ids = [np.asscalar(d["class_names"]).decode('UTF-8') for d in output]
content_ids = open("first_5_content_ids").read().splitlines()
from google.cloud import bigquery
recommended_title_sql = """
#standardSQL
SELECT
(SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) = \"{}\"
LIMIT 1""".format(recommended_content_ids[0])
current_title_sql = """
#standardSQL
SELECT
(SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) = \"{}\"
LIMIT 1""".format(content_ids[0])
recommended_title = bigquery.Client().query(recommended_title_sql).to_dataframe()['title'].tolist()[0].encode('utf-8').strip()
current_title = bigquery.Client().query(current_title_sql).to_dataframe()['title'].tolist()[0].encode('utf-8').strip()
print("Current title: {} ".format(current_title))
print("Recommended title: {}".format(recommended_title))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If 'tensorflow-hub' isn't one of the outputs above, then you'll need to install it. Uncomment the cell below and execute the commands. After doing the pip install, click "Reset Session" on the notebook so that the Python environment picks up the new packages.
Step2: Build the feature columns for the model.
Step3: In the cell below we'll define the feature columns to use in our model. If necessary, remind yourself of the various feature columns to use.
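As a hint, one possible way to fill in the feature-column TODOs is sketched here (editor's sketch; the TF-Hub module URL and the embedding dimension are assumptions, not the official solution):
# Editor's sketch for the title and content_id feature columns (module URL and dimension are assumptions).
embedded_title_column = hub.text_embedding_column(
    key="title",
    module_spec="https://tfhub.dev/google/nnlm-de-dim50/1",  # a German-language text embedding module
    trainable=False)
content_id_col = tf.feature_column.categorical_column_with_hash_bucket(
    key="content_id", hash_bucket_size=len(content_ids_list) + 1)
embedded_content_column = tf.feature_column.embedding_column(
    categorical_column=content_id_col, dimension=10)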
Step4: Create the input function.
Step5: Create the model and train/evaluate
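For the top-10 accuracy TODO inside model_fn, one option (editor's sketch) uses tf.nn.in_top_k together with tf.metrics.mean, mirroring how accuracy is handled; logits, labels and accuracy are the names already used in model_fn:
# Editor's sketch: fraction of examples whose true class is within the 10 highest logits.
top_10_accuracy = tf.metrics.mean(
    tf.cast(tf.nn.in_top_k(predictions=logits, targets=labels, k=10), tf.float32))
metrics = {'accuracy': accuracy, 'top_10_accuracy': top_10_accuracy}
tf.summary.scalar('top_10_accuracy', top_10_accuracy[1])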
Step6: Train and Evaluate
Step7: Make predictions with the trained model.
Step8: Recall, to make predictions on the trained model we pass a list of examples through the input function. Complete the code below to make predictions on the examples contained in the "first_5.csv" file we created above.
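One way the prediction TODO might be completed (editor's sketch), reusing the read_dataset input function defined earlier:
# Editor's sketch: run the trained estimator over the five examples in first_5.csv.
output = list(estimator.predict(
    input_fn=read_dataset("first_5.csv", tf.estimator.ModeKeys.PREDICT)))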
Step11: Finally, we'll map the content id back to the article title. We can then compare our model's recommendation for the first of our examples. This can all be done in BigQuery. Look through the query below and make sure it is clear what is being returned.
|
155 | <ASSISTANT_TASK:>
Python Code:
# Import dependencies
from __future__ import division, print_function
%matplotlib inline
import scipy
import sympy
from sympy import Symbol, symbols, Matrix, sin, cos, latex
from sympy.interactive import printing
printing.init_printing()
sympy.init_printing(use_latex="mathjax", fontsize='16pt')
from BicycleTrajectory2D import *
from BicycleUtils import *
from FormatUtils import *
from PlotUtils import *
[N, dt, wheel_distance] = [300, 0.05, 1.1] # simulation parameters
add_noise = True # Enable/disable gaussian noise
# Define initial state --------------------------------------------------------
delta = math.radians(6) # steering angle
phi = math.radians(0) # Lean angle
X_init = np.array([1.0, 3.0, 0.0, np.tan(delta)/wheel_distance, 0.0, phi]) # [x, y, z, sigma, psi, phi]
# Define constant inputs ------------------------------------------------------
U_init = np.array([1.0, 0.01, 0.01]) # [v, phi_dot, delta_dot]
# Define standard deviation for gaussian noise model --------------------------
# [xf, xr, yf, yr, zf, zr, za, delta, psi, phi]
if add_noise:
noise = [0.5, 0.5, 0.5, 0.5, 0.1, 0.1, 0.1, 0.01, 0.01, 0.01]
else:
noise = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
# Create object simulator ------------------------------------------------------
bike = BicycleTrajectory2D(X_init=X_init, U_init=U_init, noise=noise)
# Simulate path ----------------------------------------------------------------
(gt_sim, zs_sim, time) = bike.simulate_path(N=N, dt=dt)
# Plot simulation results
#plot_results(xs=[], zs_sim=zs_sim, gt_sim=gt_sim, time=time, plot_xs=False)
x, y, v, psi, phi, delta, time, w, sigma, z = symbols('x y v \psi \phi \delta T w sigma z')
delta_dot, v_dot, phi_dot = symbols('delta_dot v_dot phi_dot')
fxu = Matrix([[x + time*v*sympy.cos(psi)],
[y + time*v*sympy.sin(psi)],
[z],
[sigma + (time*delta_dot/w)*(1+(w**2)*(sigma**2))],
[psi + time*v*sigma/sympy.cos(phi)],
[phi + time*phi_dot]
])
state = Matrix([x, y, z, sigma, psi, phi]) # Define state
F = fxu.jacobian(state) # Compute Jacobian of F with respect to the states
inputs = Matrix([v, phi_dot, delta_dot]) # Define inputs
V = fxu.jacobian(inputs) # Compute Jacobian of F with respect to the inputs
fxu
#print (latex(fxu))
F
#print (latex(F))
V
#print (latex(V))
class EKF_sigma_model_fusion(object):
"""Implements an EKF for the bicycle model."""
def __init__(self, xs, P, R_std, Q_std, wheel_distance=1.2, dt=0.1, alpha=1.0):
self.w = wheel_distance #Set the distance between the wheels
self.xs = xs *0.0 #Set the initial state
self.P = P #Set the initial Covariance
self.dt = dt
self.R_std = R_std
self.Q_std = Q_std
self.alpha = alpha
#self.K = np.zeros((6, 6)) # Kalman gain
self.K = np.eye(6)*0.5 # Kalman gain
#Set the process noise covariance
self.Q = np.diag([self.Q_std[0], # v
self.Q_std[1], # phi_dot
self.Q_std[2] # delta_dot
])
# Set the measurement noise covariance
self.R = np.diag([self.R_std[0], # xf
self.R_std[1], # xr
self.R_std[2], # yf
self.R_std[3], # yr
self.R_std[4], # zf
self.R_std[5], # zr
self.R_std[6], # za
self.R_std[7], # sigma
self.R_std[8], # psi
self.R_std[9]]) # phi
# Linear relationship H - z = Hx
self.H = np.zeros((10, 6)) # 10 measurements x 6 state variables
[self.H[0, 0], self.H[1, 0]] = [1.0, 1.0] # x
[self.H[2, 1], self.H[3, 1]] = [1.0, 1.0] # y
[self.H[4, 2], self.H[5, 2], self.H[6, 2]] = [1.0, 1.0, 1.0] # z
[self.H[7, 3], self.H[8, 4], self.H[9, 5]] = [1.0, 1.0, 1.0] # sigma - psi - phi
def Fx(self, xs, u):
"""Linearize the system: Jacobian of f with respect to the state x."""
F_result = np.eye(len(xs))
v = u[0]
phi_dot = u[1]
delta_dot = u[2]
sigma = xs[3]
psi = xs[4]
phi = xs[5]
t = self.dt
F04 = -t * v * np.sin(psi)
F14 = t * v * np.cos(psi)
F33 = (2 * t * delta_dot * sigma * self.w) + 1
F43 = (t * v)/np.cos(phi)
F45 = t * sigma * v * np.sin(phi) / np.cos(phi)**2
F_result[0, 4] = F04
F_result[1, 4] = F14
F_result[3, 3] = F33
F_result[4, 3] = F43
F_result[4, 5] = F45
return F_result
def Fu(self, xs, u):
"""Linearize the system: Jacobian of f with respect to the input u."""
v = u[0]
phi_dot = u[1]
delta_dot = u[2]
sigma = xs[3]
psi = xs[4]
phi = xs[5]
t = self.dt
V_result = np.zeros((len(xs), len(u)))
V00 = t * np.cos(psi)
V10 = t * np.sin(psi)
V32 = (t/self.w)*((sigma**2)*(self.w**2) + 1)
V40 = t * sigma / np.cos(phi)
V51 = t
V_result[0, 0] = V00
V_result[1, 0] = V10
V_result[3, 2] = V32
V_result[4, 0] = V40
V_result[5, 1] = V51
return V_result
def f(self, xs, u):
"""Estimate the non-linear state of the system."""
v = u[0]
phi_dot = u[1]
delta_dot = u[2]
sigma = xs[3]
psi = xs[4]
phi = xs[5]
t = self.dt
fxu_result = np.zeros((len(xs), 1))
fxu_result[0] = xs[0] + t * v * np.cos(psi)
fxu_result[1] = xs[1] + t * v * np.sin(psi)
fxu_result[2] = xs[2]
fxu_result[3] = xs[3] + (t*delta_dot/self.w)*((sigma**2)*(self.w**2) + 1)
fxu_result[4] = xs[4] + t * v * sigma / np.cos(phi)
fxu_result[5] = xs[5] + t * phi_dot
return fxu_result
def h(self, x):
"""Takes a state variable and returns the measurement that would correspond to that state."""
sensor_out = np.zeros((10, 1))
sensor_out[0] = x[0]
sensor_out[1] = x[0]
sensor_out[2] = x[1]
sensor_out[3] = x[1]
sensor_out[4] = x[2]
sensor_out[5] = x[2]
sensor_out[6] = x[2]
sensor_out[7] = x[3] # sigma
sensor_out[8] = x[4] # psi
sensor_out[9] = x[5] # phi
return sensor_out
def Prediction(self, u):
x_ = self.xs
P_ = self.P
self.xs = self.f(x_, u)
self.P = self.alpha * self.Fx(x_, u).dot(P_).dot((self.Fx(x_,u)).T) + \
self.Fu(x_,u).dot(self.Q).dot((self.Fu(x_,u)).T)
def Update(self, z):
"""Update the Kalman prediction using the measurement z."""
y = z - self.h(self.xs)
self.K = self.P.dot(self.H.T).dot(np.linalg.inv(self.H.dot(self.P).dot(self.H.T) + self.R))
self.xs = self.xs + self.K.dot(y)
self.P = (np.eye(len(self.xs)) - self.K.dot(self.H)).dot(self.P)
np.random.seed(850)
file_name = "filters/EKF/math_model/"
[N, dt, wheel_distance, number_state_variables] = [300, 0.05, 1.1, 6]
delta = math.radians(6)
phi = math.radians(0)
#%prun some_useless_slow_function()
U_init = np.array([1.0, 0.01, 0.01]) # [v, phi_dot, delta_dot]
X_init = np.array([1.0, 3.0, 0.0, np.tan(delta)/wheel_distance, 0.0, phi]) # [x, y, z, sigma, psi, phi]
# noise = [xf, xr, yf, yr, zf, zr, za, delta, psi, phi]
if add_noise:
noise = [0.5, 0.5, 0.5, 0.5, 0.1, 0.1, 0.1, 0.01, 0.01, 0.01]
file_name += "noise/"
else:
noise = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
file_name += "no_noise/"
bike = BicycleTrajectory2D(X_init=X_init, U_init=U_init, w=wheel_distance, noise=noise)
(gt_sim, zs_sim, time_t) = bike.simulate_path(N=N, dt=dt)
alpha = 1.0
# covariance matrix
P = np.eye(number_state_variables) * 1e0
# Input noise covariance M
M_std = [(0.001)**2, (0.001)**2, (0.001)**2 ] # v, phi_dot, delta_dot
# Measurement noise covariance R
# [xf, xr, yf, yr, zf, zr, za, delta, psi, phi]
R_std = [0.8**2, 0.8**2, # x
0.8**2, 0.8**2, # y
0.5**2, 0.5**2, 0.5**2, # z
0.5**2, 0.4**2, 1.8**2] # delta - psi - phi
filter_ekf = EKF_sigma_model_fusion(X_init, P, R_std=R_std, Q_std=M_std, wheel_distance=wheel_distance, dt=dt, alpha=alpha)
xs = np.zeros((N, number_state_variables))
ps = np.zeros((N, number_state_variables, number_state_variables))
PU = np.zeros((N, number_state_variables))
KU = np.zeros((N, number_state_variables))
time_t = np.zeros((N, 1))
t = 0
z_t = np.zeros((10, 1))
for i in range(N):
P = filter_ekf.P
K = filter_ekf.K
PU[i] = [P[0,0], P[1,1], P[2,2], P[3,3], P[4,4], P[5,5]]
KU[i] = [K[0,0], K[1,1], K[2,2], K[3,3], K[4,4], K[5,5]]
xs[i] = filter_ekf.xs.T
xs[i, 3] = np.arctan2(xs[i, 3], 1/wheel_distance) # sigma to delta conversion
# predict
filter_ekf.Prediction(U_init)
# update measurements [xf, xr, yf, yr, zf, zr, za, delta, psi, phi]
z_t[0] = zs_sim[i].xf
z_t[1] = zs_sim[i].xr
z_t[2] = zs_sim[i].yf
z_t[3] = zs_sim[i].yr
z_t[4] = zs_sim[i].zf
z_t[5] = zs_sim[i].zr
z_t[6] = zs_sim[i].za
z_t[7] = np.tan(zs_sim[i].delta)/wheel_distance # sigma
z_t[8] = zs_sim[i].psi # psi
z_t[9] = zs_sim[i].phi # phi
filter_ekf.Update(z_t)
cov = np.array([[P[0, 0], P[2, 0]],
[P[0, 2], P[2, 2]]])
mean = (xs[i, 0], xs[i, 1])
#plot_covariance_ellipse(mean, cov, fc='g', std=3, alpha=0.3, title="covariance")
time_t[i] = t
t += dt
filter_ekf.time_t = t
filter_name = 'EKF'
(gt, zs) = convert_object_to_array(gt_sim, zs_sim)
plot_filter_results(xs, gt, zs, time_t, file_name, filter_name)
plot_EKF_gain_covariance(time_t, KU, PU, file_name, autoscale_axis=True)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simulation of kinematic motion model
Step2: Implementation of EKF for $\sigma$-model
Step3: Define state transition function $F$
Step4: Compute Jacobian of $F$ with respect to state $x$
Step5: Compute Jacobian of $F$ with respect to inputs $u$
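Before trusting hand-coded Jacobians such as Fx and Fu in the EKF class, it can help to compare them against finite differences. A self-contained editor's sketch with a toy 2-state model (not the bicycle model itself):
import numpy as np

def f_toy(x):                        # toy nonlinear transition, x = [position, heading]
    return np.array([x[0] + 0.1 * np.cos(x[1]), x[1] + 0.05])

def jac_analytic(x):                 # hand-derived Jacobian of f_toy
    return np.array([[1.0, -0.1 * np.sin(x[1])],
                     [0.0, 1.0]])

def jac_numeric(f, x, eps=1e-6):     # central-difference approximation
    J = np.zeros((len(x), len(x)))
    for j in range(len(x)):
        d = np.zeros(len(x)); d[j] = eps
        J[:, j] = (f(x + d) - f(x - d)) / (2 * eps)
    return J

x0 = np.array([1.0, 0.3])
print(np.max(np.abs(jac_analytic(x0) - jac_numeric(f_toy, x0))))  # should be ~1e-10 or smaller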
Step12: Implement EKF filter
Step13: Execute EKF
Step14: Plot Kalman gain and process covariance
|
156 | <ASSISTANT_TASK:>
Python Code:
# Import the numpy module under the alias np
import numpy as np
# Create an array
a = np.array([1,0,0])
a
type(a)
# Example: create a Python list from 0 to 1000 and compute the square of each element
L = range(1000)
%%timeit
[i**2 for i in L]
# Now do the same with Numpy
a = np.arange(1000)
%%timeit
a**2
print 'Numpy is'
print 111/5.4
print 'times faster'
# Creating arrays
a = np.array([1,1,1]) # 1D
b = np.array([[1,1,1],[1,1,1]]) #2d (Matrix)
c = np.array([[[1,1,1],[1,1,1],[1,1,1]]]) #3D (Tensor...)
print a.shape
print b.shape
print c.shape
# We can create predefined arrays with very useful functions
a = np.arange(10) # an array of integers from 0 to 10 (exclusive)
a
# Array from 1 to 9 with step 2
b = np.arange(1, 9, 2)
b
# As in Matlab
a = np.ones((3,3))
a
b = np.zeros((5,5))
b
c = np.eye(10)
c
d = np.diag(np.array([1, 2, 3, 4]))
d
#Complex numbers
e = np.array([1+2j, 3+4j, 5+6*1j])
e
#boolean
e = np.array([True, False, False, True])
e
#String
f = np.array(['Bonjour', 'Hello', 'Hallo',])
f
a = np.arange(10)
a
# create a tuple
a[0],a[2],a[-1]
# Slicing [start:stop:step]
# None of the three is strictly required, since by default start=0, stop=[-1] and step=1
a[2:8:2]
a[::4] # we only change the step
np.random.seed(3)
a = np.random.random_integers(0, 20, 15)
a
(a % 3 == 0)
mask = (a % 3 == 0)
a_multiplos_3 = a[mask]
a_multiplos_3
# We can index and assign at the same time
a[a % 3 == 0] = -1
a
a = np.array([1, 2, 3, 4])
a
# all arithmetic operations work
a+1
j = np.arange(5)
2**(j + 1) - j
# Multiplication is ELEMENT-WISE
a * a
# For matrix multiplication we use dot
b = np.random.rand(3,3)
c = np.random.rand(3,3)
np.dot(b,c)
# every ndarray object has many methods, e.g.
c.sum()
%%file hellofortran.f
C File hellofortran.f
subroutine hellofortran (n)
integer n
do 100 i=0, n
print *, "Hola Soy Fortran tengo muuchos años"
100 continue
end
!f2py -c -m hellofortran hellofortran.f
%%file hello.py
import hellofortran
hellofortran.hellofortran(5)
# run the script
!python hello.py
# This is not the best implementation
# because the loop is implemented in Python
def py_dcumsum(a):
b = np.empty_like(a)
b[0] = a[0]
for n in range(1,len(a)):
b[n] = b[n-1]+a[n]
return b
%%file dcumsum.f
c File dcumsum.f
subroutine dcumsum(a, b, n)
double precision a(n)
double precision b(n)
integer n
cf2py intent(in) :: a
cf2py intent(out) :: b
cf2py intent(hide) :: n
b(1) = a(1)
do 100 i=2, n
b(i) = b(i-1) + a(i)
100 continue
end
!f2py -c dcumsum.f -m dcumsum
# import the newly created module
import dcumsum
a = np.array([1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0])
py_dcumsum(a)
dcumsum.dcumsum(a)
a = np.random.rand(10000)
%%timeit
py_dcumsum(a)
%%timeit
dcumsum.dcumsum(a)
%%timeit
a.cumsum()
%run srinivasan_pruebas.py
%run srinivasan_pruebas_vec.py
# so that plots are embedded inline
%pylab inline
X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
C, S = np.cos(X), np.sin(X)
plot(X, C)
plot(X, S)
t = 2 * np.pi / 3
plot(X, C, color="blue", linewidth=2.5, linestyle="-", label="cosine")
plot(X, S, color="red", linewidth=2.5, linestyle="-", label="sine")
plot([t, t], [0, np.cos(t)], color='blue', linewidth=2.5, linestyle="--")
scatter([t, ], [np.cos(t), ], 50, color='blue')
annotate(r'$sin(\frac{2\pi}{3})=\frac{\sqrt{3}}{2}$',
xy=(t, np.sin(t)), xycoords='data',
xytext=(+10, +30), textcoords='offset points', fontsize=16,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
plot([t, t],[0, np.sin(t)], color='red', linewidth=2.5, linestyle="--")
scatter([t, ],[np.sin(t), ], 50, color='red')
annotate(r'$cos(\frac{2\pi}{3})=-\frac{1}{2}$',
xy=(t, np.cos(t)), xycoords='data',
xytext=(-90, -50), textcoords='offset points', fontsize=16,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
###variable declarations
nx = 101
ny = 101
nt = 80
c = 1
dx = 2.0/(nx-1)
dy = 2.0/(ny-1)
sigma = .2
dt = sigma*dx
x = np.linspace(0,2,nx)
y = np.linspace(0,2,ny)
u = np.ones((ny,nx)) ##create a 1xn vector of 1's
v = np.ones((ny,nx))
un = np.ones((ny,nx))
vn = np.ones((ny,nx))
###Assign initial conditions
u[.5/dy:1/dy+1,.5/dx:1/dx+1]=2 ##set hat function I.C. : u(.5<=x<=1 && .5<=y<=1 ) is 2
v[.5/dy:1/dy+1,.5/dx:1/dx+1]=2 ##set hat function I.C. : u(.5<=x<=1 && .5<=y<=1 ) is 2
for n in range(nt+1): ##loop across number of time steps
un[:] = u[:]
vn[:] = v[:]
u[1:,1:]=un[1:,1:]-(un[1:,1:]*dt/dx*(un[1:,1:]-un[0:-1,1:]))-vn[1:,1:]*dt/dy*(un[1:,1:]-un[1:,0:-1])
v[1:,1:]=vn[1:,1:]-(un[1:,1:]*dt/dx*(vn[1:,1:]-vn[0:-1,1:]))-vn[1:,1:]*dt/dy*(vn[1:,1:]-vn[1:,0:-1])
u[0,:] = 1
u[-1,:] = 1
u[:,0] = 1
u[:,-1] = 1
v[0,:] = 1
v[-1,:] = 1
v[:,0] = 1
v[:,-1] = 1
from matplotlib import cm ##cm = "colormap" for changing the 3d plot color palette
fig = plt.figure(figsize=(11,7), dpi=100)
ax = fig.gca(projection='3d')
X,Y = np.meshgrid(x,y)
ax.plot_surface(X,Y,u, cmap=cm.coolwarm)
%run MagicCube/code/cube_interactive.py 5
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Python has a built-in data type that is similar (lists), but it is numerically inefficient
Step2: Main features and utilities
Step3: Indexing and slicing
Step4: Fancy indexing
Step5: Elementwise operations
Step6: Optimizations with Fortran and C
Step7: We generate a Python module with f2py
Step8: We import the module we just generated and use it
Step9: Example 2
Step10: Now we write the implementation in Fortran
Step11: We compile it directly into a Python module
Step12: Now we put them to the test
Step13: Wow!!!
Step14: <img src="files/logo2.png" style="float
Step15: Examples
Step16: Magic cube made only with Matplotlib
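Steps 12 and 13 above time the pure-Python and compiled cumulative sums with %%timeit. The same comparison can be made outside IPython with the standard-library timeit module (editor's sketch, not part of the original notebook):
import timeit
# Editor's sketch: benchmark a pure-Python cumulative sum against NumPy's cumsum.
setup = "import numpy as np; a = np.random.rand(10000)"
py_stmt = (
    "b = [0.0] * len(a)\n"
    "b[0] = a[0]\n"
    "for n in range(1, len(a)):\n"
    "    b[n] = b[n-1] + a[n]\n"
)
np_stmt = "a.cumsum()"
print("pure Python:", timeit.timeit(py_stmt, setup=setup, number=100))
print("numpy      :", timeit.timeit(np_stmt, setup=setup, number=100))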
|
157 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inpe', 'sandbox-1', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
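# For illustration only (editor's sketch with placeholder values; real author details
# must come from the modelling group), the setup calls above could be filled in like this:
# DOC.set_author("Jane Doe", "jane.doe@example.org")
# DOC.set_contributor("John Roe", "john.roe@example.org")
# DOC.set_publication_status(1)  # switch to 1 only when the document is ready to publish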
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
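# --------------------------------------------------------------------------
# Illustrative example (hedged addition, not part of the generated notebook):
# every property cell above follows the same two-step pattern -- select the
# property with DOC.set_id(...) and record an answer with DOC.set_value(...).
# For an ENUM property the value must be one of the listed valid choices, e.g.:
#
#     DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
#     DOC.set_value("Finite difference")   # hypothetical choice, for illustration only
# --------------------------------------------------------------------------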
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
158 | <ASSISTANT_TASK:>
Python Code:
from scipy import stats
import numpy as np
# making kde
values = np.arange(10)
kde = stats.gaussian_kde(values)
np.median(kde.resample(100000))
def KDE_make_means(kde, size=10):
func = lambda x : np.random.randint(0, x.n, size=x.d)
    kde.means = [kde.dataset[:, func(kde)] for i in range(size)]
kde.means_n = len(kde.means)
#kde.dataset = None
#KDE_make_means(kde)
def KDE_save_means(kde, size):
indices = np.random.randint(0, kde.n, size=size)
kde.means = kde.dataset[:, indices]
kde.means_size = size
KDE_save_means(kde, size=10)
kde.dataset = None
def KDE_resample(kde):
norm = np.transpose(np.random.multivariate_normal(np.zeros((kde.d,), float), kde.covariance,
size=kde.means_size))
print(kde.means)
return kde.means + norm
KDE_resample(kde)
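# Hedged usage sketch (illustrative addition): rebuild a KDE, keep only a
# subsample of "means", drop the stored dataset, and check that the compact
# resampler defined above still yields a sensible median.
kde2 = stats.gaussian_kde(values)
KDE_save_means(kde2, size=100)
kde2.dataset = None
print(np.median(KDE_resample(kde2)))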
%load_ext rpy2.ipython
%%R
library(dplyr)
library(tidyr)
library(ggplot2)
library(vegan)
%%R
data(varespec)
varespec %>% head
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notes
|
159 | <ASSISTANT_TASK:>
Python Code:
#importando bibliotecas que iremos usar
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import warnings
import os
from numpy import arange
from scipy.stats import skew
from sklearn.utils import shuffle
from scipy.stats.stats import pearsonr
from sklearn import cross_validation, metrics
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import ElasticNet as ElasticNetImpl
from sklearn.preprocessing import LabelEncoder
from subprocess import check_output
from sklearn.utils import shuffle
from scipy.stats import norm
from sklearn.preprocessing import StandardScaler
from scipy import stats
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.grid_search import GridSearchCV
warnings.filterwarnings('ignore')
fifa = pd.read_csv('CompleteDataset.csv')
def extrai(value):
out = value.replace('€', '')
if 'M' in out:
out = float(out.replace('M', ''))*1000000
elif 'K' in value:
out = float(out.replace('K', ''))*1000
return float(out)
fifa['Value'] = fifa['Value'].apply(lambda x: extrai(x))
fifa['Wage'] = fifa['Wage'].apply(lambda x: extrai(x))
fifa = shuffle(fifa)
train = fifa.iloc[:15000]
test = fifa.iloc[15000:]
x = [ 'Potential', 'Overall', 'Wage', 'Age', 'Special'] # features used to estimate a player's value
y = ['Value'] # target
var = 'Value'
data = pd.concat([train['Overall'], train[var]], axis=1)
data.plot.scatter(x=var, y='Overall', ylim=(60,100), xlim=(0,150000000));
var = 'Wage'
data = pd.concat([train['Overall'], train[var]], axis=1)
data.plot.scatter(x=var, y='Overall', ylim=(40,100), xlim=(0,600000));
corr = fifa.drop('ID', axis = 1).corr()
fig = plt.figure(figsize=(20,16))
ax = sns.heatmap(corr,
xticklabels=corr.columns.values,
yticklabels=corr.columns.values,
linewidths=0.25,
vmax=1.0,
square=True,
cmap = 'PuBu',
linecolor='black',
annot=False
)
train.drop(["Photo", "Flag","Club Logo","Name"],1,inplace=True)
train.drop("ID",1,inplace=True)
l_encode = LabelEncoder()
obj_feat = ["Club", "Nationality","Preferred Positions"]
for var in obj_feat:
train[var] = l_encode.fit_transform(train[var].astype(str))
train.shape
def clean_values(x):
try:
if len(x)>2:
y = x[:2]
return y
else:
return x
except TypeError:
return x
columns_to_clean = [col for col in train.columns if col not in ["Age","Nationality",
"Overall","Potential",
"Club","Value","Wage",
"Special"]]
for col in columns_to_clean:
train[col] = train[col].apply(lambda x : clean_values(x))
train = train.dropna(axis=1, how="any")
def modelfit(alg, dtrain, features, performCV=True, printFeatureImportance=True, cv_folds=10):
alg.fit(dtrain[features],dtrain["Value"] )
dtrain_predictions = alg.predict(dtrain[features])
cv_score = cross_validation.cross_val_score(alg, dtrain[features], dtrain["Value"], cv=cv_folds,
scoring='neg_mean_squared_error')
cv_score = np.sqrt(np.abs(cv_score))
print ("\nModel Report")
print ("RMSE : %.4g" % np.sqrt(metrics.mean_squared_error(dtrain["Value"], dtrain_predictions)))
print ("CV Score : Mean - %.4g | Std - %.4g | Min - %.4g | Max - %.4g" % (np.mean(cv_score),
np.std(cv_score),np.min(cv_score),
np.max(cv_score)))
if printFeatureImportance:
feat_imp = pd.Series(alg.feature_importances_, features).sort_values(ascending=False)
feat_imp.plot(kind='bar', title='Feature Importances')
plt.ylabel('Feature Importance Score')
# Degree of correlation between the other variables and the player's value
features = [i for i in train.columns if i != "Value"]
target = "Value"
gbm0 = GradientBoostingRegressor(random_state=7)
modelfit(gbm0, train, features)
# Players with a wage of 0.8K were read in as 0. To fix this, values and wages below 1K are set
# to 1K (rounding up).
train.Value[train.Value==0]=1
train.Wage[train.Wage==0]=1
sns.distplot(np.log(train['Value']), fit=norm);
fig = plt.figure()
res = stats.probplot(np.log(train['Value']), plot=plt)
def ridge_regression(train, x, alpha):
ridgereg = Ridge(alpha=alpha,normalize=True)
ridgereg.fit(train[x],train['Value'])
y_pred = ridgereg.predict(train[x])
return(y_pred)
ridge = ridge_regression(train, x, 1e-20)
plt.plot(train['Value'],ridge,'.', color="blue")
plt.axis([0, 130000000, 0, 130000000])
plt.xlabel("Valor real")
plt.ylabel("Valor premeditado")
plt.show()
r_R = ridge_regression(test, x, 1e-20)
print((mean_squared_error(test['Value'],r_R))**(1/2))
ridgetest = ridge_regression(test, x, 1e-20)
plt.plot(test['Value'],ridgetest,'.', color="red")
plt.axis([0, 130000000, 0, 130000000])
plt.xlabel("Valor real")
plt.ylabel("Valor premeditado")
plt.show()
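# Hedged illustrative addition (not part of the original analysis): report the
# mean absolute error of the Ridge predictions on the test split as well,
# using the mean_absolute_error metric already imported above.
print(mean_absolute_error(test['Value'], ridgetest))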
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imports the spreadsheet containing the data used. This spreadsheet was downloaded from www.kaggle.com
Step2: Normalizes the 'Value' column (player value) and the 'Wage' column (player wage per week)
Step3: Shuffles and splits the spreadsheet in two: one containing 15000 players (train) and another with roughly 3000 players (test)
Step4: Choosing the features that will be used to estimate our target, which is "Value"
Step5: Checking how much Overall influences the player's Value
Step6: Checking how much Wage relates to the player's Overall
Step7: Uses the blue shading to check which variables most influence the 'Value' variable. The darker the blue of an intersection square, the stronger the correlation between the two variables.
Step8: Now we compute the correlation between the variables, with Value as the target. This time the chart is plotted as bars and shows the correlation value between each pair of variables
Step9: The first chart shows the frequency of player values. The second chart translates the first into points and fits a line through them. Note the run of values equal to zero in the second chart; this happens because player values and wages are highly discrepant (for example
Step10: Function that performs Ridge regression on the training set
Step11: Ridge regression on the test set
|
160 | <ASSISTANT_TASK:>
Python Code:
x = 5
y = 7
x2 = -3 # oops maybe the choice of variable names is not optimal
y2 = 17
x3 = x + x2
y3 = y + y2
print(x3, y3)
from math import sqrt
length_3 = sqrt(x3 * x3 + y3 * y3)
print(length_3)
length_1 = sqrt(x * x + y * y)
print(length_1)
def length(x, y):
return sqrt(x* x + y * y)
def vector_sum(vx1, vy1, vx2, vy2):
x = vx1 + vx2
y = vy1 + vy2
return x, y #Nice, there's no restriction on the amount of return values.
def scalar_prod(vx1, vy1, vx2, vy2):
    return (vx1 * vx2 + vy1 * vy2)  # dot product: x1*x2 + y1*y2
def length(vx1, vy1):
return sqrt(scalar_prod(vx1, vy1, vx1, vy1)) #hmmmm, a function of a function
print(length(3, 4))
def functionname( parameters ):
"function_docstring"
function_suite
return [expression]
def foo( s ):
print(s)
foo()
def foo(name, age):
print("name:", name)
print("age: ", age)
foo('Alice', 35)
print('----')
foo(19, 'Bob')
print('----')
foo(age = 78, name = 'Dave')
def foo(name, age = 23):
print("name:", name)
print("age: ", age)
foo('Alice')
foo('Bob', 29)
def vector_sum(vx1, vy1, vx2, vy2, vz1=None, vz2=None):
if vz1 == None or vz2 == None: # making sure that both values are set
return vx1 + vx2, vy1 + vy2
else:
return vx1 + vx2, vy1 + vy2, vz1 + vz2
print(vector_sum(1, 2, 3, 4))
print(vector_sum(1, 2, 3, 4, 8, 8))
import datetime as dt # What's that?
print(dt.datetime.now())
def log_time(message, time=dt.datetime.now()):
print("{0}: {1}".format(time.isoformat(), message))
log_time("message 1")
log_time("message 2")
log_time("message 3")
def foo(value, a=[]):
a.append(value)
return a
print(foo(1))
print(foo(2))
print(foo('uh oh'))
def foo(value, a=None):
if a == None:
a = []
a.append(value)
return a
print(foo(1))
print(foo(2))
print(foo('yeah!!!'))
def foo(arg1, *argtuple):
print(arg1)
for arg in argtuple:
print(arg)
foo('Hello World')
foo('x', 1, 'hello', 3.1415, '20')
foo()
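# Hedged illustrative addition: keyword arguments can be collected in a similar
# way with **, which gathers them into a dictionary.
def bar(*args, **kwargs):
    print(args)
    print(kwargs)
bar(1, 'hello', answer=42, name='Alice')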
class Vehicle:
pass
car = Vehicle()
print(car)
class Vehicle():
def __init__(self, number_of_wheels, number_of_seats, max_velocity):
self.number_of_wheels = number_of_wheels
self.number_of_seats = number_of_seats
self.max_velocity = max_velocity
car = Vehicle(4, 5, 200.0)
bike = Vehicle(2, 2, 50.0)
print(car.number_of_wheels)
print(bike.number_of_seats)
class Vehicle():
def __init__(self, number_of_wheels, number_of_seats, max_velocity):
self.number_of_wheels = number_of_wheels
self.number_of_seats = number_of_seats
self.max_velocity = max_velocity
def get_max_velocity(self):
return self.max_velocity
def make_noise(self):
print("vrummmmm")
car = Vehicle(4, 5, 200.0)
print(car.get_max_velocity())
print(car.max_velocity)
car.make_noise()
from math import cos, sin, sqrt
class MyVector():
def __init__(self, x=None, y=None, r=None, phi=None):
if x == None or y == None:
self.x = r * cos(phi)
self.y = r * sin(phi)
else:
self.x = x
self.y = y
def get_length(self):
return sqrt(self.x * self.x + self.y * self.y)
def scale_vector(self, a):
self.x *= a
self.y *= a
def add_vector(self, vector):
self.x += vector.x
self.y += vector.y
v1 = MyVector(10, 20)
print(v1.get_length())
v2 = MyVector(r=10, phi=2.7)
print(v2.get_length())
v1.add_vector(v2)
print(v1.get_length())
v1 = MyVector(10, 20)
v2 = MyVector(20, 10)
v3 = v1 + v2
from math import cos, sin, sqrt
class MyVector():
def __init__(self, x=None, y=None, r=None, phi=None):
if x == None or y == None:
self.x = r * cos(phi)
self.y = r * sin(phi)
else:
self.x = x
self.y = y
def get_length(self):
return sqrt(self.x * self.x + self.y * self.y)
def scale_vector(self, a):
self.x *= a
self.y *= a
def add_vector(self, vector):
self.x += vector.x
self.y += vector.y
def __add__(self, other):
x = self.x + other.x
y = self.y + other.y
return MyVector(x, y)
v1 = MyVector(10, 20)
v2 = MyVector(20, 10)
v3 = v1 + v2
print(v3.x, v3.y)
from math import cos, sin, sqrt
class MyVector():
def __init__(self, x, y):
self.x = x
self.y = y
def get_length(self):
return sqrt(self.x * self.x + self.y * self.y)
def scale_vector(self, a):
self.x *= a
self.y *= a
def add_vector(self, vector):
self.x += vector.x
self.y += vector.y
def __add__(self, other):
x = self.x + other.x
y = self.y + other.y
return MyVector(x, y)
def get_r(self):
return self.get_length()
class MyPolarVector(MyVector):
def __init__(self, r, phi):
self.r = r
MyVector.__init__(self, r * cos(phi), r * sin(phi))
def get_length(self):
print "inside MyPolarVector"
return self.r
def get_r(self):
return self.r
v1 = MyVector(10, 20)
v2 = MyPolarVector(12, 2.7)
print((v1 + v2).get_r())
print(v1.get_length())
print(v2.get_length())
print((v1 + v2).get_length())
my_list = [1, 2, 3, 4, 5]
sum = 0
for x in my_list:
sum += x
print(sum)
def do_add(a_list):
sum = 0
if type(a_list) is list:
for x in a_list:
sum += x
return sum
my_list = [1, 2, 3, 4, 5]
print(do_add(my_list))
import functools
my_list = [1, 2, 3, 4, 5]
def add(x, y):
return (x + y)
sum = functools.reduce(add, my_list)
# calculates add(add(add(add(1, 2), 3), 4), 5)
print(sum)
class ChangeList:
def __init__(self, a_list):
self.my_list = []
if type(a_list) is list:
self.my_list = a_list
def do_add(self):
self.my_sum = 0
for x in self.my_list:
self.my_sum += x
create_sum_obj = ChangeList([1, 2, 3, 4, 5])
create_sum_obj.do_add()
print(create_sum_obj.my_sum)
class ListNode:
def __init__(self, data):
"constructor to initiate this object"
# store data
self.data = data
# store reference (next item)
self.next = None
return
def contains_value(self, value):
"method to compare the value with the node data"
if self.data == value:
return True
else:
return False
node1 = ListNode('data point 1')
node2 = ListNode([15, 16 , 3.14])
node3 = ListNode(16.6)
class SingleLinkedList:
def __init__(self):
"constructor to initiate this object"
self.head = None
self.tail = None
return
def list_length(self):
pass
def output_list(self):
pass
def add_list_item(self, item):
pass
def unordered_search(self):
pass
def remove_list_item_by_id(self):
pass
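# Hedged sketch (illustrative addition): one possible way to fill in two of the
# skeleton methods above, reusing the ListNode objects created earlier.
class DemoLinkedList(SingleLinkedList):
    def add_list_item(self, item):
        "append a ListNode (or wrap a bare value) at the tail"
        if not isinstance(item, ListNode):
            item = ListNode(item)
        if self.head is None:
            self.head = item
        else:
            self.tail.next = item
        self.tail = item
        return
    def list_length(self):
        "count nodes by following the next references"
        count = 0
        node = self.head
        while node is not None:
            count += 1
            node = node.next
        return count
demo = DemoLinkedList()
demo.add_list_item(node1)
demo.add_list_item(node2)
demo.add_list_item(node3)
print(demo.list_length())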
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: And here is another one
Step2: In our example we want to apply some vector arithmetics.
Step3: Now, what is the length of this new vector?
Step4: How does this length compare to the length of the first vector?
Step5: Hm, that's kind of annoying.
Step6: What about other functions?
Step7: Looks like we can re-write our length function
Step8: Let's give it a try
Step9: Functions seem to be a powerful programming tool.
Step10: We have dealt with the actual function code.
Step11: keyword arguments
Step12: default arguments
Step13: The application of default arguments seems to be obvious.
Step14: Careful with the amount of code inside a function (7$\pm$2 rule)!
Step15: One more example
Step16: Here's how it should look
Step17: variable-length arguments
Step18: Procedural programming seems to be enourmos improvement over imperative programming.
Step19: Obviously this class is pretty boring. Nevertheless, we can now create objects of type Vehicle by instanciating
Step20: We just created an object that we called car of type Vehicle.
Step21: We call __init__ the constructor method.
Step22: We have know now all the basics to actually implement our vector problem $\Rightarrow$
Step23: That's almost everything we had defined before.
Step24: Wait, there's more!
Step25: That was a crash course in OO programming.
Step26: The focus of imperative programming is on how a program operates. It changes state information as needed in order to achieve a goal.
Step27: The procedural style relies on procedure calls to create modularized code.
Step28: The functional coding style treats everything like a math equation.
Step29: The object-oriented coding style is all about increasing the ability of applications to reuse code and making code easier to understand. The encapsulation that object-orientation provides allows developers to treat code as a black box.
Step30: Now we want to write a class that represents a linked list. We identify the following methods as required
|
161 | <ASSISTANT_TASK:>
Python Code:
# meld is a great visual difference program
# http://meldmerge.org/
# the following command relies on the directory structure on my computer
# tdd-demo comes from https://github.com/james-prior/tdd-demo/
!cd ~/projects/tdd-demo;git difftool -t meld -y 389df2a^ 389df2a
False or False
0 or False
False or 0
0 or 0
False or True
True or False
True or True
True or 1
1 or True
1 or 1
1 > 2
3 < 4
# This kind of expression using the "or" operator is very typical,
# comprising the vast majority of use.
1 > 2 or 3 < 4
'hello' or 'world'
'' or 'world'
'hello' or ''
'' or ''
'' or None
False or 3.14
'False' or 3.14
bool('False' or 3.14)
[] or {}
'' or []
'' or {}
'' or (1, 3)
'' or 'False'
'' or 'True'
'' or True
'' or False
values = (
None,
0,
0.0,
0j,
(),
[],
{},
set(),
False,
True,
True + True,
(True + True + True) / True,
1,
-1,
1.e-30,
'',
'False',
'True',
[],
[None], # This fools many people.
[0],
[0.0],
[0j],
[1],
[1, 2],
[[]], # This fools many people.
[{}],
[()],
[],
(),
(None,),
(0,),
(0.0,),
(0j,),
(1,),
(1, 2),
([],),
({},),
((),),
(),
{},
{None: None},
{False: None},
{'False': None},
set(),
{None},
{0},
{0.0},
{0j},
{1},
{1, 2},
{()},
)
for value in values:
print(repr(value), type(value))
print(bool(value))
print()
True + True
True / (True + True)
True // (True + True)
'' or 1
'' or 2
'fizz' or 3
'buzz' or 5
'fizz' or 6
'fizzbuzz' or 15
'' or 16
False or 0 or 0j or 0.0 or [] or {} or set() or None or ()
False or 0 or 0j or 0.0 or 'false' or [] or {} or set() or None or ()
from functools import reduce
a = (
False,
0,
0j,
0.0,
[],
{},
'look ma no hands',
set(),
None,
(),
)
reduce(lambda x, y: x or y, a)
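# Hedged illustrative addition: unlike the chained "or" expression, reduce()
# touches every element; a generator with next() stops at the first truthy one.
next((x for x in a if x), None)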
import operator
[s for s in dir(operator) if 'or' in s]
def foo(p=None):
p = p or [1, 2, 3, 4]
return p
foo(5)
foo()
def foo(p=[1, 2, 3, 4]):
return p
foo(3)
foo()
a = foo()
a[1] = 'hi mom'
a
foo()
def foo(p=None):
p = p or [1, 2, 3, 4]
return p
b = foo()
b
b[2] = 'this'
b
foo()
foo([1])
foo([])
foo(0)
def foo(p=None):
if p is None:
p = [1, 2, 3, 4]
return p
foo()
foo(None)
foo([1])
foo([])
foo(0)
'this' or 'that'
'give me liberty' or 'give me death'
False and 1
'False' and 1
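# Hedged illustrative addition: like "or", the "and" operator short-circuits and
# returns one of its operands rather than a strict boolean.
'' and 'never reached'
0 and 1 / 0  # the right side is never evaluated, so no ZeroDivisionError
'truthy' and 'this is the result'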
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Python's or
Step2: Python's concept of truthiness
Step3: Slowly scroll through the output of the following cell,
Step4: There was some confusion and disbelief
Step5: Now we get to how the "or" operator was used in fizzbuzz().
Step6: Now we get to a more serious discussion of when it is good to use
Step7: Pete Carswell asked about doing the above long expressions with a lambda,
Step8: Note that the reduce() evaluates all the elements of its second operand,
Step9: Unfortunately, I was not able to find an equivalent to the "or" operator.
Step10: It is too bad that there is not an or= operator.
Step11: Zach prefers his code above to code below,
Step12: The cell above changes the mutable default argument,
Step13: Zach's version does not suffer from the mutable default argument problem.
Step14: How can I screw up Zach's version?
Step15: That can be fixed with a traditional "is None" test.
Step16: Maybe a better name for this presentation would be 'this' or 'that'.
Step17: Zach reported that the "or" operator in Javascript works like Python.
|
162 | <ASSISTANT_TASK:>
Python Code:
# Get the parameters for Rprop of climin:
import climin
import GPy
# NOTE: `Optimizer` (subclassed below) is the Paramz/GPy optimizer base class;
# it is assumed to be available in this session.
climin.Rprop?
class RProp(Optimizer):
# We want the optimizer to know some things in the Optimizer implementation:
def __init__(self, step_shrink=0.5, step_grow=1.2, min_step=1e-06, max_step=1, changes_max=0.1, *args, **kwargs):
super(RProp, self).__init__(*args, **kwargs)
self.opt_name = 'RProp (climin)'
self.step_shrink = step_shrink
self.step_grow = step_grow
self.min_step = min_step
self.max_step = max_step
self.changes_max = changes_max
def opt(self, x_init, f_fp=None, f=None, fp=None):
# We only need the gradient of the
assert not fp is None
# Do the optimization, giving previously stored parameters
opt = climin.rprop.Rprop(x_init, fp,
step_shrink=self.step_shrink, step_grow=self.step_grow,
min_step=self.min_step, max_step=self.max_step,
changes_max=self.changes_max)
# Get the optimized state and transform it into Paramz readable format by setting
# values on this object:
# Important ones are x_opt and status:
for info in opt:
if info['n_iter']>=self.max_iters:
self.x_opt = opt.wrt
self.status = 'maximum number of function evaluations exceeded'
break
m = GPy.examples.regression.toy_rbf_1d_50(optimize=False, plot=False)
m
m.plot()
m.optimize(RProp(), messages=1)
m
m.plot()
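# Hedged illustrative addition: a quick sanity check after optimization --
# the model's log marginal likelihood should have improved relative to the
# unoptimized state shown earlier.
print(m.log_likelihood())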
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is all we need, GPy/Paramz will handle the rest for you
Step2: This is the model plot before optimization
Step3: And then the optimized state after running RProp
|
163 | <ASSISTANT_TASK:>
Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
def download(url, file):
Download file from <url>
:param url: URL to file
:param file: Local file path
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
    """
    Uncompress features and labels from a zip file
    :param file: The zip file to extract the data from
    """
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(X):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param X: The image data to be normalized
    :return: Normalized image data
    """
# TODO: Implement Min-Max scaling for grayscale image data
# feature range [a, b]
a, b = 0.1, 0.9
X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_scaled = X_std * (b - a) + a
return X_scaled
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
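# Hedged illustrative addition: the Min-Max scaling maps the smallest value to
# 0.1 and the largest to 0.9, with everything else scaled linearly in between.
print(normalize_grayscale(np.array([0, 128, 255])))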
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
print(os.getcwd())
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
features_count = 784
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), axis=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
# TODO: Find the best parameters for each configuration
epochs = 1
batch_size = 50
learning_rate = 0.01
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
# TODO: Set the epochs, batch_size, and learning_rate with the best parameters from problem 3
epochs = 5
batch_size = 50
learning_rate = 0.5
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: Figure: image/mean_variance.png (feature normalization)
Step6: Checkpoint
Step7: Figure: image/weight_biases.png (weights and biases)
Step8: Figure: image/learn_rate_tune.png (learning-rate tuning)
Step9: Test
|
164 | <ASSISTANT_TASK:>
Python Code:
x = y = 7
print(x,y)
x = y = 7
print(id(x))
print(id(y))
from lolviz import *
callviz(varnames=['x','y'])
name = 'parrt'
userid = name # userid now points at the same memory as name
print(id(name))
print(id(userid))
you = [1,3,5]
me = [1,3,5]
print(id(you))
print(id(me))
callviz(varnames=['you','me'])
you = [1,3,5]
me = [1,3,5]
print(you, me)
you[0] = 99
print(you, me)
you = [1,3,5]
me = you
print(id(you))
print(id(me))
print(you, me)
callviz(varnames=['you','me'])
you[0] = 99
print(you, me)
callviz(varnames=['you','me'])
you = [1,3,5]
me = you
callviz(varnames=['you','me'])
me = [9,7,5] # doesn't affect `you` at all
print(you)
print(me)
callviz(varnames=['you','me'])
X = [[1,2],[3,4]]
Y = X.copy() # shallow copy
callviz(varnames=['X','Y'])
X[0][1] = 99
callviz(varnames=['X','Y'])
print(Y)
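# Extra illustration (added, not in the original notebook): copy.deepcopy() also
# copies the nested lists, so the two structures become fully independent.
import copy
A = [[1, 2], [3, 4]]
B = copy.deepcopy(A)
A[0][1] = 99
print(A)  # [[1, 99], [3, 4]]
print(B)  # [[1, 2], [3, 4]] -- unaffected by the change to A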
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: But, did you know that they are both referring to the same 7 object? In other words, variables in Python are always references or pointers to data so the variables are not technically holding the value. Pointers are like phone numbers that "point at" phones but pointers themselves are not the phone itself.
Step2: Wow! They are the same. That number represents the memory location where Python has stored the shared 7 object.
Step3: Let's verify that the same thing happens for strings
Step4: Ok, great, so we are in fact sharing the same memory address to hold the string 'parrt' and both of the variable names point at that same shared space. We call this aliasing, in the language implementation business.
Step5: Those lists have the same value but live at different memory addresses. They are not aliased; they are not shared. Consequently, changing one does not change the other
Step6: On the other hand, let's see what happens if we make you and me share the same copy of the list (point at the same memory location)
Step7: Now, changing one appears to change the other, but in fact both simply refer to the same location in memory
Step8: Don't confuse changing the pointer to the list with changing the list elements
Step9: This aliasing of data happens a great deal when we pass lists or other data structures to functions. Passing list Quantity to a function whose argument is called data means that the two are aliased. We'll look at this in more detail in the "Visibility of symbols" section of Organizing your code with functions.
|
165 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.rc('xtick', labelsize=14)
plt.rc('ytick', labelsize=14)
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def plot_lines(df, subplots, title, xlabel, ylabel):
'''Generates one or more line plots from pandas dataframe'''
fig, ax = subplots
ax = df.plot.line(ax=ax)
ax.set_xlabel(xlabel, fontdict={'size' : 14})
ax.set_ylabel(ylabel, fontdict={'size' : 14})
ax.set_title(title, fontdict={'size' : 18})
ttl = ax.title
ttl.set_position([.5, 1.02])
ax.tick_params(axis='x', labelsize=14)
ax.tick_params(axis='y', labelsize=14)
trip_df = pd.read_csv('../input/all_trips.csv')
trip_df.columns = ['membership', 'bike_id', 'checkout_date', 'checkout_time', 'checkout_kiosk', 'checkin_kiosk', 'duration']
trip_df['checkout_kiosk'] = trip_df['checkout_kiosk'].replace('East 11th Street at Victory Grill', 'East 11th St. at Victory Grill')
trip_df['checkin_kiosk'] = trip_df['checkin_kiosk'].replace('East 11th Street at Victory Grill', 'East 11th St. at Victory Grill')
trip_df['checkout_kiosk'] = trip_df['checkout_kiosk'].replace('ACC - West & 12th Street', 'ACC - West & 12th')
trip_df['checkin_kiosk'] = trip_df['checkin_kiosk'].replace('ACC - West & 12th Street', 'ACC - West & 12th')
# Combine the date and time columns, use this as the index
def combine_date_time(df, new_col, date_col, time_col, set_index=True, drop=True):
'''Combines `date_col` and `time_col` into a single datetime column
INPUT: df - Dataframe to operate on
date_col - string name of the date column
time_col - string name of the time column
set_index - bool whether to set as index after combining
drop - bool whether to drop original columns afterwards
RETURNS: Transformed dataframe
'''
if new_col != df.index.name:
df[new_col] = pd.to_datetime(df[date_col] + ' ' + df[time_col])
# trip_df = trip_df.sort_values('datetime')
if set_index:
df = df.set_index(new_col, drop=True)
if drop:
df = df.drop([date_col, time_col], axis=1)
return df
trip_df = combine_date_time(trip_df, new_col='datetime', date_col='checkout_date', time_col='checkout_time')
print('Dataframe shape is {}'.format(trip_df.shape))
# print('Top rows:\n{}'.format(trip_df.head()))
# print('Bottom rows:\n{}'.format(trip_df.tail()))
print('Bikes dataframe date range from {} to {}'.format(trip_df.index[0], trip_df.index[-1]))
trip_df.resample('W').size().head()
plot_lines(trip_df.resample('W').size(), plt.subplots(1,1, figsize=(20,8)),
title='Weekly rentals', xlabel='', ylabel='Weekly rentals')
# Let's see how many different types of membership there are
memberships_df = trip_df.groupby('membership').size()
print('Found {} different memberships:\n'.format(memberships_df.shape[0]))
print(memberships_df)
def clean_memberships(df, col):
'''Cleans memberships by consolidating and converting to categories
INPUT: df - pandas Dataframe containing membership columns
col - column name to be consolidated
RETURNS: pandas DataFrame with consolidated memberships'''
# Replace the various memberships with their groupings
df[col] = df[col].astype(str)
df[col] = df[col].replace(['RideScout Single Ride','Try Before You Buy Special'], value='single')
df[col] = df[col].replace(['^24-Hour.*$', '^Explorer.*$', '^Walk Up$'], value='day', regex=True)
df[col] = df[col].replace(['^ACL Weekend Pass.*$', '^FunFunFun.*$', '^Weekender.*$'], value='weekend', regex=True)
df[col] = df[col].replace(['^7-Day.*$'], value='week', regex=True)
df[col] = df[col].replace(['^Local30.*$'], value='month', regex=True)
df[col] = df[col].replace(['^Semester.*$'], value='semester', regex=True)
df[col] = df[col].replace(['^Annual.*$', '^Local365.*$', 'Republic Rider.*$', '^Membership: pay once one-year.*$'], value='year', regex=True)
df[col] = df[col].replace(['^Founding Member.*$', '^.*Founder.*$'], value='triannual', regex=True)
# Drop the remaining trips (PROHIBITED and RESTRICTED)
drop_mask = (df['membership'] == 'PROHIBITED') | (df['membership'] == 'RESTRICTED')
df = df[~drop_mask]
# Finally convert to categorical
df[col] = df[col].astype('category')
return df
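# Quick sanity check on a toy frame (added illustration, not part of the pipeline):
_toy = pd.DataFrame({'membership': ['Walk Up', '7-Day Membership', 'Local365', 'Annual Pass']})
print(clean_memberships(_toy, 'membership')['membership'].tolist())
# expected: ['day', 'week', 'year', 'year']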
trip_df = clean_memberships(trip_df, 'membership')
print(trip_df.groupby('membership').size())
print(trip_df.info())
# Show histogram of trip duration by membership type
# g = sns.FacetGrid(trip_df, row='membership', sharey=False, sharex=False, margin_titles=True, size=4)
# g.map(plt.hist, 'duration')
trip_df
membership_order = ['single', 'day', 'weekend', 'week', 'month', 'semester', 'year', 'triannual']
def plot_boxplot(df, order, x, y, figsize, title, xlabel, ylabel):
'''Plots a boxplot using given '''
fig, ax = plt.subplots(1,1, figsize=figsize)
ax = sns.boxplot(data=df, x=x, y=y, order=order)
ax.set_xlabel(xlabel, fontdict={'size' : 14})
ax.set_ylabel(ylabel, fontdict={'size' : 14})
ax.set_title(title, fontdict={'size' : 18})
ax.tick_params(axis='x', labelsize=14)
ax.tick_params(axis='y', labelsize=14)
ttl = ax.title
ttl.set_position([.5, 1.02])
# Log-transform the durations to compress range
trip_df['log_duration'] = trip_df['duration'].apply(np.log10)
plot_boxplot(trip_df, order=membership_order, x='membership', y='log_duration', figsize=(20,10),
title='Trip duration by membership type', xlabel='Membership', ylabel='Trip duration (log10 minutes)')
# sns.boxplot(data=trip_df, x='membership', y='duration')
# g = sns.FacetGrid(tips, row="sex", col="time", margin_titles=True)
# bins = np.linspace(0, 60, 13)
# g.map(plt.hist, "total_bill", color="steelblue", bins=bins, lw=0)
# Show some of the longest trips
trip_df.sort_values('duration', ascending=False).head(10)
# What are the stats for single memberships?
trip_df.loc[trip_df['membership'] == 'single', 'duration'].describe()
# todo - Add in the trips by membership type
plot_df = trip_df.copy()
plot_df['year'] = plot_df.index.year
plot_df = plot_df['2014-01-01':'2016-12-31'].groupby(['year', 'membership']).size().reset_index(name='count')
plot_df = plot_df.pivot_table(index='year', columns='membership', values='count')
plot_df = plot_df.fillna(0)
def plot_bar(df, size, title, xlabel, ylabel):
'''Plots a bar graph of the dataframe '''
palette = sns.color_palette('Set2', len(df.columns)) # Don't repeat colours
fig, ax = plt.subplots(1, 1, figsize=size)
ax = df.plot.bar(ax=ax, color=palette, rot=0)
ax.set_xlabel(xlabel, fontdict={'size' : 14})
ax.set_ylabel(ylabel, fontdict={'size' : 14})
ax.set_title(title, fontdict={'size' : 18})
ttl = ax.title
ttl.set_position([.5, 1.02])
ax.tick_params(axis='x', labelsize=14)
ax.tick_params(axis='y', labelsize=14)
ax.legend(fontsize = 14)
plot_bar(plot_df, (20,10), title='Trips by membership type and year', xlabel='Year', ylabel='Trip count')
# Find all the bike_id values that aren't numeric
print('Checking for non-numeric bike ids:')
text_row_mask = trip_df['bike_id'].str.contains('\D')
text_bikes_df = trip_df[text_row_mask]
bike_str_ids = text_bikes_df['bike_id'].unique()
print('Found non-numeric bike ids: {}'.format(bike_str_ids))
# Check how many `bike_id`s are non-numeric and drop them
n_rows = trip_df.shape[0]
n_str_rows = np.sum(text_row_mask)
pct_str_rows = (n_str_rows / n_rows) * 100.0
print('Total rows: {}\n'.format(n_rows))
print('Total non-numeric `bike_id` values: {} or {:.1f}% of rows\n'.format(n_str_rows, pct_str_rows))
print('Non-numeric `bike_id` counts: \n{}'.format(text_bikes_df.groupby('bike_id').size()))
if n_str_rows != 0:
trip_df = trip_df[~text_row_mask]
assert np.sum(trip_df['bike_id'].str.contains('\D')) == 0, 'Error - still non-numeric bike_ids left !'
print('Max bike ID is {}'.format(trip_df['bike_id'].max()))
trip_df['bike_id'] = trip_df['bike_id'].astype(np.uint16)
trip_df.head()
# Let's see how many times each of the bikes were rented out
def plot_hist(df_col, bins, size, title, xlabel, ylabel):
'''Plots a histogram of the dataframe column'''
fig, ax = plt.subplots(1, 1, figsize=size)
ax = df_col.plot.hist(ax=ax, bins=bins)
ax.set_xlabel(xlabel, fontdict={'size' : 14})
ax.set_ylabel(ylabel, fontdict={'size' : 14})
ax.set_title(title, fontdict={'size' : 18})
ttl = ax.title
ttl.set_position([.5, 1.02])
ax.tick_params(axis='x', labelsize=14)
ax.tick_params(axis='y', labelsize=14)
bikes_df = trip_df.groupby('bike_id').size()
plot_hist(bikes_df, bins=50, size=(20,10),
title='Trip Count by Bike',
xlabel='Number of trips per Bike',
ylabel='Number of bikes')
bikes_df.describe()
# Create a set of all the checkout and checkin kiosks.
# If the sizes are different we have some checkin or checkout only ones.
checkins = set(trip_df['checkin_kiosk'].unique())
checkouts = set(trip_df['checkout_kiosk'].unique())
print('Number of unique checkin kiosks: {}, checkout kiosks: {}'.format(len(checkins), len(checkouts)))
# Hmm ! Let's see which stations are in checkin and checkout, and which are in one only
def venn_stats(left, right, verbose=False, left_name=None, right_name=None):
'''Creates Venn Diagram stats for two sets, left and right
INPUTS: left = set of items on left
right = set of items on right
verbose = bool to print out set overlap and count info
left_name = string used if verbose == True. Describes left set
right_name = string used if verbose == True. Describes right set
'''
left_only = left - right
left_and_right = left & right
right_only = right - left
if verbose:
print('{} size = {}, {} size = {}'.format(left_name, len(left), right_name, len(right)))
print('\nIntersection of {} and {} ({}):\n{}'.format(left_name, right_name, len(left_and_right), left_and_right))
print('\n{}-only ({}):\n{}'.format(left_name, len(left_only), left_only))
print('\n{}-only ({}):\n{}'.format(right_name, len(right_only), right_only))
return (left_only, left_and_right, right_only)
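# Tiny illustration with toy sets (added, not part of the station cleaning itself):
# returns (left-only, intersection, right-only).
print(venn_stats({1, 2, 3}, {2, 3, 4}, left_name='left', right_name='right'))
# expected: ({1}, {2, 3}, {4})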
venn_stations = venn_stats(checkouts, checkins,
verbose=True, left_name='checkout', right_name='checkin')
checkout_only_stations, common_stations, checkin_only_stations = venn_stations
# Let's deal with the checkin and checkout only stations
checkout_only_mask = trip_df['checkout_kiosk'].isin(checkout_only_stations)
checkout_only_count = np.sum(checkout_only_mask)
all_rows_count = trip_df.shape[0]
checkout_only_pct = (checkout_only_count / all_rows_count) * 100.0
checkout_only_df = trip_df[checkout_only_mask]
print('Number of checkout-only rows: {}, {:.4f}% of total'.format(checkout_only_count, checkout_only_pct))
if checkout_only_count > 0:
trip_df = trip_df[~checkout_only_mask]
print('Trips Dataframe shape is {}'.format(trip_df.shape))
checkout_only_df.groupby('checkout_kiosk').size()
checkin_only_mask = trip_df['checkin_kiosk'].isin(checkin_only_stations)
checkin_only_count = np.sum(checkin_only_mask)
all_rows_count = trip_df.shape[0]
checkin_only_pct = (checkin_only_count / all_rows_count) * 100.0
checkin_only_df = trip_df[checkin_only_mask]
print('Number of checkin-only rows: {}'.format(checkin_only_df.shape[0]))
if checkin_only_count > 0:
trip_df = trip_df[~checkin_only_mask]
print('Trips Dataframe shape is {}'.format(trip_df.shape))
checkin_only_df.groupby('checkin_kiosk').size()
# Now all stations are common in checkin and checkout
checkouts_df = trip_df.groupby('checkout_kiosk').size().to_frame('checkouts').reset_index()
checkins_df = trip_df.groupby('checkin_kiosk').size().to_frame('checkins').reset_index()
station_df = pd.merge(checkins_df, checkouts_df, left_on='checkin_kiosk', right_on='checkout_kiosk')
station_df = station_df.drop('checkout_kiosk', axis=1)
station_df.columns = ['name', 'checkins', 'checkouts']
station_df['total'] = station_df['checkins'] + station_df['checkouts']
station_df = station_df.sort_values('total', ascending=False).reset_index(drop=True)
station_df.head()
# Create a bar plot of the checkins and checkouts per station
def plot_bar(df, x, y, size, title, xlabel, ylabel):
'''Plots a bar-graph of dataframe column'''
fig, ax = plt.subplots(1, 1, figsize=size)
ax = df.plot.bar(ax=ax, x=x, y=y)
ax.set_xlabel(xlabel, fontdict={'size' : 14})
ax.set_ylabel(ylabel, fontdict={'size' : 14})
ax.set_title(title, fontdict={'size' : 18})
ttl = ax.title
ttl.set_position([.5, 1.02])
ax.tick_params(axis='x', labelsize=14)
ax.tick_params(axis='y', labelsize=14)
ax.legend(fontsize = 14)
plot_bar(station_df, x='name', y=['checkins', 'checkouts'], size=(20,10),
title='Checkins and checkouts by station',
xlabel='Station', ylabel='Checkins / Checkouts')
# Now let's print out all the stations, and drop ones which seem temporary
def remove_rows(df, col, words, verbose=False):
'''Removes rows containing words given
INPUT: df - dataframe
words - iterable containing words
RETURNS: Dataframe with rows containing `words` removed
'''
new_df = df.copy()
for word in words:
drop_mask = new_df[col].str.contains(word)
drop_count = np.sum(drop_mask)
drop_df = new_df[drop_mask]
print('Dropping {} rows containing {}:\n{}'.format(drop_count, word, drop_df[col]))
if (drop_count > 0):
new_df = new_df[~drop_mask]
return new_df.reset_index(drop=True)
station_df = remove_rows(station_df, 'name',
['MapJam', 'Mobile', 'Shop', 'Marketing', 'Re-branding',
'Customer Service', 'Main Office'],
verbose=True)
station_df = station_df.sort_values('name').reset_index(drop=True)
station_df['station_id'] = range(station_df.shape[0])
station_df['station_id'] = station_df['station_id'] + 1
station_df.head()
station_df = station_df.sort_values('total', ascending=False)
plot_bar(station_df, x='name', y=['checkins', 'checkouts'], size=(20,10),
title='Checkins and checkouts by station',
xlabel='Station', ylabel='Checkins / Checkouts')
import requests
import re
def parse_stations_html(url, verbose=False):
'''Parses an HTML file at url, returning a dictionary of matches
INPUT: url string to parse
RETURNS: dictionary with lat/lon key, and station info as value
'''
LAT_IDX = 0
LONG_IDX = 1
STAT_NAME = 0
STAT_ADDRESS = 1
STAT_BIKES = 2
STAT_DOCKS = 3
date_re = re.compile('.*stations_(\d{4}-\d{2}-\d{2}).*\.html')
time_re = re.compile('.*stations_\d{4}-\d{2}-\d{2}_(\d{2}:\d{2}:)\d{2}.*\.html')
# The `Convention Center / 4th St. @ MetroRail` station has a bug in the HTML.
station_re = re.compile('^var marker = new createMarker\(point, \"<div class=\'markerTitle\'>'
'<h3>(\w.*)</h3></div><div class=\'markerPublicText\'><.+></div>'
'<div class=\'markerAddress\'>(\w.*)</div><div class=\'markerAvail\'>'
'<div style=\'float: left; width: 50%\'><h3>(\d+)</h3>Bikes</div>'
'<div style=\'float: left; width: 50%\'><h3>(\d+)</h3>Docks</div></div>\".*$')
latlong_re = re.compile('var point = new google\.maps\.LatLng\((.+), (.+)\);')
# Dictionary to store stations
stations = dict()
r = requests.get(url, stream=True)
location_count = 0
station_count = 0
lat = -1
lon = -1
def parse_latlon(line, regex):
'''Helper function to parse latitude and longitude to tuple'''
match = regex.match(line)
if (match != None):
latitude = float(regex.match(line).groups()[LAT_IDX])
longitude = float(regex.match(line).groups()[LONG_IDX])
latlon = (latitude, longitude)
return latlon
return None
def parse_station(line, regex):
'''Helper function to parse station info into dict'''
match = regex.match(line)
if (match != None):
name = str(regex.match(line).groups()[STAT_NAME])
address = str(regex.match(line).groups()[STAT_ADDRESS].replace('<br />', ', '))
bikes = int(regex.match(line).groups()[STAT_BIKES])
docks = int(regex.match(line).groups()[STAT_DOCKS])
new_station = dict()
new_station['name'] = name
new_station['address'] = address
return new_station
return None
for line in r.iter_lines():
if line:
decoded_line = line.decode('utf-8')
latlon = parse_latlon(decoded_line, latlong_re)
if latlon is not None:
location_count += 1
lat, lon = latlon
if verbose:
print('Found location {}: lat {}, lon {}'.format(location_count, latlon[0], latlon[1]))
continue
new_station = parse_station(decoded_line, station_re)
if new_station is not None:
station_count += 1
new_station['lat'] = lat
new_station['lon'] = lon
stations[station_count] = new_station
if verbose:
print('Adding station #{}, {}'.format(station_count, new_station['name']))
assert station_count == location_count, 'Error - found {} locations, {} stations'.format(station_count, location_count)
return stations
web_stations = parse_stations_html('https://austin.bcycle.com/stations/station-locations', verbose=False)
web_stations_df = pd.DataFrame.from_dict(web_stations, orient='index')
web_stations_df = web_stations_df.reset_index()
web_stations_df = web_stations_df.rename(columns={'index' : 'station_id'})
web_stations_df = web_stations_df[['station_id', 'name', 'address', 'lat', 'lon']]
print('Current website contains {} stations. Top 6:'.format(web_stations_df.shape[0]))
web_stations_df.head()
# Create a dataframe of stations which are in the bike trips DF but not on website
all_stations_df = station_df.copy()
missing_stations_df = all_stations_df[~all_stations_df['name'].isin(web_stations_df['name'])]
missing_stations_df = missing_stations_df['name'].reset_index()
# Number the missing stations starting from the last web scraped one
start_index = web_stations_df.shape[0] + 1
stop_index = start_index + missing_stations_df.shape[0]
missing_station_ids = range(start_index, stop_index)
missing_stations_df['station_id'] = missing_station_ids
missing_stations_df = missing_stations_df[['station_id', 'name']]
print('There are {} missing stations'.format(missing_stations_df.shape[0]))
missing_stations_df
from pygeocoder import Geocoder
import re
def geocode(name, verbose=False, dry_run=False):
'''Tries to geocode a location, returns None if no matches
INPUT: name - string containing the location
RETURNS: tuple of (latitude, longitude) if successful, None if not
'''
name = re.sub('^ACC - ', '', name)
name = re.sub('^West & ', 'West Ave & ', name)
name = re.sub('at the \D.*$', '', name)
name = re.sub('^Convention Center/', '', name)
name = re.sub('^State Parking Garage @', '', name)
name = re.sub('Zilker Park West', 'Zilker Park', name)
for end in ('rd', 'st', 'th'):
name = re.sub(end + '$', end + ' Street', name)
name += ', Austin TX' # Add this on the end to help !
if dry_run:
print('Looking up {}'.format(name))
return name
try:
result = Geocoder.geocode(name)
if verbose:
print('Returned {} geocode as {}'.format(name, result.coordinates))
return result.coordinates
except Exception as e:
print('** Error finding geocode for {}'.format(name))
return None
missing_stations_df['latlon'] = missing_stations_df['name'].apply(lambda x: geocode(x, verbose=True))
missing_stations_df
import folium
def split_position(df, tuple_col, lat_col, lon_col, drop=True):
'''Splits a (latitude, longitude) tuple into separate columns
INPUT: df - dataframe to operate on
tuple_col - name of the (lat, lon) tuple column
lat_col - name of the new split latitude column
lon_col - name of the new split longitude column
RETURNS: Dataframe with new columns
'''
if tuple_col in df.columns:
df[lat_col] = df[tuple_col].apply(lambda x: x[0])
df[lon_col] = df[tuple_col].apply(lambda x: x[1])
if drop:
df = df.drop(tuple_col, axis=1)
return df
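# Tiny illustration (added, not part of the pipeline): split a (lat, lon) tuple column.
_demo = pd.DataFrame({'name': ['toy station'], 'latlon': [(30.27, -97.74)]})
print(split_position(_demo, 'latlon', 'lat', 'lon'))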
def map_plot(df, verbose=False):
'''Plots stations on a map from a dataframe'''
min_lat = df['lat'].min()
max_lat = df['lat'].max()
min_lon = df['lon'].min()
max_lon = df['lon'].max()
center_lat = min_lat + (max_lat - min_lat) / 2.0
center_lon = min_lon + (max_lon - min_lon) / 2.0
if verbose:
print('Plotting map centred at ({}, {})'.format(center_lat, center_lon))
# Plot map using the B&W Stamen Toner tiles centred on BCycle stations
map = folium.Map(location=(center_lat, center_lon), zoom_start=14,
tiles='Stamen Toner', control_scale=True)
# Add markers to the map for each station. Click on them to see their name
for station in df.iterrows():
stat=station[1]
folium.Marker([stat['lat'], stat['lon']], popup=stat['name'],
icon=folium.Icon(icon='info-sign')
).add_to(map)
map.save('stations.html')
return map
map_plot(split_position(missing_stations_df, 'latlon', 'lat', 'lon'))
def rev_geocode(latlon, verbose=False):
'''Tries to reverse geocode a latitude and longitude, returns None if no matches
INPUT: latlon - 2-tuple containing (latitude, longitude)
RETURNS: String with address if found
'''
try:
result = Geocoder.reverse_geocode(latlon[0], latlon[1])
if verbose:
print('Returned {} geocode as {}'.format(latlon, result))
return result
except Exception as e:
print('** Error finding geocode for {}: {}'.format(latlon, e))
return None
missing_stations_df['address'] = missing_stations_df['latlon'].apply(lambda x: rev_geocode(x, verbose=True))
missing_stations_df.head()
missing_stations_df = split_position(missing_stations_df, 'latlon', 'lat', 'lon')
missing_stations_df.head()
all_stations_df = pd.concat((web_stations_df, missing_stations_df), axis=0)
all_stations_df = all_stations_df.reset_index(drop=True)
print('All stations count: {}'.format(all_stations_df.shape[0]))
all_stations_df.head()
n_trips = trip_df.shape[0]
print('Before normalizing, bikes_df has {} rows'.format(n_trips))
stations = set(all_stations_df['name'])
print('{} stations in station table'.format(len(stations)))
def venn_stats_df(df, left_col, right_col, verbose=False):
'''Creates Venn Diagram stats for two sets, left and right
INPUTS: df - Dataframe with columns to check for overlaps
left_col = Dataframe column to use as left items
right_col = Dataframe column to use as right items
verbose = bool to print out set overlap and count info
'''
left = set(df[left_col].unique())
    right = set(df[right_col].unique())
left_only = left - right
left_and_right = left & right
right_only = right - left
if verbose:
print('{} size = {}, {} size = {}'.format(left_col, len(left), right_col, len(right)))
print('\nIntersection of {} and {} ({}):\n{}'.format(left_col, right_col, len(left_and_right), left_and_right))
print('\n{}-only ({}):\n{}'.format(left_col, len(left_only), left_only))
print('\n{}-only ({}):\n{}'.format(right_col, len(right_only), right_only))
return (left_only, left_and_right, right_only)
l, m, r = venn_stats_df(trip_df, left_col='checkin_kiosk', right_col='checkout_kiosk', verbose='True')
bike_stations = m
l, m, r = venn_stats(bike_stations, stations, left_name='bike_stations', right_name='station_table', verbose=True)
bike_stations_only = l
bike_stations_only_checkin_mask = trip_df['checkin_kiosk'].isin(bike_stations_only)
bike_stations_only_checkout_mask = trip_df['checkout_kiosk'].isin(bike_stations_only)
bike_stations_only_mask = bike_stations_only_checkin_mask | bike_stations_only_checkout_mask
bike_stations_only_count = np.sum(bike_stations_only_mask)
n_dropped_trips = n_trips - bike_stations_only_count
print('Pre-normalize row count: {}, post-normalize: {}'.format(n_trips, n_dropped_trips))
norm_trip_df = pd.merge(trip_df.reset_index(), all_stations_df[['name', 'station_id']], left_on='checkout_kiosk', right_on='name')
norm_trip_df = pd.merge(norm_trip_df, all_stations_df[['name', 'station_id']], left_on='checkin_kiosk', right_on='name')
norm_trip_df = norm_trip_df[['datetime', 'membership', 'bike_id', 'station_id_x', 'station_id_y', 'duration']]
norm_trip_df = norm_trip_df.rename(columns={'station_id_x' : 'checkout_id', 'station_id_y' : 'checkin_id'})
norm_trip_df = norm_trip_df.sort_values('datetime')
norm_trip_df = norm_trip_df.set_index('datetime', drop=True)
# norm_trip_df
print('After normalizing, bikes_df has {} rows'.format(norm_trip_df.shape[0]))
print('\nNull columns report:\n{}'.format(norm_trip_df.isnull().sum()))
# Save out the trips and stations dataframe
norm_trip_df.to_csv('../input/all_trips_clean.csv')
all_stations_df.to_csv('../input/all_stations_clean.csv', index=False)
norm_trip_df.info()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Helper functions
Step2: Load raw trip data
Step3: Processing Time and Date
Step4: Plotting weekly rentals
Step5: The rentals show that over the three-year period the number of rentals increases slightly: 2014 averages around 3000 rentals per week, 2015 just under 4000, and 2016 over 4000. There are also monthly variations, presumably due to the weather.
Step6: We'll use some regexes to combine the categories; to make this more readable, I included '...' below to show that a pattern matches any following characters.
Step7: Visualizing membership types
Step8: Wow ! There are some huge ranges of trip durations here. Even after applying a log10 transformation to the values, there are still many outliers above the third quartile. This shows a heavily right-skewed distribution. There are also a lot of trips that are very short (0 in log10 units is 1 minute). These will need cleaning up.
Step9: There are some common patterns to the ultra-long rentals. A lot of them have Stolen or Missing as their checkin kiosk, which shows they're not really trips but a way of recording theft. There are also some Repair Shop, Shop, and Main Office trips which are probably a bike needing maintenance.
Step10: The single membership trip durations are very heavily right skewed. 25% of the trips are 1 minute or less (!) - either people with single memberships are really fast at cycling, or they're just taking a very short trip.
Step11: This plot contains quite a bit of information. Remember that this is the count of trips by membership type, not the number of memberships sold of each type.
Step12: The histogram above shows that the most common number of trips for a given bike is around 1427. The distribution of trips per bike is left-skewed, with the Inter-Quartile-Range from 1264 to 1534 trips per bike. These bikes must be well maintained to make well over a thousand trips per bike !
Step13: Checkout-only stations
Step14: The stations above appear only in the checkout_kiosk column, and never in the checkin_kiosk column. There are only 8 such rows, all leaving from 'Fantasy Zilker' in the first 2 weekends of October 2016. I suspect this might be a special event held in Zilker Park. As there are only 8 of these rows, we can drop them from the dataset.
Step15: There are only 69 checkin-only rows, from one of the three types
Step16: This is a very busy plot, and too condensed to easily read the station names. But you can see the rough distribution of checkins and checkouts from the busiest stations (City Hall, Riverside @ S. Lamar, 2nd & Congress, etc) down to the quietest stations ('Marketing Event', 'Mobile Station @ Boardwalk Opening', 'Re-branding'). As you get to the quieter stations, they seem to be temporary stations at events, or codenames for maintenance on the bikes
Step17: Re-plotting checkins and checkouts by station (after dropping invalid stations)
Step18: Adding metadata to stations dataframe
Step19: Looking up remaining station locations
Step20: Checking geocodes on a map
Step21: Reverse geocoding to find address from latitude and longitude
Step22: Splitting latitude and longitude, recombining into web_stations
Step23: Combining stations back together into single table
Step24: Normalizing bikes and trips into separate tables
Step25: Final table splitting and replacement with station_id
|
166 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
! cargo run --release --example clv > clv.csv
df = np.arccos(pd.read_csv("clv.csv"))
for col in df.columns:
plt.figure()
plt.title(col)
df[col].hist(bins=100)
plt.xlim(0, np.pi)
plt.yscale("log")
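# Added numeric summary (not in the original): the smallest angle per CLV pair
# gives a quick measure of how close the vectors come to tangency.
print(df.min())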
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A sample script that calculates the CLVs of the Lorenz 63 model is placed at examples/clv.rs
Step2: Tangency of CLVs
|
167 | <ASSISTANT_TASK:>
Python Code:
# Author: Alexandre Barachant <alexandre.barachant@gmail.com>
# Jean-Remi King <jeanremi.king@gmail.com>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import Epochs
from mne.decoding import SPoC
from mne.datasets.fieldtrip_cmc import data_path
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict
# Define parameters
fname = data_path() + '/SubjectCMC.ds'
raw = mne.io.read_raw_ctf(fname)
raw.crop(50., 250.).load_data() # crop for memory purposes
# Filter muscular activity to only keep high frequencies
emg = raw.copy().pick_channels(['EMGlft'])
emg.filter(20., None, fir_design='firwin')
# Filter MEG data to focus on beta band
raw.pick_types(meg=True, ref_meg=True, eeg=False, eog=False)
raw.filter(15., 30., fir_design='firwin')
# Build epochs as sliding windows over the continuous raw file
events = mne.make_fixed_length_events(raw, id=1, duration=.250)
# Epoch length is 1.5 second
meg_epochs = Epochs(raw, events, tmin=0., tmax=1.500, baseline=None,
detrend=1, decim=8)
emg_epochs = Epochs(emg, events, tmin=0., tmax=1.500, baseline=None)
# Prepare classification
X = meg_epochs.get_data()
y = emg_epochs.get_data().var(axis=2)[:, 0] # target is EMG power
# Classification pipeline with SPoC spatial filtering and Ridge Regression
spoc = SPoC(n_components=2, log=True, reg='oas', rank='full')
clf = make_pipeline(spoc, Ridge())
# Define a two fold cross-validation
cv = KFold(n_splits=2, shuffle=False)
# Run cross validaton
y_preds = cross_val_predict(clf, X, y, cv=cv)
# Plot the True EMG power and the EMG power predicted from MEG data
fig, ax = plt.subplots(1, 1, figsize=[10, 4])
times = raw.times[meg_epochs.events[:, 0] - raw.first_samp]
ax.plot(times, y_preds, color='b', label='Predicted EMG')
ax.plot(times, y, color='r', label='True EMG')
ax.set_xlabel('Time (s)')
ax.set_ylabel('EMG Power')
ax.set_title('SPoC MEG Predictions')
plt.legend()
mne.viz.tight_layout()
plt.show()
spoc.fit(X, y)
spoc.plot_patterns(meg_epochs.info)
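# Added sanity check (an extra step, not part of the original MNE example):
# quantify how well the cross-validated predictions track the true EMG power.
import numpy as np
print('Correlation(true, predicted EMG power) = %0.2f' % np.corrcoef(y, y_preds)[0, 1])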
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot the contributions to the detected components (i.e., the forward model)
|
168 | <ASSISTANT_TASK:>
Python Code:
!head -n12 $LISA_HOME/logging.conf
!head -n30 $LISA_HOME/logging.conf | tail -n5
import logging
from conf import LisaLogging
LisaLogging.setup(level=logging.INFO)
from env import TestEnv
te = TestEnv({
'platform' : 'linux',
'board' : 'juno',
'host' : '10.1.210.45',
'username' : 'root'
})
target = te.target
tests_conf = {
"confs" : [
{
"tag" : "base",
"flags" : "ftrace",
"sched_features" : "NO_ENERGY_AWARE",
"cpufreq" : {
"governor" : "performance",
},
"files" : {
'/proc/sys/kernel/sched_is_big_little' : '0',
'!/proc/sys/kernel/sched_migration_cost_ns' : '500000'
},
}
]
}
from trace import Trace
import json
with open('/home/patbel01/Code/lisa/results/LisaInANutshell_Backup/platform.json', 'r') as fh:
platform = json.load(fh)
trace = Trace(platform,
'/home/patbel01/Code/lisa/results/LisaInANutshell_Backup/trace.dat',
events=['sched_switch']
)
logging.info("%d tasks loaded from trace", len(trace.getTasks()))
logging.info("The rt-app task in this trace has these PIDs:")
logging.info(" %s", trace.getTasks()['rt-app'])
!cat $LISA_HOME/libs/utils/platforms/pixel.json
from env import TestEnv
te = TestEnv({
'platform' : 'android',
'board' : 'pixel',
'ANDROID_HOME' : '/home/patbel01/Code/lisa/tools/android-sdk-linux/'
}, force_new=True)
target = te.target
!tree -L 1 ~/Code/lisa/ipynb
!tree -L 1 ~/Code/lisa/ipynb/examples
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Each module has a unique name which can be used to assign a priority level for messages generated by that module.
Step2: The default logging level for a notebook can also be easily configured using these few lines
Step3: Removed Juno/Juno2 distinction
Step4: Executor Module
Step5: can be used to run a test where the platform is configured to
Step6: Android Support
Step7: Added UiBench workload
Step8: This folder is configured to be ignored by git, thus it's the best place to place your work-in-progress notebooks.
|
169 | <ASSISTANT_TASK:>
Python Code:
# import and check version
import tensorflow as tf
# tf can be really verbose
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
# a small sanity check, does tf seem to work ok?
hello = tf.constant('Hello TF!')
sess = tf.Session()
print(sess.run(hello))
sess.close()
a = tf.constant(3.0, dtype=tf.float32) # special type of tensor
b = tf.constant(4.0) # also tf.float32 implicitly
total = a + b
print(a)
print(b)
print(total)
# types need to match
try:
tf.constant(3.0, dtype=tf.float32) + tf.constant(4, dtype=tf.int32)
except TypeError as te:
print(te)
# https://www.tensorflow.org/api_docs/python/tf/dtypes/cast
a = tf.constant(3, dtype=tf.int32)
b = tf.cast(tf.constant(4.0, dtype=tf.float32), tf.int32)
int_total = a + b
int_total
# sessions need to be closed in order not to leak ressources, this makes sure close is called in any case
with tf.Session() as sess:
print(sess.run(total))
# print(sess.run(int_total))
# let's see what compute devices we have available, hopefully a GPU
# if you do not see it, switch on under Runtime->Change runtime type
with tf.Session() as sess:
devices = sess.list_devices()
for d in devices:
print(d.name)
tf.test.gpu_device_name()
# GPU requires nvidia cuda
tf.test.is_built_with_cuda()
with tf.device("/device:XLA_CPU:0"):
with tf.Session() as sess:
print(sess.run(total))
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
z = x + y
with tf.Session() as sess:
try:
print(sess.run(z))
except tf.errors.InvalidArgumentError as iae:
print(iae.message)
with tf.Session() as sess:
print(sess.run(z, feed_dict={x: 3.0, y: 4.5}))
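# Added illustration (not in the original): the same placeholders accept arrays,
# so a whole batch of values can be fed and computed element-wise in one run.
with tf.Session() as sess:
    print(sess.run(z, feed_dict={x: [1.0, 2.0, 3.0], y: [10.0, 20.0, 30.0]}))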
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First define a computational graph composed of operations and tensors
Step2: Then use a session to execute the graph
Step3: Graphs can be executed on CPU, GPU, and even TPU
Step4: Feeding data to a graph
|
170 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from pymc3 import Model, Normal, Lognormal, Uniform, trace_to_dataframe, df_summary
data = pd.read_csv('/5studies.csv')
data.head()
plt.figure(figsize =(10,10))
for study in data.Study.unique():
cols = ['red', 'black', 'blue', 'brown', 'green']
x = data.Dose[data.Study ==study]
y = data.Mean_response[data.Study ==study]
col = max(data.Study)
plt.scatter(x, y, c=cols[study-1])
plt.plot(x,y, c=cols[study-1])
plt.xlabel('Dose')
plt.ylabel('Mean_response')
mean_response = np.array(data.Mean_response)
dose = np.array(data.Dose)
# Since we are interested in modelling the inter study variation
# we must create some variables to pass into the model parameters
# How many studies...
n_studies = len(data.Study.unique())
# An array that is used to index the studies, reduced by -1 as the index starts at 0 not 1
study = np.array(data.Study.values-1)
# array to adjust sigma for sample size
n= np.array(data.n)
pkpd_model = Model()
with pkpd_model:
# Hyperparameter Priors
# for the uniform values, as they are passed in to a Lognormal distribution
# as the spread variable, they reflect a logged value, so upper =4 is equivalent to
# tau = 10000
mu_e0 = Normal('mu_e0', mu=0, sd=100)
omega_e0 = Uniform('omega_e0', lower=0, upper =4)
mu_emax = Normal('mu_emax', mu=0, sd=100)
omega_emax = Uniform('omega_emax', lower=0, upper=4)
# Note how the n_studies variable is passed in with the shape argument
# for e0 and emax
e0 = Lognormal('e0', mu = mu_e0, tau= omega_e0, shape=n_studies)
emax= Lognormal('emax', mu = mu_emax, tau = omega_emax, shape=n_studies)
ed50 = Lognormal('ed50', mu=0, tau=4)
# Normalise sigma for sample size
sigma = np.sqrt(np.square(Uniform('sigma', lower = 0, upper = 10000 ))/n)
# Expected value of outcome
# Note how the study index variable is applied with e0 and emax
resp_median = np.log(e0[study] + (emax[study]*dose)/(ed50+dose))
# Likelihood (sampling distribution) of observations and
resp = Lognormal('resp', mu=resp_median, tau =sigma, observed =mean_response)
resp_pred = Lognormal('resp_pred', mu=resp_median, tau =sigma, shape =len(dose))
import scipy
from pymc3 import find_MAP, NUTS, sample
with pkpd_model:
# obtain starting values via MAP
start = find_MAP(fmin=scipy.optimize.fmin_powell)
# draw 2000 posterior samples
trace = sample(2000, start=start)
from pymc3 import traceplot
t =traceplot(trace, lines={k: v['mean'] for k, v in df_summary(trace).iterrows()})
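# Added convenience (not in the original notebook): a tabular summary of the
# posterior; df_summary was imported above and returns a pandas DataFrame.
print(df_summary(trace).head(10))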
t_df = trace_to_dataframe(trace)
filter_col = [col for col in list(t_df) if col.startswith('resp_pred__')]
col= pd.DataFrame()
to_col =pd.DataFrame()
for n, cols in enumerate(filter_col):
to_col['resp_pred']=t_df[cols]
to_col['dose'] = dose[n]
col = pd.concat([col, to_col])
plt.figure(figsize=(6,6))
plt.scatter(col['dose'], col['resp_pred'], alpha =0.02, s= 15 ,color ='grey')
plt.scatter(data.Dose, data.Mean_response, alpha =1, color='red')
means = col.groupby('dose', as_index=False).aggregate(np.mean)
plt.plot(means.dose, means.resp_pred)
plt.axis([-10, 100, 0, 15])
col= np.empty([1,5])
for n, cols in enumerate(filter_col):
a = study[n]+1
b = dose[n]
c = t_df[cols].quantile(q=0.5)
d = t_df[cols].quantile(q=0.95)
e = t_df[cols].quantile(q=0.05)
f = np.array([a,b,c,d,e]).reshape(1,5)
col = np.concatenate((col,f))
col = np.delete(col, (0), axis=0)
col = pd.DataFrame(col, columns=['study', 'dose', 'mean', 'max', 'min'])
col = col.sort_index(by=['study'])
col.head()
effect= sns.FacetGrid(col, col="study",hue ="study" ,col_wrap=3, size=3, sharex=True)
effect.map(plt.plot, "dose", "mean", marker="o", ms=4)
effect.map(plt.plot, "dose", "max", linestyle ='--')
effect.map(plt.plot, "dose", "min", linestyle ='--')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in data and view
Step2: Manipulate data
Step3: Now build the model
Step4: Initiate the Bayesian sampling
Step5: Plot the traces and take a look
Step6: Plot the predicted values from trace on top of the original data
Step7: Create dataframe to plot each study separately
Step8: And now plot individual studies using seaborn
|
171 | <ASSISTANT_TASK:>
Python Code:
# Copyright 2022 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pandas as pd
import functools
import bokeh.io
!pip install pandas_bokeh
import pandas_bokeh
import requests
import json
import numpy as np
bokeh.io.output_notebook()
!wget --quiet --no-check-certificate https://storage.googleapis.com/gresearch/mucped22/evaluations.json
with open('evaluations.json') as f:
data = pd.DataFrame(json.load(f))
data
data['complete_crop'] = data.apply(lambda row: row.crop[0] + row.crop[2] <= row.image_dims[0] and row.crop[1] + row.crop[3] <= row.image_dims[1], axis=1)
data['worst_elo'] = data.apply(lambda row: row.greater_elo if row.greater_elo > row.lesser_elo else row.lesser_elo, axis=1)
data = data[(data.rater_flips > 2) & (data.rater_time_ms > 3000) & (data.complete_crop == True)]
data
def strip(ary, n):
def stripfun(sum, el):
sum[el] = el[n:]
return sum
return functools.reduce(stripfun, ary, {})
greater_metric_cols = list(filter(lambda el: el.startswith('greater_') and not el.endswith('_file'), list(data.columns)))
lesser_metric_cols = list(filter(lambda el: el.startswith('lesser_') and not el.endswith('_file'), list(data.columns)))
greater_metrics = data[greater_metric_cols]
greater_metrics = greater_metrics.rename(columns=strip(greater_metric_cols, 8))
lesser_metrics = data[lesser_metric_cols]
lesser_metrics = lesser_metrics.rename(columns=strip(lesser_metric_cols, 7))
metrics = pd.concat([greater_metrics, lesser_metrics])
metrics = metrics.sort_values('elo').reset_index(drop=True)
metrics
corrs = metrics.corr(method='spearman')
corrs
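# Added convenience view (not in the original notebook): rank the metrics by the
# absolute value of their Spearman correlation with the ELO score.
print(corrs['elo'].drop('elo').abs().sort_values(ascending=False))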
metric_cols = list(map(lambda name: name[7:], lesser_metric_cols))
metric_cols.remove('elo')
def rollingcorr(df, method, window_size, step_size):
res = []
for start in range(0, df.shape[0] - window_size, step_size):
window = df[start:start+window_size]
row = [window.iloc[-1]['elo']]
for metric_name in metric_cols:
row.append(np.abs(window[metric_name].corr(window['elo'], method=method)))
res.append(row)
return pd.DataFrame(res, dtype=np.float, columns=['elo'] + list(map(lambda name: f"{name}", metric_cols)))
rollingcorr(metrics, 'spearman', 5000, 1000).plot_bokeh(x='elo', figsize=(1400, 400))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Human evaluation of visual metrics
Step2: First download the dataset containing all evaluations.
Step3: Then decorate it with whether the crop settings were actually compatible with the image size (a few, ~15, evaluations have this bug), and the worst ELO of both distortions.
Step4: To allow a rank correlation, like Spearman, combine the metrics of the worse distortion (lesser), and the better distortion (greater), into one dataframe. To also allow comparing correlation in different regions of quality, sort by ELO score.
Step5: Then compute the correlation matrix for these, using Spearman's rank correlation coefficient.
Step6: Plot the correlation in a rolling window of 5000 evaluations with a step of 1000 evaluations for each metric, to see how they behave across a range of ELO scores.
|
172 | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from __future__ import division
import nltk
g1 = """
S -> NP VP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
PP -> P NP
Det -> 'an' | 'my'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
"""
grammar1 = nltk.CFG.fromstring(g1)
analyzer = nltk.ChartParser(grammar1)
oracion = "I shot an elephant in my pajamas".split()
# guardamos todos los posibles análisis sintácticos en trees
trees = analyzer.parse(oracion)
for tree in trees:
print(tree)
print(analyzer.parse_one(oracion))
print(analyzer.parse(oracion))
g1v2 = """
S -> NP VP
NP -> Det N | Det N PP | PRO
VP -> V NP | VP PP
PP -> P NP
Det -> 'an' | 'my'
PRO -> 'I' | 'you'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
"""
grammar1v2 = nltk.CFG.fromstring(g1v2)
analyzer1v2 = nltk.ChartParser(grammar1v2)
# itero sobre la estructura que devuelve parse()
for tree in analyzer1v2.parse(oracion):
print(tree)
print("\n", "-------------------------------", "\n")
for tree in analyzer1v2.parse("you shot my elephant".split()):
print(tree)
for tree in analyzer.parse("shot an pajamas elephant my I".split()):
print("El análisis sintáctico es el siguiente")
print(tree)
for tree in analyzer.parse("our time is running out".split()):
print("El análisis sintáctico es el siguiente")
print(tree)
g2 = u"""
O -> SN SV
SN -> Det N | Det N Adj | Det Adj N | NProp | SN SP
SV -> V | V SN | V SP | V SN SP
SP -> Prep SN
Det -> 'el' | 'la' | 'un' | 'una'
N -> 'niño' | 'niña' | 'manzana' | 'pera' | 'cuchillo'
NProp -> 'Juan' | 'Ana' | 'Perico'
Adj -> 'bonito' | 'pequeña' | 'verde'
V -> 'come' | 'salta' | 'pela' | 'persigue'
Prep -> 'de' | 'con' | 'desde' | 'a'
"""
grammar2 = nltk.CFG.fromstring(g2)
analizador2 = nltk.ChartParser(grammar2)
oraciones = u"""Ana salta
la niña pela una manzana verde con el cuchillo
Juan come un cuchillo bonito desde el niño
un manzana bonito salta el cuchillo desde el niño verde
el cuchillo verde persigue a la pequeña manzana de Ana
el cuchillo verde persigue a Ana""".split("\n")
for oracion in oraciones:
print(oracion)
for tree in analizador2.parse(oracion.split()):
print(tree, "\n")
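# Added check (not part of the original notebook): count how many parse trees each
# sentence receives; more than one tree means the grammar finds it ambiguous.
for oracion in oraciones:
    n_trees = sum(1 for _ in analizador2.parse(oracion.split()))
    print(n_trees, '->', oracion)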
g3 = u"""
O -> SN SV | O Conj O
SN -> Det N | Det N Adj | Det Adj N | NProp | SN SP
SV -> V | V SN | V SP | V SN SP
SP -> Prep SN
Det -> 'el' | 'la' | 'un' | 'una'
N -> 'niño' | 'niña' | 'manzana' | 'pera' | 'cuchillo'
NProp -> 'Juan' | 'Ana' | 'Perico'
Adj -> 'bonito' | 'pequeña' | 'verde'
V -> 'come' | 'salta' | 'pela' | 'persigue'
Prep -> 'de' | 'con' | 'desde' | 'a'
Conj -> 'y' | 'pero'
"""
# Ahora fijate cómo creamos en analizador en un solo paso
# compáralo con los ejemplos anteriores
analizador3 = nltk.ChartParser(nltk.CFG.fromstring(g3))
for tree in analizador3.parse(u"""la manzana salta y el niño come pero el cuchillo
                              verde persigue a la pequeña manzana de Ana""".split()):
print(tree)
# ojo, son sencillas, pero contienen oraciones impersonales, verbos copulativos, sujetos elípticos
oraciones = u"""mañana es viernes
hoy es jueves
tenéis sueño
hace frío
Pepe hace sueño""".split("\n")
# escribe tu gramática en esta celda
g4 = u"""
"""
analyzer4 = nltk.ChartParser(nltk.CFG.fromstring(g4))
# ¿qué tal funciona?
for oracion in oraciones:
print(oracion)
for tree in analyzer4.parse(oracion.split()):
print(tree, "\n")
oraciones = u"""Pepe cree que mañana es viernes
María dice que Pepe cree que mañana es viernes""".split("\n")
# escribe la extensión de tu gramática en esta celda
g5 = u"""
"""
analyzer5 = nltk.ChartParser(nltk.CFG.fromstring(g5))
# ¿qué tal funciona?
for oracion in oraciones:
print(oracion)
for tree in analyzer5.parse(oracion.split()):
print(tree, "\n")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Context-Free Grammars (CFG)
Step3: Notice how we have defined our grammar
Step4: With the grammar1 object created, we build the parser with the nltk.ChartParser method.
Step5: Once our parser has been created we can use it. The .parse method lets us parse any sentence specified as a sequence of words. Our grammar is quite limited, but we can use it to parse the sentence I shot an elephant in my pajamas. If we print the result of the method, we get the parse tree.
Step6: In case you had not noticed, the sentence I shot an elephant in my pajamas is ambiguous in English
Step7: Remember that to print the parse tree you have to iterate (with a for loop, for example) over the object returned by the parse() method and use the print function.
Step9: Next I modify grammar g1 slightly to include a new grammatical category PRO and add some new vocabulary. Compare both examples
Step10: IMPORTANT NOTE about errors and the behaviour of parse()
Step11: However, when the parser does not recognize all of the vocabulary (because we use a word that is not defined in the lexicon), the parse() method fails and shows a ValueError message like the following. Look only at the last line
Step13: Keep this in mind when tracking down errors in your code.
Step15: Let's test whether it can parse several sentences in Spanish. To make it more fun, we store several sentences separated by a newline (the \n metacharacter) in a list of strings called oraciones. We iterate over those sentences, print them, then break them into lists of words (with the .split() method) and print the result of parsing them with our parser.
Step18: Let's extend the coverage of our grammar so that it can recognize and parse coordinated sentences. To do so, we modify the rule defining the sentence, adding a recursive definition that defines a sentence as a sentence (O) followed by a conjunction (Conj) and another sentence (O). Finally we also add some new vocabulary
Step21: Remember that a grammar is not a program
Step24: Can we extend g4 so that it recognizes subordinate clauses introduced by verbs of speech or thought? I mean sentences like
|
173 | <ASSISTANT_TASK:>
Python Code:
from pred import Predictor
from pred import sequence_vector
from pred import chemical_vector
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_s_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=0)
y.supervised_training("bagging")
y.benchmark("Data/Benchmarks/phos_stripped.csv", "S")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_s_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=1)
x.supervised_training("bagging")
x.benchmark("Data/Benchmarks/phos_stripped.csv", "S")
del x
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_Y_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=0)
y.supervised_training("bagging")
y.benchmark("Data/Benchmarks/phos_stripped.csv", "Y")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_Y_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=1)
x.supervised_training("bagging")
x.benchmark("Data/Benchmarks/phos_stripped.csv", "Y")
del x
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_t_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=0)
y.supervised_training("bagging")
y.benchmark("Data/Benchmarks/phos_stripped.csv", "T")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_t_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=1)
x.supervised_training("bagging")
x.benchmark("Data/Benchmarks/phos_stripped.csv", "T")
del x
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Controlling for Random Negative vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.
Step2: Y Phosphorylation
Step3: T Phosphorylation
|
174 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from pathlib import Path
from scipy.stats import linregress
dir_ = r'C:\Data\Antonio\data\8-spot 5samples data\2013-05-15/'
filenames = [str(f) for f in Path(dir_).glob('*.hdf5')]
filenames
keys = [f.stem.split('_')[0] for f in Path(dir_).glob('*.hdf5')]
keys
filenames_dict = {k: v.stem for k, v in zip(keys, Path(dir_).glob('*.hdf5'))}
filenames_dict
def _filename_fit(idx, method, window, step):
return 'results/%s_%sfit_ampl_only__window%ds_step%ds.txt' % (filenames_dict[idx], method, window, step)
def _filename_nb(idx, window, step):
return 'results/%s_burst_data_vs_time__window%ds_step%ds.txt' % (filenames_dict[idx], window, step)
def process(meas_id):
methods = ['em', 'll', 'hist']
fig_width = 14
fs = 18
def savefig(title, **kwargs):
plt.savefig("figures/Meas%s %s" % (meas_id, title))
bursts = pd.DataFrame.from_csv(_filename_nb(meas_id, window=30, step=1))
nbm = bursts.num_bursts.mean()
nbc = bursts.num_bursts_detrend
print("Number of bursts (detrended): %7.1f MEAN, %7.1f VAR, %6.3f VAR/MEAN" %
(nbm, nbc.var(), nbc.var()/nbm))
fig, ax = plt.subplots(figsize=(fig_width, 3))
ax.plot(bursts.tstart, bursts.num_bursts)
ax.plot(bursts.tstart, bursts.num_bursts_linregress, 'r')
title = 'Number of bursts - Full measurement'
ax.set_title(title, fontsize=fs)
savefig(title)
fig, ax = plt.subplots(figsize=(fig_width, 3))
ax.plot(bursts.tstart, bursts.num_bursts_detrend)
ax.axhline(nbm, color='r')
title = 'Number of bursts (detrended) - Full measurement'
ax.set_title(title, fontsize=fs)
savefig(title)
params = {}
for window in (5, 30):
for method in methods:
p = pd.DataFrame.from_csv(_filename_fit(meas_id, method=method,
window=window, step=1))
params[method, window, 1] = p
meth = 'em'
fig, ax = plt.subplots(figsize=(fig_width, 3))
ax.plot('kinetics', data=params[meth, 5, 1], marker='h', lw=0, color='gray', alpha=0.2)
ax.plot('kinetics', data=params[meth, 30, 1], marker='h', lw=0, alpha=0.5)
ax.plot('kinetics_linregress', data=params[meth, 30, 1], color='r')
title = 'Population fraction - Full measurement'
ax.set_title(title, fontsize=fs)
savefig(title)
px = params
print('Kinetics 30s: %.3f STD, %.3f STD detrended.' %
((100*px[meth, 30, 1].kinetics).std(),
(100*px[meth, 30, 1].kinetics_linregress).std()))
process(meas_id = '7d')
process(meas_id = '12d')
process(meas_id = '17d')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Measurement 0
Step2: Measurement 1
Step3: Measurement 2
|
175 | <ASSISTANT_TASK:>
Python Code:
science_image_path_g = 'data/seo_m66_g-band_180s_apagul_1.fits' #Type the path to your image
sci_g = fits.open(science_image_path_g)
sci_im_g = fits.open(science_image_path_g)[0].data
plt.imshow(sci_im_g,cmap='gray', vmax=1800, norm=matplotlib.colors.LogNorm())
plt.colorbar()
dark_image_path='data/dark.fits' #Type the path to your dark image
drk_im = fits.open(dark_image_path)[0].data
plt.imshow(drk_im,cmap='gray', vmax=2000)
plt.colorbar()
bias_image_path = 'data/bias.fits' #Type the path to your bias image
bias_image = fits.open(bias_image_path)[0].data
plt.imshow(bias_image, cmap='gray')
plt.colorbar()
plt.hist(drk_im.flatten());
plt.yscale('log')
plt.xlabel('Output counts')
plt.ylabel('Number of pixels')
flat_image_path = 'data/FLAT_g-band_2016-10-06_bin1_id5908.fits' #Type the path to your flat image here
flat_image = fits.open(flat_image_path)[0].data
#You can try cmap='hot' or cmap='jet' to see how it changes
plt.imshow(flat_image, cmap='gray')
plt.colorbar()
plt.hist(flat_image.flatten())
def reduce_image(sci_im,drk_im,flat_im, bias_im, filter_dark=True):
from scipy.stats import mode
dkr_im = drk_im - bias_im
#First part: We take "zero" the image
#The next part is optional and averages the dark image in a 10 pixel radius
#to get rid of salt/pepper noise
if(filter_dark):
selem = disk(10) #We are going to perform averages in 10 pixel radius disks
selem2 = disk(4)
drk_im = rank.mean(drk_im, selem=selem) #We perform an average to remove salt-pepper noise
flat_im = rank.mean(flat_im, selem=selem2)
#Second part: Make every part have the same sensitivity
#flat_im = (flat_im - drk_im)/mode(flat_im-drk_im,axis=None)[0] #most common pixel value will equal 1
flat_im = (flat_im - drk_im)/np.median(flat_im-drk_im)
#Lower than 1 where the CCD is less sensitive and more than 1 where it's more sensitive
sci_im = (sci_im -drk_im)/flat_im
#Error image
return sci_im
new_sci_image_g = reduce_image(sci_im_g,drk_im,flat_image,bias_image, filter_dark=False)
plt.imshow(new_sci_image_g, cmap='gray', vmax=4000, vmin=50, norm=matplotlib.colors.LogNorm())
plt.colorbar()
fig, ax = plt.subplots(nrows=1,ncols=3,figsize=(10,8))
ax[0].imshow(sci_im_g,cmap='gray',vmax=1800, norm=matplotlib.colors.LogNorm())
ax[0].set_title('Before reduction')
ax[1].imshow(new_sci_image_g,cmap='gray',vmax=2000, vmin=50, norm=matplotlib.colors.LogNorm())
ax[1].set_title('After reduction')
ax[2].imshow(sci_im_g-new_sci_image_g,cmap='gray', vmax=1050, vmin=1000)
ax[2].set_title('Difference')
science_image_path_r = 'data/seo_m66_r_180s_apagul_1.fits'
sci_im_r = fits.open(science_image_path_r)[0].data
science_image_path_i = 'data/seo_m66_i-band_180s_apagul_1.fits'
sci_im_i = fits.open(science_image_path_i)[0].data
flat_r = fits.open('data/FLAT_r-band_2016-10-06_bin1_id5906.fits')[0].data
flat_i = fits.open('data/FLAT_i-band_2016-10-06_bin1_id5907.fits')[0].data
new_sci_image_r = reduce_image(sci_im_r,drk_im,flat_r,bias_image)
new_sci_image_i = reduce_image(sci_im_i,drk_im,flat_i,bias_image)
# Read in the three images downloaded from here:
# g: http://dr13.sdss.org/sas/dr13/eboss/photoObj/frames/301/1737/5/frame-g-001737-5-0039.fits.bz2
# r: http://dr13.sdss.org/sas/dr13/eboss/photoObj/frames/301/1737/5/frame-r-001737-5-0039.fits.bz2
# i: http://dr13.sdss.org/sas/dr13/eboss/photoObj/frames/301/1737/5/frame-i-001737-5-0039.fits.bz2
g = fits.open('data/frame-g-001737-5-0039.fits.bz2')[0]
r = fits.open('data/frame-r-001737-5-0039.fits.bz2')[0]
i = fits.open('data/frame-i-001737-5-0039.fits.bz2')[0]
# remap r and i onto g
r_new, r_mask = reproject_interp(r, g.header)
i_new, i_mask = reproject_interp(i, g.header)
# zero out the unmapped values
i_new[np.logical_not(i_mask)] = 0
r_new[np.logical_not(r_mask)] = 0
# red=i, green=r, blue=g
# make a file with the default scaling
rgb_default = make_lupton_rgb(i_new, r_new, g.data, filename="ngc6976-default.jpeg")
# this scaling is very similar to the one used in Lupton et al. (2004)
rgb = make_lupton_rgb(i_new, r_new, g.data, Q=10, stretch=0.5, filename="ngc6976.jpeg")
plt.imshow(rgb)
positions = [(550., 600.), (450., 500.)] #Change it and include the position of an object in your image
apertures = CircularAperture(positions, r=20.)
phot_table = aperture_photometry(new_sci_image_g, apertures)
print(phot_table)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This image is not science-ready yet...
Step2: Why is this?
Step3: Let's create a better image!
Step4: Compare to the original!
Step5: Reduce the rest of images (in principle we should take a different bias image for each filter) because the CCD has different sensitivity at different wavelengths
Step6: An example from SDSS
Step7: If you want to know more about Jupyter
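A natural follow-up to the reduction steps above (an added sketch, not part of the original exercise; the output filename is made up) is writing the reduced frame back to a FITS file so the photometry can be rerun later without repeating the calibration. The original header is reused so the metadata travel with the data.
from astropy.io import fits
# Write the reduced g-band frame to disk, keeping the original header.
reduced_hdu = fits.PrimaryHDU(data=new_sci_image_g,
                              header=fits.getheader(science_image_path_g))
reduced_hdu.writeto('data/seo_m66_g-band_reduced.fits', overwrite=True)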
|
176 | <ASSISTANT_TASK:>
Python Code:
# Version for this notebook
!pip list | grep mecabwrap
from mecabwrap import tokenize, print_token
for token in tokenize('すもももももももものうち'):
print_token(token)
token
from mecabwrap import do_mecab
out = do_mecab('人生楽ありゃ苦もあるさ', '-Owakati')
print(out)
from mecabwrap import do_mecab_vec
ins = ['春はあけぼの', 'やうやう白くなりゆく山際', '少し明かりて', '紫だちたる雲の細くたなびきたる']
out = do_mecab_vec(ins, '-F%f[6](%f[1]) | ', '-E...ここまで\n')
print(out)
from mecabwrap import do_mecab_iter
ins = ['春はあけぼの', 'やうやう白くなりゆく山際', '少し明かりて', '紫だちたる雲の細くたなびきたる']
print('\n*** generating tokens ***')
i = 0
for text in do_mecab_iter(ins, byline=True):
i += 1
print('(' + str(i) + ')\t' + text)
print('\n*** generating tokenized sentences ***')
i = 0
for text in do_mecab_iter(ins, '-E', '(文の終わり)', byline=False):
i += 1
print('---(' + str(i) + ')\n' + text)
do_mecab('すもももももももものうち', '-osumomo1.txt')
# or,
do_mecab('すもももももももものうち', outpath='sumomo2.txt')
with open('sumomo1.txt') as f:
print(f.read())
with open('sumomo2.txt') as f:
print(f.read())
import os
# clean up
os.remove('sumomo1.txt')
os.remove('sumomo2.txt')
# these get error
try:
res = do_mecab_iter(['すもももももももものうち'], '-osumomo3.txt')
next(res)
except Exception as e:
print(e)
try:
res = do_mecab_iter(['すもももももももものうち'], outpath='sumomo3.txt')
next(res)
except Exception as e:
print(e)
# this cell assumes that mecab-ipadic-neologd is already installed
# otherwise, follow the instruction at https://github.com/neologd/mecab-ipadic-neologd
print("*** Default ipadic ***")
print(do_mecab("メロンパンを食べたい"))
print("*** With ipadic neologd ***")
print(do_mecab("メロンパンを食べたい", dictionary="mecab-ipadic-neologd"))
# this is equivalent to giving the path
dicdir, = !mecab-config --dicdir
print(do_mecab("メロンパンを食べたい",
dictionary=os.path.join(dicdir, "mecab-ipadic-neologd")))
import warnings
x = 'すもももももももものうち!' * 225
print("input buffer size =", len(x.encode()))
with warnings.catch_warnings(record=True) as w:
res1 = list(do_mecab_iter([x]))
# the text is split into two since it exceeds the input buffer size
print("output length =", len(res1))
print('***\nEnd of the first element')
print(res1[0][-150:])
print('***\nBeginning of the second element')
print(res1[1][0:150])
import re
res2 = list(do_mecab_iter([x], auto_buffer_size=True))
print("output length =", len(res2))
print('***\nEnd of the first element')
print(res2[0][-150:])
# count the number of '!', to confirm all 223 repetitions are covered
print('number of "!" =', len(re.findall(r'!', ''.join(res2))))
print()
res3 = list(do_mecab_iter([x], truncate=True))
print("output length =", len(res3))
print('***\nEnd of the first element')
print(res3[0][-150:])
# count the number of '!', to confirm some are not covered due to trancation
print('number of "!" =', len(re.findall(r'!', ''.join(res3))))
from mecabwrap import mecab_batch
x = ["明日は晴れるかな", "雨なら読書をしよう"]
mecab_batch(x)
# use baseform if exists, otherwise surface
mecab_batch(x, format_func=lambda x: x.baseform or x.surface)
mecab_batch(x, format_func=lambda x: x.baseform or x.surface, pos_filter=("名詞", "動詞"))
mecab_batch(x, format_func=lambda x: x.baseform or x.surface,
filter_func=lambda x: len(x.surface)==2)
from mecabwrap import MecabTokenizer
tokenizer = MecabTokenizer(format_func=lambda x: x.surface)
tokenizer.transform(x)
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd
x = ["明日は晴れるかな", "明日天気になあれ"]
p = Pipeline([
("mecab", MecabTokenizer(format_func=lambda x: x.surface)),
("tfidf", TfidfVectorizer(tokenizer=lambda x: x, lowercase=False))
])
y = p.fit_transform(x).todense()
pd.DataFrame(y, columns=p.steps[-1][-1].get_feature_names())
o1 = do_mecab('すもももももももものうち') # this works only for python 3
o2 = do_mecab(u'すもももももももものうち') # this works both for python 2 and 3
print(o1)
print(o2)
# show mecab dict
! mecab -D | grep charset
print()
o1 = do_mecab('日本列島改造論', mecab_enc=None) # default
print(o1)
o2 = do_mecab('日本列島改造論', mecab_enc='utf-8') # explicitly specified
print(o2)
#o3 = do_mecab('日本列島改造論', mecab_enc='cp932') # wrong encoding, fails
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Usage
Step2: Token is defined as a namedtuple (v0.3.2+) with the following fields
Step3: Using MeCab Options
Step4: The example below uses do_mecab_vec to parse multiple texts.
Step5: Returning Iterators
Step6: Writing the outcome to a file
Step7: Using Dictionary (v0.3.0+)
Step8: Very Long Input and Buffer Size (v0.2.3+)
Step9: Batch processing (v0.3.2+)
Step10: By default, each string is converted into a list of Token objects.
Step11: We can filter certain part-of-speeches by pos_filter option.
Step12: Scikit-learn compatible transformer
Step13: Note on Python 2
Step14: Note on dictionary encodings
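As a small variant of the scikit-learn pipeline shown in Step12 (an illustrative sketch added here, not part of the package documentation), the same MecabTokenizer from the cells above can feed a plain bag-of-words model; only standard scikit-learn classes and the mecabwrap API already demonstrated are used.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline
# Same tokenizer as above, but with raw term counts instead of TF-IDF weights.
bow = Pipeline([
    ("mecab", MecabTokenizer(format_func=lambda x: x.surface)),
    ("counts", CountVectorizer(tokenizer=lambda x: x, lowercase=False)),
])
matrix = bow.fit_transform(["明日は晴れるかな", "明日天気になあれ"])
print(matrix.toarray())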
|
177 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q -U "tensorflow-text==2.8.*"
import tensorflow as tf
import tensorflow_text as text
import functools
examples = {
"text_a": [
"Sponge bob Squarepants is an Avenger",
"Marvel Avengers"
],
"text_b": [
"Barack Obama is the President.",
"President is the highest office"
],
}
dataset = tf.data.Dataset.from_tensor_slices(examples)
next(iter(dataset))
_VOCAB = [
# Special tokens
b"[UNK]", b"[MASK]", b"[RANDOM]", b"[CLS]", b"[SEP]",
# Suffixes
b"##ack", b"##ama", b"##ger", b"##gers", b"##onge", b"##pants", b"##uare",
b"##vel", b"##ven", b"an", b"A", b"Bar", b"Hates", b"Mar", b"Ob",
b"Patrick", b"President", b"Sp", b"Sq", b"bob", b"box", b"has", b"highest",
b"is", b"office", b"the",
]
_START_TOKEN = _VOCAB.index(b"[CLS]")
_END_TOKEN = _VOCAB.index(b"[SEP]")
_MASK_TOKEN = _VOCAB.index(b"[MASK]")
_RANDOM_TOKEN = _VOCAB.index(b"[RANDOM]")
_UNK_TOKEN = _VOCAB.index(b"[UNK]")
_MAX_SEQ_LEN = 8
_MAX_PREDICTIONS_PER_BATCH = 5
_VOCAB_SIZE = len(_VOCAB)
lookup_table = tf.lookup.StaticVocabularyTable(
tf.lookup.KeyValueTensorInitializer(
keys=_VOCAB,
key_dtype=tf.string,
values=tf.range(
tf.size(_VOCAB, out_type=tf.int64), dtype=tf.int64),
value_dtype=tf.int64
),
num_oov_buckets=1
)
bert_tokenizer = text.BertTokenizer(lookup_table, token_out_type=tf.string)
bert_tokenizer.tokenize(examples["text_a"])
bert_tokenizer.tokenize(examples["text_b"])
bert_tokenizer = text.BertTokenizer(lookup_table, token_out_type=tf.int64)
segment_a = bert_tokenizer.tokenize(examples["text_a"])
segment_a
segment_b = bert_tokenizer.tokenize(examples["text_b"])
segment_b
segment_a = segment_a.merge_dims(-2, -1)
segment_a
segment_b = segment_b.merge_dims(-2, -1)
segment_b
trimmer = text.RoundRobinTrimmer(max_seq_length=_MAX_SEQ_LEN)
trimmed = trimmer.trim([segment_a, segment_b])
trimmed
segments_combined, segments_ids = text.combine_segments(
trimmed,
start_of_sequence_id=_START_TOKEN, end_of_segment_id=_END_TOKEN)
segments_combined, segments_ids
random_selector = text.RandomItemSelector(
max_selections_per_batch=_MAX_PREDICTIONS_PER_BATCH,
selection_rate=0.2,
unselectable_ids=[_START_TOKEN, _END_TOKEN, _UNK_TOKEN]
)
selected = random_selector.get_selection_mask(
segments_combined, axis=1)
selected
mask_values_chooser = text.MaskValuesChooser(_VOCAB_SIZE, _MASK_TOKEN, 0.8)
mask_values_chooser.get_mask_values(segments_combined)
masked_token_ids, masked_pos, masked_lm_ids = text.mask_language_model(
segments_combined,
item_selector=random_selector, mask_values_chooser=mask_values_chooser)
masked_token_ids
tf.gather(_VOCAB, masked_token_ids)
masked_pos
masked_lm_ids
tf.gather(_VOCAB, masked_lm_ids)
# Prepare and pad combined segment inputs
input_word_ids, input_mask = text.pad_model_inputs(
masked_token_ids, max_seq_length=_MAX_SEQ_LEN)
input_type_ids, _ = text.pad_model_inputs(
segments_ids, max_seq_length=_MAX_SEQ_LEN)
# Prepare and pad masking task inputs
masked_lm_positions, masked_lm_weights = text.pad_model_inputs(
masked_pos, max_seq_length=_MAX_PREDICTIONS_PER_BATCH)
masked_lm_ids, _ = text.pad_model_inputs(
masked_lm_ids, max_seq_length=_MAX_PREDICTIONS_PER_BATCH)
model_inputs = {
"input_word_ids": input_word_ids,
"input_mask": input_mask,
"input_type_ids": input_type_ids,
"masked_lm_ids": masked_lm_ids,
"masked_lm_positions": masked_lm_positions,
"masked_lm_weights": masked_lm_weights,
}
model_inputs
def bert_pretrain_preprocess(vocab_table, features):
# Input is a string Tensor of documents, shape [batch, 1].
text_a = features["text_a"]
text_b = features["text_b"]
# Tokenize segments to shape [num_sentences, (num_words)] each.
tokenizer = text.BertTokenizer(
vocab_table,
token_out_type=tf.int64)
segments = [tokenizer.tokenize(text).merge_dims(
1, -1) for text in (text_a, text_b)]
# Truncate inputs to a maximum length.
trimmer = text.RoundRobinTrimmer(max_seq_length=6)
trimmed_segments = trimmer.trim(segments)
# Combine segments, get segment ids and add special tokens.
segments_combined, segment_ids = text.combine_segments(
trimmed_segments,
start_of_sequence_id=_START_TOKEN,
end_of_segment_id=_END_TOKEN)
# Apply dynamic masking task.
masked_input_ids, masked_lm_positions, masked_lm_ids = (
text.mask_language_model(
segments_combined,
random_selector,
mask_values_chooser,
)
)
# Prepare and pad combined segment inputs
input_word_ids, input_mask = text.pad_model_inputs(
masked_input_ids, max_seq_length=_MAX_SEQ_LEN)
input_type_ids, _ = text.pad_model_inputs(
segment_ids, max_seq_length=_MAX_SEQ_LEN)
# Prepare and pad masking task inputs
masked_lm_positions, masked_lm_weights = text.pad_model_inputs(
masked_lm_positions, max_seq_length=_MAX_PREDICTIONS_PER_BATCH)
masked_lm_ids, _ = text.pad_model_inputs(
masked_lm_ids, max_seq_length=_MAX_PREDICTIONS_PER_BATCH)
model_inputs = {
"input_word_ids": input_word_ids,
"input_mask": input_mask,
"input_type_ids": input_type_ids,
"masked_lm_ids": masked_lm_ids,
"masked_lm_positions": masked_lm_positions,
"masked_lm_weights": masked_lm_weights,
}
return model_inputs
dataset = (
tf.data.Dataset.from_tensors(examples)
.map(functools.partial(bert_pretrain_preprocess, lookup_table))
)
next(iter(dataset))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: BERT Preprocessing with TF Text
Step2: Our data contains two text features and we can create a example tf.data.Dataset. Our goal is to create a function that we can supply Dataset.map() with to be used in training.
Step3: Tokenizing
Step4: Let's construct a text.BertTokenizer using the above vocabulary and tokenize the text inputs into a RaggedTensor.`.
Step5: Text output from text.BertTokenizer allows us see how the text is being tokenized, but the model requires integer IDs. We can set the token_out_type param to tf.int64 to obtain integer IDs (which are the indices into the vocabulary).
Step6: text.BertTokenizer returns a RaggedTensor with shape [batch, num_tokens, num_wordpieces]. Because we don't need the extra num_tokens dimensions for our current use case, we can merge the last two dimensions to obtain a RaggedTensor with shape [batch, num_wordpieces]
Step7: Content Trimming
Step8: trimmed now contains the segments where the number of elements across a batch is 8 elements (when concatenated along axis=-1).
Step9: Masked Language Model Task
Step10: Choosing the Masked Value
Step11: When supplied with a RaggedTensor input, text.MaskValuesChooser returns a RaggedTensor of the same shape with either _MASK_VALUE (0), a random ID, or the same unchanged id.
Step12: Let's dive deeper and examine the outputs of mask_language_model(). The output of masked_token_ids is
Step13: Remember that our input is encoded using a vocabulary. If we decode masked_token_ids using our vocabulary, we get
Step14: Notice that some wordpiece tokens have been replaced with either [MASK], [RANDOM] or a different ID value. masked_pos output gives us the indices (in the respective batch) of the tokens that have been replaced.
Step15: masked_lm_ids gives us the original value of the token.
Step16: We can again decode the IDs here to get human readable values.
Step17: Padding Model Inputs
Step18: Review
Step19: We previously constructed a tf.data.Dataset and we can now use our assembled preprocessing function bert_pretrain_preprocess() in Dataset.map(). This allows us to create an input pipeline for transforming our raw string data into integer inputs and feed directly into our model.
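As a quick sanity check on the assembled pipeline (an addition to the guide, not taken from it), the padded IDs can be decoded back to wordpieces with the same _VOCAB table used earlier; inspecting element 0 is an arbitrary choice.
batch = next(iter(dataset))
# Decode the first example back to wordpieces and show which positions are padding.
print(tf.gather(_VOCAB, batch["input_word_ids"])[0])
print(batch["input_mask"][0])  # 1 for real tokens, 0 for padding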
|
178 | <ASSISTANT_TASK:>
Python Code:
# baseline confirmation, implying that model has to perform at least as good as it
from sklearn.dummy import DummyClassifier
clf_Dummy = DummyClassifier(strategy='most_frequent')
clf_Dummy = clf_Dummy.fit(X_train, y_train)
print('baseline score =>', round(clf_Dummy.score(X_test, y_test), 2))
from sklearn.metrics import recall_score
from sklearn.ensemble import RandomForestClassifier
from matplotlib.pyplot import axvline, axhline
recall_range = []
n_estimator_range = []
for i in np.arange(10, 20, 1):
clf_RF = RandomForestClassifier(oob_score=True, n_estimators=i).fit(X_train, y_train)
clf_RF_predicted = clf_RF.predict(X_test)
recall = round(recall_score(clf_RF_predicted, y_test), 2)
n_estimator_range.append(i)
recall_range.append(recall)
dictionary = dict(zip(n_estimator_range, recall_range))
plt.figure(figsize=(10, 3))
plt.plot(n_estimator_range, recall_range, color='#EA5959', label='max recall: %(n)0.2f \n%(s)s: %(v)2d' %
{'n':max(dictionary.values()), 's':'n estimator', 'v':max(dictionary, key=lambda i: dictionary[i])})
plt.scatter([max(dictionary, key=lambda i: dictionary[i]), ], [max(dictionary.values()), ], 80, color='#EA5959')
axhline(max(dictionary.values()), color='#EA5959', linewidth=1, linestyle='--')
axvline(max(dictionary, key=lambda i: dictionary[i]), color='#EA5959', linewidth=1, linestyle='--')
plt.legend(loc='lower right', prop={'size':12})
plt.xlim(min(n_estimator_range), max(n_estimator_range))
plt.ylim(min(recall_range)*0.98, max(recall_range)*1.02)
plt.ylabel('Recall')
plt.xlabel('n estimator');
recall_range = []
max_features_range = []
for i in np.arange(1, 15, 1):
clf_RF = RandomForestClassifier(oob_score=True, n_estimators=18, max_features=i).fit(X_train, y_train)
clf_RF_predicted = clf_RF.predict(X_test)
recall = round(recall_score(clf_RF_predicted, y_test), 2)
max_features_range.append(i)
recall_range.append(recall)
dictionary = dict(zip(max_features_range, recall_range))
plt.figure(figsize=(10, 3))
plt.plot(max_features_range, recall_range, color='#EA5959', label='max recall: %(n)0.2f \n%(s)s: %(v)2d' %
{'n':max(dictionary.values()), 's':'max features', 'v':max(dictionary, key=lambda i: dictionary[i])})
plt.scatter([max(dictionary, key=lambda i: dictionary[i]), ], [max(dictionary.values()), ], 80, color='#EA5959')
axhline(max(dictionary.values()), color='#EA5959', linewidth=1, linestyle='--')
axvline(max(dictionary, key=lambda i: dictionary[i]), color='#EA5959', linewidth=1, linestyle='--')
plt.legend(loc='lower right', prop={'size':12})
plt.xlim(min(max_features_range), max(max_features_range))
plt.ylim(min(recall_range)*0.98, max(recall_range)*1.02)
plt.ylabel('Recall')
plt.xlabel('max features');
recall_range = []
min_samples_leaf_range = []
for i in np.arange(1, 20, 1):
clf_RF = RandomForestClassifier(oob_score=True, n_estimators=18, max_features=14, min_samples_leaf=i).fit(X_train, y_train)
clf_RF_predicted = clf_RF.predict(X_test)
recall = round(recall_score(clf_RF_predicted, y_test), 2)
min_samples_leaf_range.append(i)
recall_range.append(recall)
dictionary = dict(zip(min_samples_leaf_range, recall_range))
plt.figure(figsize=(10, 3))
plt.plot(min_samples_leaf_range, recall_range, color='#EA5959', label='max recall: %(n)0.2f \n%(s)s: %(v)2d' %
{'n':max(dictionary.values()), 's':'min samples leaf', 'v':max(dictionary, key=lambda i: dictionary[i])})
plt.scatter([max(dictionary, key=lambda i: dictionary[i]), ], [max(dictionary.values()), ], 80, color='#EA5959')
axhline(max(dictionary.values()), color='#EA5959', linewidth=1, linestyle='--')
axvline(max(dictionary, key=lambda i: dictionary[i]), color='#EA5959', linewidth=1, linestyle='--')
plt.legend(loc='lower right', prop={'size':12})
plt.xlim(min(min_samples_leaf_range), max(min_samples_leaf_range))
plt.ylim(min(recall_range)*0.98, max(recall_range)*1.02)
plt.ylabel('Recall')
plt.xlabel('min_samples_leaf_range');
from sklearn.pipeline import Pipeline
pipeline_clf_train = Pipeline(
steps=[
('clf_RF', RandomForestClassifier()),
]
);
from sklearn.grid_search import GridSearchCV
parameters = {
'clf_RF__min_samples_leaf' : np.arange(1, 28, 1),
'clf_RF__max_features' : np.arange(10, 28, 1),
'clf_RF__criterion' :['gini', 'entropy'],
'clf_RF__n_estimators' : [10],
#'clf_RF__oob_score' : ['True]
}
gs_clf = GridSearchCV(pipeline_clf_train, parameters, n_jobs=-1, scoring='recall')
gs_clf = gs_clf.fit(X_train, y_train)
best_parameters, score, _ = max(gs_clf.grid_scores_, key=lambda x: x[1])
for param_name in sorted(parameters.keys()):
print("%s: %r" % (param_name, best_parameters[param_name]))
print('------------------------------')
print('recall score :', score.round(2))
clf_RF = RandomForestClassifier(n_estimators=18, max_features=14, min_samples_leaf=9, oob_score=True).fit(X_train, y_train)
clf_RF_predicted = clf_RF.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
target_names = ['not helpful', 'helpful']
print(classification_report(y_test, clf_RF_predicted, target_names=target_names))
plt.figure(figsize=(4,4))
cm = confusion_matrix(y_test, clf_RF_predicted)
print(cm)
target_names = ['not helpful', 'helpful']
plt.grid(False)
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
plt.title('Confusion matrix')
plt.colorbar()
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label');
clf_RF = RandomForestClassifier(n_estimators=10, max_features=19, min_samples_leaf=27, criterion='entropy', oob_score=True).fit(X_train, y_train)
clf_RF_predicted = clf_RF.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
target_names = ['not helpful', 'helpful']
print(classification_report(y_test, clf_RF_predicted, target_names=target_names))
plt.figure(figsize=(4, 4))
cm = confusion_matrix(y_test, clf_RF_predicted)
print(cm)
target_names = ['not helpful', 'helpful']
plt.grid(False)
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
plt.title('Confusion matrix')
plt.colorbar()
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label');
clf_RF = RandomForestClassifier(n_estimators=18, max_features=14, min_samples_leaf=9, oob_score=True).fit(X_train, y_train)
clf_RF_predicted = clf_RF.predict(X_test)
from sklearn.metrics import roc_curve, auc
fpr_rf, tpr_rf, thresholds_rf = roc_curve(y_test, clf_RF.predict_proba(X_test)[:, 0], pos_label=0)
fpr_base, tpr_base, thresholds_base = roc_curve(y_test,clf_Dummy.predict_proba(X_test)[:, 0], pos_label=1)
plt.figure(figsize=(5, 5))
plt.plot(fpr_rf, tpr_rf, color='#E45A84', linewidth=3, linestyle='-',
label = 'random forest: %(performance)0.2f' % {'performance':auc(fpr_rf, tpr_rf)})
plt.plot(fpr_base, tpr_base, color='#FFACAC', linewidth=2, linestyle='--',
label = 'baseline: %(performance)0.2f' % {'performance':auc(fpr_base, tpr_base)})
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate (Fall-Out)')
plt.ylabel('True Positive Rate (Recall)')
plt.title('ROC (Receiver operating characteristic)', fontdict={'fontsize': 12})
plt.legend(loc="lower right");
clf_RF = RandomForestClassifier(n_estimators=10, max_features=19, min_samples_leaf=27, criterion='entropy', oob_score=True).fit(X_train, y_train)
clf_RF_predicted = clf_RF.predict(X_test)
fpr_rf, tpr_rf, thresholds_rf = roc_curve(y_test, clf_RF.predict_proba(X_test)[:, 0], pos_label=0)
fpr_base, tpr_base, thresholds_base = roc_curve(y_test,clf_Dummy.predict_proba(X_test)[:, 0], pos_label=1)
plt.figure(figsize=(5, 5))
plt.plot(fpr_rf, tpr_rf, color='#E45A84', linewidth=3, linestyle='-',
label = 'random forest: %(performance)0.2f' % {'performance':auc(fpr_rf, tpr_rf)})
plt.plot(fpr_base, tpr_base, color='#FFACAC', linewidth=2, linestyle='--',
label = 'baseline: %(performance)0.2f' % {'performance':auc(fpr_base, tpr_base)})
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate (Fall-Out)')
plt.ylabel('True Positive Rate (Recall)')
plt.title('ROC (Receiver operating characteristic)', fontdict={'fontsize': 12})
plt.legend(loc="lower right");
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ===================================================================================================================
Step2: (2) max features
Step3: (3) min sample leaf
Step4: 2) Grid Search
Step5: 2. Take a look at confusion matrix (what is confusion matrix?)
Step6: 2) Grid Search
Step7: 3. ROC Curve (what is ROC/AUC?)
Step8: 2) Grid Search
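The grid search above scores recall on a single train/test split; a complementary check (my addition, not part of the original analysis) is k-fold cross-validated recall for the tuned forest. Depending on the scikit-learn version, cross_val_score may live in sklearn.cross_validation instead of sklearn.model_selection.
from sklearn.model_selection import cross_val_score
# 5-fold cross-validated recall for the hyper-parameters found above.
tuned_rf = RandomForestClassifier(n_estimators=18, max_features=14,
                                  min_samples_leaf=9, oob_score=True)
cv_recall = cross_val_score(tuned_rf, X_train, y_train, cv=5, scoring='recall')
print(cv_recall.mean().round(2), '+/-', cv_recall.std().round(2))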
|
179 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
'''
count_times = the time since the start of data-taking when the data was
taken (in seconds)
count_rates = the number of counts since the last time data was taken, at
the time in count_times
'''
count_times = np.loadtxt("count_rates.txt", dtype=int)[0]
count_rates = np.loadtxt("count_rates.txt", dtype=int)[1]
# Put your code here - add additional cells if necessary
import pandas
erie = pandas.read_csv('erie1918Ann.csv', skiprows=2)
miHuron = pandas.read_csv('miHuron1918Ann.csv', skiprows=2)
ontario = pandas.read_csv('ontario1918Ann.csv', skiprows=2)
superior = pandas.read_csv('superior1918Ann.csv', skiprows=2)
# Put your code here!
# Put your code here
# Put your code here
# put your code and plots here!
# put your code and plots here!
from IPython.display import HTML
HTML("""
<iframe
src="https://goo.gl/forms/Px7wk9DcldfyCqMt2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
""")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question
Step2: Question 1
Step3: Question 2
Step4: Question 3
Step5: Section 3
Step6: Part 2
Step8: Section 4
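Since the variables loaded at the top of the notebook are described in the docstring (count_times in seconds, count_rates as counts per interval), one possible starting point for exploring the decay data — a sketch only, not the official solution — is a simple plot of rate against time.
# Possible starting point (not the official solution):
plt.plot(count_times, count_rates, '.')
plt.xlabel('time since start of data-taking (s)')
plt.ylabel('counts since previous measurement')
plt.show()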
|
180 | <ASSISTANT_TASK:>
Python Code:
%reload_ext autoreload
%autoreload 2
import sys
sys.path.append('..')
from helper import nn
from helper import logistic_regression as lr
import numpy as np
X_raw, y_raw = nn.load_data('ex4data1.mat', transpose=False)
X = np.insert(X_raw, 0, np.ones(X_raw.shape[0]), axis=1)
X.shape
y_raw
y = nn.expand_y(y_raw)
y
t1, t2 = nn.load_weight('ex4weights.mat')
t1.shape, t2.shape
theta = nn.serialize(t1, t2) # flatten params
theta.shape
_, _, _, _, h = nn.feed_forward(theta, X)
h # 5000*10
nn.cost(theta, X, y)
nn.regularized_cost(theta, X, y)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: prepare data
Step2:
Step3: load weight
Step4: feed forward
Step5: cost function
Step6: regularized cost function
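For reference, this is the standard formula that nn.regularized_cost presumably implements for this 3-layer network; it is added here as a reminder and is not extracted from the helper module's source. The cost is the cross-entropy over all $m$ examples and $K$ output units plus an L2 penalty on the non-bias weights:
$$J(\Theta) = \frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\left[-y_k^{(i)}\log\big(h_\Theta(x^{(i)})_k\big)-\big(1-y_k^{(i)}\big)\log\big(1-h_\Theta(x^{(i)})_k\big)\right]+\frac{\lambda}{2m}\left(\sum_{j,k}\big(\Theta^{(1)}_{j,k}\big)^2+\sum_{j,k}\big(\Theta^{(2)}_{j,k}\big)^2\right)$$
where the bias columns of $\Theta^{(1)}$ and $\Theta^{(2)}$ are conventionally excluded from the penalty term.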
|
181 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
np.zeros(10)
np.ones(10)
np.ones(10) * 5
np.arange(10,51)
np.arange(10,51,2)
np.arange(9).reshape(3,3)
np.eye(3)
np.random.rand(1)
np.random.randn(25)
np.arange(1,101).reshape(10,10) / 100
np.linspace(0,1,20)
mat = np.arange(1,26).reshape(5,5)
mat
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[2:,1:]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[3,4]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[:3,1:2]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[4,:]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[3:5,:]
mat.sum()
mat.std()
mat.sum(axis=0)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create an array of 10 zeros
Step2: Create an array of 10 ones
Step3: Create an array of 10 fives
Step4: Create an array of the integers from 10 to 50
Step5: Create an array of all the even integers from 10 to 50
Step6: Create a 3x3 matrix with values ranging from 0 to 8
Step7: Create a 3x3 identity matrix
Step8: Use NumPy to generate a random number between 0 and 1
Step9: Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution
Step10: Create the following matrix
Step11: Create an array of 20 linearly spaced points between 0 and 1
Step12: Numpy Indexing and Selection
Step13: Now do the following
Step14: Get the standard deviation of the values in mat
Step15: Get the sum of all the columns in mat
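The 0.01–1.00 matrix asked for above can also be produced with linspace; this is an equivalent alternative shown for comparison, not the notebook's own answer.
np.linspace(0.01, 1, 100).reshape(10, 10)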
|
182 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy import integrate, optimize, interpolate
from scipy.integrate import odeint
from matplotlib.pylab import *
import matplotlib.pyplot as plt
import matplotlib.pyplot as pp  # later cells plot through the alias `pp`
def vdp1(t, y):
return np.array([y[1], (1 - y[0]**2)*y[1] - y[0]])
t0, t1 = 0, 20 # start and end
t = np.linspace(t0, t1, 100) # the points of evaluation of solution
y0 = [2, 0] # initial value
y = np.zeros((len(t), len(y0))) # array for solution
y[0, :] = y0
r = integrate.ode(vdp1).set_integrator("dopri5") # choice of method
r.set_initial_value(y0, t0) # initial values
for i in range(1, t.size):
y[i, :] = r.integrate(t[i]) # get one more value, add it to the array
if not r.successful():
raise RuntimeError("Could not integrate")
plt.plot(t, y)
plt.show()
P = 9 #MPa
T = 323 # K
Q = 8.83 #g/min
e = 0.4
rho = 285 #kg/m3
miu = 2.31e-5 # Pa*s
dp = 0.75e-3 # m
Dl = 0.24e-5 #m2/s
De = 8.48e-12 # m2/s
Di = 6e-13
u = 0.455e-3 #m/s
kf = 1.91e-5 #m/s
de = 0.06 # m
W = 0.160 # kg
kp = 0.2
r = 0.31 #m
n = 10
V = 12
#C = kp * qE
C = 0.1
qE = C / kp
Cn = 0.05
Cm = 0.02
t = np.linspace(0,10, 1)
ti = (r ** 2) / (15 * Di)
def reverchon(x,t):
#Ecuaciones diferenciales del modelo Reverchon
#dCdt = - (n/(e * V)) * (W * (Cn - Cm) / rho + (1 - e) * V * dqdt)
#dqdt = - (1 / ti) * (q - qE)
q = x[0]
C = x[1]
qE = C / kp
dqdt = - (1 / ti) * (q - qE)
dCdt = - (n/(e * V)) * (W * (C - Cm) / rho + (1 - e) * V * dqdt)
return [dqdt, dCdt]
reverchon([1, 2], 0)
x0 = [0, 0]
t = np.linspace(0, 3000, 500)
resultado = odeint(reverchon, x0, t)
qR = resultado[:, 0]
CR = resultado[:, 1]
plt.plot(t, CR)
plt.title("Modelo Reverchon")
plt.xlabel("t [=] min")
plt.ylabel("C [=] $kg/m^3$")
x0 = [0, 0]
t = np.linspace(0, 3000, 500)
resultado = odeint(reverchon, x0, t)
qR = resultado[:, 0]
CR = resultado[:, 1]
plt.plot(t, qR)
plt.title("Modelo Reverchon")
plt.xlabel("t [=] min")
plt.ylabel("C solid–fluid interface [=] $kg/m^3$")
print(CR)
r = 0.31 #m
x0 = [0, 0]
t = np.linspace(0, 3000, 500)
resultado = odeint(reverchon, x0, t)
qR = resultado[:, 0]
CR = resultado[:, 1]
plt.plot(t, CR)
plt.title("Modelo Reverchon")
plt.xlabel("t [=] min")
plt.ylabel("C [=] $kg/m^3$")
r = 0.231 #m
x0 = [0, 0]
t = np.linspace(0, 3000, 500)
resultado = odeint(reverchon, x0, t)
qR = resultado[:, 0]
CR = resultado[:, 1]
plt.plot(t, CR)
plt.title("Modelo Reverchon")
plt.xlabel("t [=] min")
plt.ylabel("C [=] $kg/m^3$")
fig,axes=plt.subplots(2,2)
axes[0,0].plot(t,CR)
axes[1,0].plot(t,qR)
#Datos experimentales
x_data = np.linspace(0,9,10)
y_data = np.array([0.000,0.416,0.489,0.595,0.506,0.493,0.458,0.394,0.335,0.309])
def f(y, t, k):
    """Sistema de ecuaciones diferenciales ordinarias."""
    return (-k[0]*y[0], k[0]*y[0]-k[1]*y[1], k[1]*y[1])
def my_ls_func(x,teta):
f2 = lambda y, t: f(y, t, teta)
# calcular el valor de la ecuación diferencial en cada punto
r = integrate.odeint(f2, y0, x)
return r[:,1]
def f_resid(p):
# definir la función de minimos cuadrados para cada valor de y
return y_data - my_ls_func(x_data,p)
#resolver el problema de optimización
guess = [0.2, 0.3] #valores inicales para los parámetros
y0 = [1,0,0] #valores inciales para el sistema de ODEs
(c, kvg) = optimize.leastsq(f_resid, guess) #get params
print("parameter values are ",c)
# interpolar los valores de las ODEs usando splines
xeval = np.linspace(min(x_data), max(x_data),30)
gls = interpolate.UnivariateSpline(xeval, my_ls_func(xeval,c), k=3, s=0)
xeval = np.linspace(min(x_data), max(x_data), 200)
#Gráficar los resultados
pp.plot(x_data, y_data,'.r',xeval,gls(xeval),'-b')
pp.xlabel('t [=] min',{"fontsize":16})
pp.ylabel("C",{"fontsize":16})
pp.legend(('Datos','Modelo'),loc=0)
pp.show()
f_resid(guess)
#Datos experimentales
x_data = np.linspace(0,9,10)
y_data = np.array([0.000,0.416,0.489,0.595,0.506,0.493,0.458,0.394,0.335,0.309])
print(y_data)
# def f(y, t, k):
# sistema de ecuaciones diferenciales ordinarias
# return (-k[0]*y[0], k[0]*y[0]-k[1]*y[1], k[1]*y[1])
def reverchon(x,t,Di):
#Ecuaciones diferenciales del modelo Reverchon
#dCdt = - (n/(e * V)) * (W * (Cn - Cm) / rho + (1 - e) * V * dqdt)
#dqdt = - (1 / ti) * (q - qE)
q = x[0]
C = x[1]
qE = C / kp
ti = (r**2) / (15 * Di)
dqdt = - (1 / ti) * (q - qE)
dCdt = - (n/(e * V)) * (W * (C - Cm) / rho + (1 - e) * V * dqdt)
return [dqdt, dCdt]
def my_ls_func(x,teta):
f2 = lambda y, t: reverchon(y, t, teta)
# calcular el valor de la ecuación diferencial en cada punto
rr = integrate.odeint(f2, y0, x)
print(f2)
return rr[:,1]
def f_resid(p):
    # definir la función de minimos cuadrados para cada valor de y
    return y_data - my_ls_func(x_data, p)
#resolver el problema de optimización
guess = np.array([0.2]) #valores inicales para los parámetros
y0 = [0,0] #valores inciales para el sistema de ODEs
(c, kvg) = optimize.leastsq(f_resid, guess) #get params
print("parameter values are ",c)
# interpolar los valores de las ODEs usando splines
xeval = np.linspace(min(x_data), max(x_data),30)
gls = interpolate.UnivariateSpline(xeval, my_ls_func(xeval,c), k=3, s=0)
xeval = np.linspace(min(x_data), max(x_data), 200)
#Gráficar los resultados
pp.plot(x_data, y_data,'.r',xeval,gls(xeval),'-b')
pp.xlabel('t [=] min',{"fontsize":16})
pp.ylabel("C",{"fontsize":16})
pp.legend(('Datos','Modelo'),loc=0)
pp.show()
def my_ls_func(x,teta):
f2 = lambda y, t: reverchon(y, t, teta)
# calcular el valor de la ecuación diferencial en cada punto
r = integrate.odeint(f2, y0, x)
print(f2)
return r[:,1]
my_ls_func(y0,guess)
f_resid(guess)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example 2 works
Step2: Function
Step6: Future work
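For reference, the two balances integrated by the reverchon function above — read directly from its dqdt and dCdt lines, with $\varepsilon$ standing for the bed porosity e and the remaining symbols named as in the code — are:
$$\frac{dq}{dt} = -\frac{1}{t_i}\left(q - q^{*}\right),\qquad q^{*} = \frac{C}{k_p},\qquad t_i = \frac{r^2}{15\,D_i}$$
$$\frac{dC}{dt} = -\frac{n}{\varepsilon V}\left[\frac{W\,(C - C_m)}{\rho} + (1-\varepsilon)\,V\,\frac{dq}{dt}\right]$$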
|
183 | <ASSISTANT_TASK:>
Python Code:
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import xarray as xr
# Any import of metpy will activate the accessors
import metpy.calc as mpcalc
from metpy.testing import get_test_data
# Open the netCDF file as a xarray Dataset
data = xr.open_dataset(get_test_data('irma_gfs_example.nc', False))
# View a summary of the Dataset
print(data)
# To parse the full dataset, we can call parse_cf without an argument, and assign the returned
# Dataset.
data = data.metpy.parse_cf()
# If we instead want just a single variable, we can pass that variable name to parse_cf and
# it will return just that data variable as a DataArray.
data_var = data.metpy.parse_cf('Temperature_isobaric')
# To rename variables, supply a dictionary between old and new names to the rename method
data.rename({
'Vertical_velocity_pressure_isobaric': 'omega',
'Relative_humidity_isobaric': 'relative_humidity',
'Temperature_isobaric': 'temperature',
'u-component_of_wind_isobaric': 'u',
'v-component_of_wind_isobaric': 'v',
'Geopotential_height_isobaric': 'height'
}, inplace=True)
data['isobaric1'].metpy.convert_units('hPa')
data['isobaric3'].metpy.convert_units('hPa')
# Get multiple coordinates (for example, in just the x and y direction)
x, y = data['temperature'].metpy.coordinates('x', 'y')
# If we want to get just a single coordinate from the coordinates method, we have to use
# tuple unpacking because the coordinates method returns a generator
vertical, = data['temperature'].metpy.coordinates('vertical')
# Or, we can just get a coordinate from the property
time = data['temperature'].metpy.time
# To verify, we can inspect all their names
print([coord.name for coord in (x, y, vertical, time)])
data_crs = data['temperature'].metpy.cartopy_crs
print(data_crs)
data_globe = data['temperature'].metpy.cartopy_globe
print(data_globe)
lat, lon = xr.broadcast(y, x)
f = mpcalc.coriolis_parameter(lat)
dx, dy = mpcalc.lat_lon_grid_deltas(lon, lat, initstring=data_crs.proj4_init)
heights = data['height'].loc[time[0]].loc[{vertical.name: 500.}]
u_geo, v_geo = mpcalc.geostrophic_wind(heights, f, dx, dy)
print(u_geo)
print(v_geo)
heights = data['height'].loc[time[0]].loc[{vertical.name: 500.}]
lat, lon = xr.broadcast(y, x)
f = mpcalc.coriolis_parameter(lat)
dx, dy = mpcalc.grid_deltas_from_dataarray(heights)
u_geo, v_geo = mpcalc.geostrophic_wind(heights, f, dx, dy)
print(u_geo)
print(v_geo)
# A very simple example example of a plot of 500 hPa heights
data['height'].loc[time[0]].loc[{vertical.name: 500.}].plot()
plt.show()
# Let's add a projection and coastlines to it
ax = plt.axes(projection=ccrs.LambertConformal())
ax._hold = True # Work-around for CartoPy 0.16/Matplotlib 3.0.0 incompatibility
data['height'].loc[time[0]].loc[{vertical.name: 500.}].plot(ax=ax, transform=data_crs)
ax.coastlines()
plt.show()
# Or, let's make a full 500 hPa map with heights, temperature, winds, and humidity
# Select the data for this time and level
data_level = data.loc[{vertical.name: 500., time.name: time[0]}]
# Create the matplotlib figure and axis
fig, ax = plt.subplots(1, 1, figsize=(12, 8), subplot_kw={'projection': data_crs})
# Plot RH as filled contours
rh = ax.contourf(x, y, data_level['relative_humidity'], levels=[70, 80, 90, 100],
colors=['#99ff00', '#00ff00', '#00cc00'])
# Plot wind barbs, but not all of them
wind_slice = slice(5, -5, 5)
ax.barbs(x[wind_slice], y[wind_slice],
data_level['u'].metpy.unit_array[wind_slice, wind_slice].to('knots'),
data_level['v'].metpy.unit_array[wind_slice, wind_slice].to('knots'),
length=6)
# Plot heights and temperature as contours
h_contour = ax.contour(x, y, data_level['height'], colors='k', levels=range(5400, 6000, 60))
h_contour.clabel(fontsize=8, colors='k', inline=1, inline_spacing=8,
fmt='%i', rightside_up=True, use_clabeltext=True)
t_contour = ax.contour(x, y, data_level['temperature'], colors='xkcd:deep blue',
levels=range(248, 276, 2), alpha=0.8, linestyles='--')
t_contour.clabel(fontsize=8, colors='xkcd:deep blue', inline=1, inline_spacing=8,
fmt='%i', rightside_up=True, use_clabeltext=True)
# Add geographic features
ax.add_feature(cfeature.LAND.with_scale('50m'), facecolor=cfeature.COLORS['land'])
ax.add_feature(cfeature.OCEAN.with_scale('50m'), facecolor=cfeature.COLORS['water'])
ax.add_feature(cfeature.STATES.with_scale('50m'), edgecolor='#c7c783', zorder=0)
ax.add_feature(cfeature.LAKES.with_scale('50m'), facecolor=cfeature.COLORS['water'],
edgecolor='#c7c783', zorder=0)
# Set a title and show the plot
ax.set_title(('500 hPa Heights (m), Temperature (K), Humidity (%) at ' +
time[0].dt.strftime('%Y-%m-%d %H:%MZ')))
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting Data
Step2: Preparing Data
Step3: Units
Step4: Coordinates
Step5: Projections
Step6: The cartopy Globe can similarly be accessed via the data_var.metpy.cartopy_globe
Step7: Calculations
Step8: Also, a limited number of calculations directly support xarray DataArrays or Datasets (they
Step9: Plotting
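The same unit-aware arrays can feed other metpy.calc routines. As a small added example (not part of the original tutorial — check the function name against your MetPy version, since older releases called it get_wind_speed), here is the 500-hPa wind speed from the renamed u and v variables.
# Wind speed at 500 hPa from the u/v components selected above.
wind_speed = mpcalc.wind_speed(data_level['u'].metpy.unit_array,
                               data_level['v'].metpy.unit_array)
print(wind_speed.to('knots').max())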
|
184 | <ASSISTANT_TASK:>
Python Code:
# a simple function that looks like a mathematical function
# define a function called add_two_numbers that take 2 arguments: num1 and num2
def add_two_numbers(num1, num2):
# Under the def must be indented
    return num1 + num2 # use the return statement to tell the function what to return
add_two_numbers(905, 90)
# written a different way
# define a function called add_two_numbers that take 2 arguments: num1 and num2
def add_two_numbers(num1, num2):
total = num1 + num2 # do the stuff
# This is the body of the function
    return total # use the return statement to tell the function what to return
result = add_two_numbers(905, 90)
print(result)
print(add_two_numbers(905, 90))
# write your function here
# Run this cell after defining your function
print(quote('name', '"'))
# write your function here
# Run this cell after defining your function
print(outer('helium'))
def fahr_to_kelvin(temp_f):
# write your function here
help(round)
# Run this cell after adding documentation
help(fahr_to_kelvin)
# write your function here
# write your function here
# write your function here
def display(a=1, b=2, c=3):
print('a:', a, 'b:', b, 'c:', c)
print('no parameters:')
display()
print('one parameter:')
display(55)
print('two parameters:')
display(55, 66)
print('only setting the value of c')
display(c=77)
import numpy
# Why does this not work?
# What is wrong? How to fix it?
numpy.loadtxt('LabTempHourlyJanuary2017.csv', ',')
help(numpy.loadtxt)
def convert_temp(temp, temp_type='F'):
# write your function here
# Run this cell after writing convert_temp
assert convert_temp(-40.0, 'F') == -40.0
assert convert_temp(0.0, 'C') == 32.0
assert convert_temp(32.0, 'F') == 0.0
assert round(convert_temp(54.0), 1) == 12.2
assert round(convert_temp(12.2, 'C'), 1) == 54.0
# write your code here
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use return ... to give a value back to the caller. A function that doesn’t explicitly return a value automatically returns None.
Step2: Question 00
Step3: Practice 01
Step4: Question 01
Step5: COMMIT YOUR WORK
Step6: Adding documentation to your own code is simple and easy.
Step7: COMMIT YOUR WORK
Step8: COMMIT YOUR WORK
Step9: This is our first taste of how larger programs are built
Step10: COMMIT YOUR WORK
Step11: As this example shows, parameters are matched up from left to right, and any that haven’t been given a value explicitly get their default value. We can override this behavior by naming the value as we pass it in
Step12: The filename is assigned to fname (which is what we want), but the delimiter string ',' is assigned to dtype rather than delimiter, because dtype is the second parameter in the list. However ‘,’ isn’t a known dtype so our code produced an error message when we tried to run it. When we call loadtxt we don’t have to provide fname= for the filename because it’s the first item in the list, but if we want the ‘,’ to be assigned to the variable delimiter, we do have to provide delimiter= for the second parameter since delimiter is not the second parameter in the list.
Step13: COMMIT YOUR WORK
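Returning to the loadtxt example discussed in Step12: the corrected call simply names the parameter so the comma is bound to delimiter rather than dtype, shown here as a one-line illustration using the same file.
numpy.loadtxt('LabTempHourlyJanuary2017.csv', delimiter=',')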
|
185 | <ASSISTANT_TASK:>
Python Code:
x = np.array([ 1.00201077, 1.58251956, 0.94515919, 6.48778002, 1.47764604,
5.18847071, 4.21988095, 2.85971522, 3.40044437, 3.74907745,
1.18065796, 3.74748775, 3.27328568, 3.19374927, 8.0726155 ,
0.90326139, 2.34460034, 2.14199217, 3.27446744, 3.58872357,
1.20611533, 2.16594393, 5.56610242, 4.66479977, 2.3573932 ])
_ = plt.hist(x, bins=7)
precip = pd.read_table("../data/nashville_precip.txt", index_col=0, na_values='NA', delim_whitespace=True)
precip.head()
_ = precip.hist(sharex=True, sharey=True, grid=False)
plt.tight_layout()
precip.fillna(value={'Oct': precip.Oct.mean()}, inplace=True)
precip_mean = precip.mean()
precip_mean
precip_var = precip.var()
precip_var
alpha_mom = precip_mean ** 2 / precip_var
beta_mom = precip_var / precip_mean
alpha_mom, beta_mom
from scipy.stats.distributions import gamma
precip.Jan.hist(normed=True, bins=20)
plt.plot(np.linspace(0, 10), gamma.pdf(np.linspace(0, 10), alpha_mom[0], beta_mom[0]))
axs = precip.hist(normed=True, figsize=(12, 8), sharex=True, sharey=True, bins=15, grid=False)
for ax in axs.ravel():
# Get month
m = ax.get_title()
# Plot fitted distribution
x = np.linspace(*ax.get_xlim())
ax.plot(x, gamma.pdf(x, alpha_mom[m], beta_mom[m]))
# Annotate with parameter estimates
label = 'alpha = {0:.2f}\nbeta = {1:.2f}'.format(alpha_mom[m], beta_mom[m])
ax.annotate(label, xy=(10, 0.2))
plt.tight_layout()
y = np.random.poisson(5, size=100)
plt.hist(y, bins=12, normed=True)
plt.xlabel('y'); plt.ylabel('Pr(y)')
poisson_like = lambda x, lam: np.exp(-lam) * (lam**x) / (np.arange(x)+1).prod()
lam = 6
value = 10
poisson_like(value, lam)
np.sum(poisson_like(yi, lam) for yi in y)
lam = 8
np.sum(poisson_like(yi, lam) for yi in y)
lambdas = np.linspace(0,15)
x = 5
plt.plot(lambdas, [poisson_like(x, l) for l in lambdas])
plt.xlabel('$\lambda$')
plt.ylabel('L($\lambda$|x={0})'.format(x))
lam = 5
xvals = np.arange(15)
plt.bar(xvals, [poisson_like(x, lam) for x in xvals], width=0.2)
plt.xlabel('x')
plt.ylabel('Pr(X|$\lambda$=5)')
from scipy.optimize import newton
%run newton_raphson_plot.py
from scipy.special import psi, polygamma
dlgamma = lambda a, log_mean, mean_log: np.log(a) - psi(a) - log_mean + mean_log
dl2gamma = lambda a, *args: 1./a - polygamma(1, a)
# Calculate statistics
log_mean = precip.mean().apply(np.log)
mean_log = precip.apply(np.log).mean()
# Alpha MLE for December
alpha_mle = newton(dlgamma, 2, dl2gamma, args=(log_mean[-1], mean_log[-1]))
alpha_mle
beta_mle = alpha_mle/precip.mean()[-1]
beta_mle
dec = precip.Dec
dec.hist(normed=True, bins=10, grid=False)
x = np.linspace(0, dec.max())
plt.plot(x, gamma.pdf(x, alpha_mom[-1], beta_mom[-1]), 'm-', label='Moment estimator')
plt.plot(x, gamma.pdf(x, alpha_mle, beta_mle), 'r--', label='ML estimator')
plt.legend()
from scipy.stats import gamma
gamma.fit(precip.Dec)
from scipy.stats import probplot
probplot(precip.Dec, dist=gamma(3.51, scale=0.84), plot=plt);
x = np.random.normal(size=10000)
# Truncation point
a = -1
# Resample until all points meet criterion
x_small = x < a
while x_small.sum():
x[x_small] = np.random.normal(size=x_small.sum())
x_small = x < a
_ = plt.hist(x, bins=100)
from scipy.stats.distributions import norm
trunc_norm = lambda theta, a, x: -(np.log(norm.pdf(x, theta[0], theta[1])) -
np.log(1 - norm.cdf(a, theta[0], theta[1]))).sum()
from scipy.optimize import fmin
fmin(trunc_norm, np.array([1,2]), args=(-1, x))
# Some random data
y = np.random.normal(10, size=15)
y
x = np.linspace(7, 13, 100)
# Smoothing parameter
s = 0.3
# Calculate the kernels
kernels = np.transpose([norm.pdf(x, yi, s) for yi in y])
plt.plot(x, kernels, 'k:')
plt.plot(x, kernels.sum(1))
plt.plot(y, np.zeros(len(y)), 'ro', ms=10)
# Create a bi-modal distribution with a mixture of Normals.
x1 = np.random.normal(0, 2, 50)
x2 = np.random.normal(5, 1, 50)
# Append by row
x = np.r_[x1, x2]
plt.hist(x, bins=10, normed=True)
from scipy.stats import kde
density = kde.gaussian_kde(x)
xgrid = np.linspace(x.min(), x.max(), 100)
plt.hist(x, bins=8, normed=True)
plt.plot(xgrid, density(xgrid), 'r-')
# Write your answer here
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fitting data to probability distributions
Step2: The first step is recognizing what sort of distribution to fit our data to. A couple of observations
Step3: Now, let's calculate the sample moments of interest, the means and variances by month
Step4: We then use these moments to estimate $\alpha$ and $\beta$ for each month
Step5: We can use the gamma.pdf function in scipy.stats.distributions to plot the ditribtuions implied by the calculated alphas and betas. For example, here is January
Step6: Looping over all months, we can create a grid of plots for the distribution of rainfall, using the gamma distribution
Step7: Maximum Likelihood
Step8: The product $\prod_{i=1}^n Pr(y_i | \theta)$ gives us a measure of how likely it is to observe values $y_1,\ldots,y_n$ given the parameters $\theta$.
Step9: We can plot the likelihood function for any value of the parameter(s)
Step10: How is the likelihood function different than the probability distribution function (PDF)? The likelihood is a function of the parameter(s) given the data, whereas the PDF returns the probability of data given a particular parameter value. Here is the PDF of the Poisson for $\lambda=5$.
Step11: Why are we interested in the likelihood function?
Step12: Here is a graphical example of how Newton-Raphson converges on a solution, using an arbitrary function
Step13: To apply the Newton-Raphson algorithm, we need a function that returns a vector containing the first and second derivatives of the function with respect to the variable of interest. The second derivative of the gamma distribution with respect to $\alpha$ is
Step14: where log_mean and mean_log are $\log{\bar{x}}$ and $\overline{\log(x)}$, respectively. psi and polygamma are complex functions of the Gamma function that result when you take first and second derivatives of that function.
Step15: Time to optimize!
Step16: And now plug this back into the solution for beta
Step17: We can compare the fit of the estimates derived from MLE to those from the method of moments
Step18: For some common distributions, SciPy includes methods for fitting via MLE
Step19: This fit is not directly comparable to our estimates, however, because SciPy's gamma.fit method fits an odd 3-parameter version of the gamma distribution.
Step20: Example
Step21: We can construct a log likelihood for this function using the conditional form
Step22: For this example, we will use an optimization algorithm, the Nelder-Mead simplex algorithm. It has a couple of advantages
Step23: In general, simulating data is a terrific way of testing your model before using it with real data.
Step24: The kernel estimator is a sum of symmetric densities centered at each observation. The selected kernel function determines the shape of each component while the bandwidth determines their spread. For example, if we use a Gaussian kernel function, the variance acts as the bandwidth.
Step25: SciPy implements a Gaussian KDE that automatically chooses an appropriate bandwidth. Let's create a bi-modal distribution of data that is not easily summarized by a parametric distribution
Step26: Exercise
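One extra cross-check worth knowing about (an addition, not part of the original notebook): fixing the location parameter at zero makes SciPy's maximum-likelihood fit directly comparable to the two-parameter estimates computed above.
# Two-parameter ML fit for December, with the location pinned at zero.
alpha_sp, loc_sp, beta_sp = gamma.fit(precip.Dec, floc=0)
print(alpha_sp, beta_sp)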
|
186 | <ASSISTANT_TASK:>
Python Code:
!conda install -qy poliastro --channel poliastro # Instala las dependencias con conda
!pip uninstall poliastro -y
#!pip install -e /home/juanlu/Development/Python/poliastro.org/poliastro
!pip install https://github.com/poliastro/poliastro/archive/planet9-fixes.zip # Instala la versión de desarrollo
%load_ext version_information
%version_information numpy, astropy, scipy, matplotlib, numba, poliastro
%matplotlib inline
import matplotlib
matplotlib.style.use('pybonacci') # https://gist.github.com/Juanlu001/edb2bf7b583e7d56468a
import matplotlib.pyplot as plt
import numpy as np
from astropy import time
from astropy import units as u
from poliastro.bodies import Sun
from poliastro.twobody import angles, State
from poliastro import ephem
from poliastro.plotting import plot, OrbitPlotter
epoch = time.Time("2015-01-24 12:00", scale="utc").tdb
a = 700 * u.AU
ecc=0.6 * u.one
inc=30 * u.deg
raan=100 * u.deg
argp=150 * u.deg
nu=180 * u.deg # ¡Solo para probar!
planet9 = State.from_classical(Sun, a, ecc, inc, raan, argp, nu, # Solo para probar
epoch)
period = planet9.period.to(u.year)
period
plot(planet9)
from matplotlib.patches import Wedge, PathPatch
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
# Transformamos las anomalías medias de 90 y 270 grados en
# anomalías verdaderas
nu_lower = angles.M_to_nu(1 * np.pi * u.rad / 2, planet9.ecc)
nu_upper = angles.M_to_nu(3 * np.pi * u.rad / 2, planet9.ecc)
# Regiones equiprobables
fifty_far = Wedge(
(0, 0), planet9.r_a.to(u.km).value,
nu_lower.to(u.deg).value, nu_upper.to(u.deg).value,
color='#cccccc',
zorder=0
)
fifty_close = Wedge(
(0, 0), planet9.r_a.to(u.km).value,
nu_upper.to(u.deg).value, nu_lower.to(u.deg).value,
color='#999999',
zorder=0
)
# Recuperamos la órbita de la Tierra para comparar
r_earth, v_earth = ephem.planet_ephem(ephem.EARTH, epoch)
earth = State.from_vectors(Sun, r_earth.to(u.km), v_earth.to(u.km / u.s), epoch)
# Y ya que nos ponemos, órbita de Neptuno
r_nep, v_nep = ephem.planet_ephem(ephem.PLUTO, epoch)
neptune = State.from_vectors(Sun, r_nep.to(u.km), v_nep.to(u.km / u.s), epoch)
# Creamos la figura
fig, ax = plt.subplots(figsize=(8, 8))
op = OrbitPlotter(ax)
planet9_point, planet9_orbit = op.plot(planet9)
planet9_point.set_color("#6600ff")
planet9_orbit.set_color("#6600ff")
# Enmascaramos los sectores circulares con la órbita
mask = PathPatch(planet9_orbit.get_path(), fc='none', lw=0)
ax.add_patch(mask)
ax.add_patch(fifty_far)
ax.add_patch(fifty_close)
fifty_far.set_clip_path(mask)
fifty_close.set_clip_path(mask)
# Zoom en el sistema Solar
ax_zoom = zoomed_inset_axes(ax, 8, loc=3, axes_kwargs={'axisbg': '#fafafa'})
# Repetimos algunos plots
op_zoom = OrbitPlotter(ax_zoom)
op_zoom.set_frame(*planet9.pqw())
earth_point, earth_orbit = op_zoom.plot(earth)
nepune_point, _ = op_zoom.plot(neptune)
earth_orbit.set_linestyle("solid") # ¡Para que se vea algo!
# Propiedades de la sección aumentada
ax_zoom.set_xlim(-7e9, 5e9)
ax_zoom.set_ylim(-4e9, 5e9)
ax_zoom.set_xticks([])
ax_zoom.set_yticks([])
ax_zoom.set_xlabel("")
ax_zoom.set_ylabel("")
ax_zoom.grid(False)
ax_zoom.set_title("8x zoom")
mark_inset(ax, ax_zoom, loc1=1, loc2=4, fc="none", ec='0.3')
# Leyenda de la gráfica
leg = ax.legend(
[planet9_point, earth_point, nepune_point, fifty_close, fifty_far],
["Planeta 9", "Tierra", "Neptuno", "Perihelio", "Afelio"],
numpoints=1
)
leg.get_frame().set_facecolor('#fafafa')
from poliastro import iod
from poliastro.util import norm
date_launch = time.Time('2016-02-01 12:00', scale='utc').tdb
time_of_flight = 200 * u.year
date_arrival = date_launch + time_of_flight
r_0, v_earth = ephem.planet_ephem(ephem.EARTH, date_launch)
r_f = planet9.propagate(time_of_flight).r
v_0, v_f = iod.lambert(Sun.k, r_0, r_f, time_of_flight, rtol=5)
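# Lambert's problem: solve for the departure and arrival velocities of the transfer
# orbit that connects r_0 and r_f in the given time of flight. The very loose rtol
# appears to be there only to force the iteration to converge for this extreme case.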
(norm(v_0 - v_earth)).to(u.km / u.h)
traj1 = State.from_vectors(
Sun,
r_0.to(u.km),
v_0.to(u.km / u.s),
date_launch
)
op = OrbitPlotter(num_points=10000)
op.plot(planet9.propagate(time_of_flight))
#op.plot(earth)
plt.gca().set_autoscale_on(False)
op.plot(traj1)
from poliastro.maneuver import Maneuver
hoh = Maneuver.hohmann(earth, 38e6 * u.km)
hoh.get_total_cost()
interm = earth.apply_maneuver(hoh)
perih = interm.propagate(interm.period / 2)
norm(perih.r)
norm(perih.v)
v_i, _ = iod.lambert(Sun.k, perih.r.to(u.km), planet9.r.to(u.km), 100 * u.year, rtol=12) # De nuevo, tolerancia demasiado grande
norm(v_i)
hoh.get_total_cost() + norm(v_i - perih.v)
op = OrbitPlotter(num_points=10000)
op.plot(earth)
op.plot(interm)
op.plot(perih)
plt.gca().set_autoscale_on(False)
#op.plot(planet9)
op.plot(State.from_vectors(Sun, perih.r, v_i))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The orbit of Planet Nine
Step2: We will create a State object to represent Planet Nine, adding to the parameters estimated in the paper a true anomaly value of $180^{\circ}$, that is
Step3: You read that right
Step4: (Further down we will plot it together with the orbits of the known planets, but the scale of the figure already gives an idea of the distances involved)
Step5: To have something to compare against, we will also plot the orbits of the Earth and Neptune. For this, poliastro will use files called SPK that contain precise information about the orbits of the Solar System planets.
Step6: For the rest we will have to play around a bit with matplotlib and the plotting functions that poliastro provides.
Step7: Two things can be seen in this plot
Step8: Let's look at the magnitude of the departure velocity
Step9: It is not too outlandish, considering that the launch velocity of New Horizons (which reached Pluto in under 10 years) was almost 60 000 km/h. Now let's look at this trajectory
Step10: Practically a straight line, as if this were a highway! Unfortunately 200 years is too long, and if we tried to shorten it the velocity requirements would only get worse. Let's look at another, slightly crazier option.
Step11: The braking burn we have to perform is considerable. Let's now travel to perihelion and carry out the transfer.
Step12: The difference in radii is because we did not start from a circular orbit. We skipped the step of circularizing the orbit for simplicity.
Step13: And the total velocity requirement will be
Step14: Wow, much higher than before! Let's try to plot everything
|
187 | <ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', times=np.linspace(0,1,201), dataset='mylc')
b.run_compute(irrad_method='none')
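# irrad_method='none' skips irradiation/reflection effects, which speeds up the
# synthetic light-curve computation for this quick example.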
afig, mplfig = b['mylc@model'].plot(show=True)
afig, mplfig = b['mylc@model'].plot(x='phases', show=True)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Adding Datasets
Step3: Running Compute
Step4: Plotting
|
188 | <ASSISTANT_TASK:>
Python Code:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
PATH = "data/dogscats/"
sz=224
ARCH = vgg16
bs = 16
# Uncomment if need to reset precomputed activations
!rm -rf {PATH}tmp
data = ImageClassifierData.from_paths(PATH, bs=bs, tfms=tfms_from_model(ARCH, sz))
learn = ConvLearner.pretrained(ARCH, data, precompute=True)
learn.fit(0.01, 3, cycle_len=1)
tfms = tfms_from_model(ARCH, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=bs, num_workers=4)
learn = ConvLearner.pretrained(ARCH, data, precompute=True)
learn.fit(lrs=1e-2, n_cycle=2)
learn.save('vgg16_00') # just in case I run out of memory below
learn.precompute=False
learn.fit(lrs=1e-2, n_cycle=1, cycle_len=1)
learn.save('vgg16_01')
learn.unfreeze()
learn.data.bs = 4 # training ConvNets takes lots of Memory, cut down bs to prevent crashes
lr = np.array([1e-4, 1e-3, 1e-2])
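# Differential learning rates: the three values are applied to the early, middle and
# final layer groups respectively, so the pretrained early layers are updated much
# more gently than the newly added head.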
learn.fit(lrs=lr, n_cycle=1, cycle_len=1)
learn.save('vgg16_02')
learn.lr_find()
learn.fit(lrs=lr, n_cycle=3, cycle_len=1, cycle_mult=2)
learn.save('vgg16_03')
learn.fit(lrs=lr, n_cycle=3, cycle_len=3)
learn.save('vgg16_04')
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
PATH = "data/dogscats/"
sz=224
ARCH = vgg16
bs = 16
data = ImageClassifierData.from_paths(PATH, bs=bs, tfms=tfms_from_model(ARCH, sz),
test_name = 'test1')
# No reason to precompute activations as I'm running a single prediction run on the test set
# Also, since I trained all ConvLayers earlier... how would it know how to compute
# the activations if I haven't loaded the weights yet?
learn = ConvLearner.pretrained(ARCH, data, precompute=False)
# the test data set
len(learn.data.test_dl.dataset)
learn.load('vgg16_04')
# This took about 32 minutes
log_preds = learn.TTA(n_aug=4, is_test=True)[0]
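# TTA = test-time augmentation: each test image is also predicted under n_aug random
# augmentations and the (log-)predictions are combined, which usually gives a small
# accuracy boost over a single pass.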
log_preds_df = pd.DataFrame(log_preds, columns=['dog','cat'])
log_preds_df.to_feather(PATH + 'results/' + 'log_preds')
# log_preds_df = pd.read_feather(PATH + 'results/' + 'log_preds')
test_preds = np.exp(log_preds)
ids = [f[6:-4] for f in learn.data.test_dl.dataset.fnames]
preds = [np.argmax(pred) for pred in test_preds]
submission = pd.DataFrame({'id': ids, 'label': preds})
submission = pd.DataFrame(preds)
submission.columns = ['label']
submission.insert(0, 'id', ids)
submission.head()
submission.to_csv(PATH + 'subm/' + 'submission_vgg16_04.gz', compression='gzip', index=False)
FileLink(PATH + 'subm/' + 'submission_vgg16_04.gz')
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
PATH = "data/dogscats/"
sz=224
ARCH = vgg16
bs = 16
data = ImageClassifierData.from_paths(PATH, bs=bs, tfms=tfms_from_model(ARCH, sz),
test_name = 'test1')
# No reason to precompute activations as I'm running a single prediction run on the test set
# Also, since I trained all ConvLayers earlier... how would it know how to compute
# the activations if I haven't loaded the weights yet?
learn = ConvLearner.pretrained(ARCH, data, precompute=False)
learn.load('vgg16_04')
log_preds, y = learn.TTA()
accuracy(log_preds, y)
df = pd.read_feather(PATH + 'results/' + 'log_preds')
df.head()
preds = np.array(list(zip(*(df['dog'],df['cat']))))
preds = np.exp(preds)
preds = [np.argmax(pred) for pred in preds]
new_preds = [int(i==0) for i in preds]
new_preds[:10]
ids = [f[6:-4] for f in learn.data.test_dl.dataset.fnames]
submission = pd.DataFrame({'id': ids, 'label': new_preds})
submission.to_csv(PATH + 'subm/' + 'submission_vgg16_04_wtf.gz', compression='gzip', index=False)
FileLink(PATH + 'subm/' + 'submission_vgg16_04_wtf.gz')
log_preds_df = np.array(list(zip(*(df['dog'],df['cat']))))
test_preds = np.exp(log_preds_df)
test_preds = np.clip(test_preds, 0.05, 0.95)
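# Clipping the probabilities away from 0 and 1 limits the penalty for confidently
# wrong answers, presumably because the competition is scored with log loss.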
data.classes
labels = test_preds[:,1]
labels[:10]
learn.data.test_dl.dataset.fnames
ids = [f[6:-4] for f in learn.data.test_dl.dataset.fnames]
submission = pd.DataFrame({'id': ids, 'label': labels})
submission.to_csv(PATH + 'subm/' + 'submission_vgg16_04_omg.csv.gz', compression='gzip', index=False)
FileLink(PATH + 'subm/' + 'submission_vgg16_04_omg.csv.gz')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Testing
Step2: Experimenting with saving as Pandas.DataFrame.to_feather(.)
Step3: Another way to create submission file
Step4: Creating FileLink
Step5:
|
189 | <ASSISTANT_TASK:>
Python Code:
import igraph as ig
import json
data = []
with open('miserables.json') as f:
for line in f:
data.append(json.loads(line))
data=data[0]
data
print data.keys()
N=len(data['nodes'])
N
L=len(data['links'])
Edges=[(data['links'][k]['source'], data['links'][k]['target']) for k in range(L)]
G=ig.Graph(Edges, directed=False)
data['nodes'][0]
labels=[]
group=[]
for node in data['nodes']:
labels.append(node['name'])
group.append(node['group'])
layt=G.layout('kk', dim=3)
layt[5]
Xn=[layt[k][0] for k in range(N)]# x-coordinates of nodes
Yn=[layt[k][1] for k in range(N)]# y-coordinates
Zn=[layt[k][2] for k in range(N)]# z-coordinates
Xe=[]
Ye=[]
Ze=[]
for e in Edges:
Xe+=[layt[e[0]][0],layt[e[1]][0], None]# x-coordinates of edge ends
Ye+=[layt[e[0]][1],layt[e[1]][1], None]
Ze+=[layt[e[0]][2],layt[e[1]][2], None]
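# The trailing None after each edge breaks the line trace, so Plotly draws every
# edge as a separate segment instead of one continuous polyline.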
import plotly.plotly as py
from plotly.graph_objs import *
trace1=Scatter3d(x=Xe,
y=Ye,
z=Ze,
mode='lines',
line=Line(color='rgb(125,125,125)', width=1),
hoverinfo='none'
)
trace2=Scatter3d(x=Xn,
y=Yn,
z=Zn,
mode='markers',
name='actors',
marker=Marker(symbol='dot',
size=6,
color=group,
colorscale='Viridis',
line=Line(color='rgb(50,50,50)', width=0.5)
),
text=labels,
hoverinfo='text'
)
axis=dict(showbackground=False,
showline=False,
zeroline=False,
showgrid=False,
showticklabels=False,
title=''
)
layout = Layout(
title="Network of coappearances of characters in Victor Hugo's novel<br> Les Miserables (3D visualization)",
width=1000,
height=1000,
showlegend=False,
scene=Scene(
xaxis=XAxis(axis),
yaxis=YAxis(axis),
zaxis=ZAxis(axis),
),
margin=Margin(
t=100
),
hovermode='closest',
annotations=Annotations([
Annotation(
showarrow=False,
text="Data source: <a href='http://bost.ocks.org/mike/miserables/miserables.json'>[1]</a>",
xref='paper',
yref='paper',
x=0,
y=0.1,
xanchor='left',
yanchor='bottom',
font=Font(
size=14
)
)
]), )
data=Data([trace1, trace2])
py.sign_in('empet', 'jkxft90od0')
fig=Figure(data=data, layout=layout)
py.plot(fig, filename='Les-Miserables')
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read graph data from a json file
Step2: Get the number of nodes
Step3: Define the list of edges
Step4: Define the Graph object from Edges
Step5: Extract the node attributes, 'group', and 'name'
Step6: Get the node positions, set by the Kamada-Kawai layout for 3D graphs
Step7: layt is a list of three elements lists (the coordinates of nodes)
Step8: Set data for the Plotly plot of the graph
Step9: <div>
|
190 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from plotly.offline import init_notebook_mode,iplot
import plotly.graph_objs as go
%matplotlib inline
init_notebook_mode(connected=True)
import quandl
sp500=quandl.get("YAHOO/INDEX_GSPC",start_date="2000-01-03",end_date="2016-12-09")
sp500.info()
sp500.head()
type(sp500.index)
sp500=sp500.drop(['Open','High','Low','Volume','Adjusted Close'],axis=1)
sp500.head()
trace = go.Scatter(x=sp500.index,
y=sp500['Close'])
data=[trace]
layout = dict(
width=1000,
height=600,
title='Historical levels of the S&P 500 index',
xaxis=dict(
rangeselector=dict(
buttons=list([
dict(count=1,
label='1y',
step='year',
stepmode='backward'),
dict(count=5,
label='5y',
step='year',
stepmode='backward'),
dict(count=10,
label='10y',
step='year',
stepmode='backward'),
dict(step='all')
])
),
rangeslider=dict(),
type='date'
)
)
fig = dict(data=data, layout=layout)
iplot(fig)
sp500['42d']=sp500['Close'].rolling(window=42).mean()
sp500['252d']=sp500['Close'].rolling(window=252).mean()
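# 42 trading days is roughly two months and 252 trading days roughly one year,
# so these act as a short-term and a long-term trend line.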
sp500.tail()
trace1 = go.Scatter(x=sp500.index,
y=sp500['Close'],
name='close')
trace2 = go.Scatter(x=sp500.index,
y=sp500['42d'],
name='42d')
trace3 = go.Scatter(x=sp500.index,
y=sp500['252d'],
name='252d')
data=[trace1,trace2,trace3]
layout = dict(
width=1000,
height=600,
title='The S&P 500 index with 42-day and 252-day trend lines ',
xaxis=dict(
rangeselector=dict(
buttons=list([
dict(count=1,
label='1y',
step='year',
stepmode='backward'),
dict(count=5,
label='5y',
step='year',
stepmode='backward'),
dict(count=10,
label='10y',
step='year',
stepmode='backward'),
dict(step='all')
])
),
rangeslider=dict(),
type='date'
)
)
fig = dict(data=data, layout=layout)
iplot(fig)
sp500['42-252']=sp500['42d']-sp500['252d']
sp500['42-252'].tail()
sp500['Signal']=np.where(sp500['42-252']>50,1,0)
sp500['Signal']=np.where(sp500['42-252']<-50,-1,sp500['Signal'])
sp500['Signal'].value_counts()
figure,ax=plt.subplots()
sp500['Signal'].plot(ax=ax,lw=1.3,fontsize=10,
ylim=[-1.1,1.1],
title='Trading signals over time',
grid=True)
sp500['Market returns']=np.log(sp500['Close']/sp500['Close'].shift(1))
sp500['Strategy returns']=sp500['Signal'].shift(1)*sp500['Market returns']
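# shift(1) lags the signal by one day so that today's return is earned on the position
# chosen yesterday, avoiding look-ahead bias; the cumulative log returns are
# exponentiated below to recover gross performance.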
sp500[['Market returns','Strategy returns']].cumsum().apply(np.exp).tail()
return1 = go.Scatter(x=sp500.index,
y=sp500['Market returns'].cumsum().apply(np.exp),
name='Market')
return2 = go.Scatter(x=sp500.index,
y=sp500['Strategy returns'].cumsum().apply(np.exp),
name='Strategy')
data=[return1,return2]
layout = dict(
width=1000,
height=600,
title='The market returns vs the strategy returns ',
xaxis=dict(
rangeselector=dict(
buttons=list([
dict(count=1,
label='1y',
step='year',
stepmode='backward'),
dict(count=5,
label='5y',
step='year',
stepmode='backward'),
dict(count=10,
label='10y',
step='year',
stepmode='backward'),
dict(step='all')
])
),
rangeslider=dict(),
type='date'
)
)
fig = dict(data=data, layout=layout)
iplot(fig)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download data
Step2: Clean data
Step3: Plot the closing quotes over time to get a first impression of the historical market trend using the plotly package. In the following graph, not only can you observe the trend from the start date to the end date but also use the range selectors in the upper left corner and the range slider at the bottom to see the trend in a specific period of time.
Step4: Generate trend lines
Step5: Notice that these two new columns have fewer entries because they start having data only once 42 and 252 observation points, respectively, are available to calculate the moving average. Then, plot these two trend lines in a single figure together with the historical level of the S&P 500 index. You can still use the range selectors and the range slider to observe a certain period. Also, a trend line will disappear if you click on the corresponding legend entry in the upper right corner of the graph. This feature makes it easier to get some insights from those upward and downward trends.
Step6: Generate trading signals
Step7: After the differences between the 42-day trend and the 252-day trend being calculated, the trading signals are generated according to the rule. The signal "1" means to have long positions in the index and get the market returns. The signal "0" means not to buy or sell the index and make no returns. The signal "-1" means to go short on the index and get the negative market returns.
Step8: The result shows that from January 3, 2000 to December 9, 2016, there were 1935 trading days when the 42-day trend was more than 50 points above the 252-day trend. On 950 trading days, the 42-day trend lay more than 50 points below the 252-day trend. The change of signals over time can be seen in the following graph.
Step9: Does the trading strategy perform well?
Step10: Plot the market returns and the returns of the strategy over time to see the performance of the trading strategy constructed on trend lines. As before, you can use the range selectors and the range slider to check whether the strategy works well in a certain period of time.
|
191 | <ASSISTANT_TASK:>
Python Code:
import scipy.io as sio
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import random
from keras.utils import np_utils
from keras.models import Sequential, Model
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D, AveragePooling2D
from keras.optimizers import SGD, Adadelta, Adagrad
from keras.models import load_model
from random import randint
np.set_printoptions(precision=5, suppress=True)
train_data = sio.loadmat('train_32x32.mat')
test_data = sio.loadmat('test_32x32.mat')
# Cargar set de entrenamiento
X_train = train_data['X'].T
y_train = train_data['y'] - 1
# Cargar set de test
X_test = test_data['X'].T
y_test = test_data['y'] - 1
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# Número de clases
n_classes = len(np.unique(y_train))
# Número de ejemplos
n_train = len(X_train)
n_test = len(X_test)
print "Tamaño imágenes: " , np.shape(X_train[0])[1], "x", np.shape(X_train[0])[2]
print "Número de clases:" , n_classes
print "Número de ejemplos de test:" , n_test
print "Número de ejemplos de entrenamiento:" , n_train
# Número de imágenes a mostrar
n_im = 5
print "Set de Test:"
# Se eligen n elementos al azar
index = random.sample(X_test, n_im)
for i in range(0,len(index)):
ax = plt.subplot(1, n_im, 1+i)
im = index[i].reshape(3,32,32).transpose(2,1,0)
plt.imshow(im)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
print "Set de Entrenamiento:"
# Se eligen n elementos al azar
index = random.sample(X_train, n_im)
for i in range(0,len(index)):
ax = plt.subplot(2, n_im, 1+i)
im = index[i].reshape(3,32,32).transpose(2,1,0)
plt.imshow(im)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# Normalizar imágenes
X_train /= 255
X_test /= 255
Y_train = np_utils.to_categorical(y_train, n_classes)
Y_test = np_utils.to_categorical(y_test, n_classes)
(n_channels, n_rows, n_cols) = np.shape(X_train[0])
model = Sequential()
model.add(Convolution2D(16, 5, 5, border_mode='same', activation='relu',
input_shape=(n_channels, n_rows, n_cols)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(512, 7, 7, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(20, activation='relu'))
model.add(Dense(n_classes, activation='softmax'))
model.summary()
# Se carga el output obtenido para mostrarlo
text_file = open('output_3d', 'r')
output = text_file.read()
print output
def generate_model(cf_size, cp_size):
model = Sequential()
model.add(Convolution2D(16,cf_size,cf_size, border_mode='same',activation='relu',input_shape=(n_channels, n_rows, n_cols)))
model.add(MaxPooling2D(pool_size=(cp_size,cp_size)))
model.add(Convolution2D(512,cf_size,cf_size, border_mode='same',activation='relu'))
model.add(MaxPooling2D(pool_size=(cp_size,cp_size)))
model.add(Flatten())
model.add(Dense(20, activation='relu'))
model.add(Dense(n_classes, activation='softmax'))
return model
# Tamaño capas convolucionales
cf_size = [3,5,7,9]
# Tamaño capas de pooling
cp_size = [2,4]
for i in cf_size:
for j in cp_size:
model = generate_model(i,j)
#print "Modelo con tamaño capa convolucional: ", i, " y tamaño capa de pooling: ", j
#model.summary()
# Se carga el output obtenido para mostrarlo
text_file = open('output_3e', 'r')
output = text_file.read()
#print output
def generate_model_f(n_filters_l1, n_filters_l2):
(n_channels, n_rows, n_cols) = np.shape(X_train[0])
# Se eligen estos parámetros
cf_size = 5
cp_size = 2
model = Sequential()
model.add(Convolution2D(n_filters_l1,cf_size,cf_size,border_mode='same',activation='relu',
input_shape=(n_channels, n_rows, n_cols)))
model.add(MaxPooling2D(pool_size=(cp_size,cp_size)))
model.add(Convolution2D(n_filters_l2,cf_size,cf_size,border_mode='same',activation='relu'))
model.add(MaxPooling2D(pool_size=(cp_size,cp_size)))
model.add(Flatten())
model.add(Dense(20, activation='relu'))
model.add(Dense(n_classes, activation='softmax'))
return model
# Se carga el output obtenido para mostrarlo
text_file = open('output_3f', 'r')
output = text_file.read()
#print output
#Cambio de arquitectura, 2 capas convolucionales seguidas de ua capa de pooling
model = Sequential()
model.add(Convolution2D(32, 3, 3,border_mode='same',activation='relu',input_shape=(3,32,32)))
model.add(Convolution2D(32, 3, 3,activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(20, activation='relu'))
model.add(Dense(n_classes, activation='softmax'))
# Se carga el output obtenido para mostrarlo
text_file = open('output_3g', 'r')
output = text_file.read()
print output
import theano
# Modelo sugerido en e
#np.random.seed(1337) # for reproducibility
model = Sequential()
model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape=(n_channels, n_rows, n_cols)))
convout1 = Activation('relu')
model.add(convout1)
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(512, 7, 7, border_mode='same'))
convout2 = Activation('relu')
model.add(convout2)
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(20, activation='relu'))
model.add(Dense(n_classes, activation='softmax'))
from keras import backend as K
import pylab as pl
import matplotlib.cm as cm
inputs = [K.learning_phase()] + model.inputs
_convout1_f = K.function(inputs, [convout1.output])
def convout1_f(X):
# The [0] is to disable the training phase flag
return _convout1_f([0] + [X])
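# K.function builds a backend function that returns the activations of the first
# convolutional layer; the leading 0 sets the Keras learning phase to "test" so any
# training-only behaviour is disabled when fetching activations.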
# utility functions
from mpl_toolkits.axes_grid1 import make_axes_locatable
def nice_imshow(ax, data, vmin=None, vmax=None, cmap=None):
# Wrapper around pl.imshow
if cmap is None:
cmap = cm.jet
if vmin is None:
vmin = data.min()
if vmax is None:
vmax = data.max()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
im = ax.imshow(data, vmin=vmin, vmax=vmax, interpolation='nearest', cmap=cmap)
pl.colorbar(im, cax=cax)
i = 130
# Visualize the first layer of convolutions on an input image
X = X_test[i:i+1]
pl.figure()
pl.title('input')
nice_imshow(pl.gca(), np.squeeze(X.reshape(3,32,32).transpose(2,1,0)), vmin=0, vmax=1, cmap=cm.binary)
plt.show()
X = X_test[i+1:i+2]
pl.figure()
pl.title('input')
nice_imshow(pl.gca(), np.squeeze(X.reshape(3,32,32).transpose(2,1,0)), vmin=0, vmax=1, cmap=cm.binary)
plt.show()
import numpy.ma as ma
def make_mosaic(imgs, nrows, ncols, border=1):
# Given a set of images with all the same shape, makes a
# mosaic with nrows and ncols
nimgs = imgs.shape[0]
imshape = imgs.shape[1:]
mosaic = ma.masked_all((nrows * imshape[0] + (nrows - 1) * border,
ncols * imshape[1] + (ncols - 1) * border),
dtype=np.float32)
paddedh = imshape[0] + border
paddedw = imshape[1] + border
for i in xrange(nimgs):
row = int(np.floor(i / ncols))
col = i % ncols
mosaic[row * paddedh:row * paddedh + imshape[0],
col * paddedw:col * paddedw + imshape[1]] = imgs[i]
return mosaic
def visualize_weights(W):
plt.figure(figsize=(10, 10))
for ind, val in enumerate(W):
ax = plt.subplot(4, 4, ind+1)
#print val.shape[:2]
#im = val.reshape((5,5))
plt.imshow(val, cmap=cm.binary, interpolation='nearest')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
W = model.layers[0].get_weights()[0]
#W = model.layers[0].W.get_value(borrow=True)
W = np.squeeze(W)
W = W[:,0,:,:]
visualize_weights(W)
# Visualize convolution result (after activation)
# Visualize the first layer of convolutions on an input image
X = X_test[i:i+1]
C1 = convout1_f(X)
C1 = np.squeeze(C1).transpose(0,2,1)
pl.figure(figsize=(10, 10))
nice_imshow(pl.gca(), make_mosaic(C1, 4, 4), cmap=cm.binary)
plt.show()
X = X_test[i+1:i+2]
C1 = convout1_f(X)
C1 = np.squeeze(C1).transpose(0,2,1)
pl.figure(figsize=(10, 10))
nice_imshow(pl.gca(), make_mosaic(C1, 4, 4), cmap=cm.binary)
plt.show()
# Modelo utilizando Dropout
model = Sequential()
model.add(Convolution2D(32, 3, 3,border_mode='same',activation='relu',input_shape=(3,32,32)))
model.add(Convolution2D(32, 3, 3,activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(20, activation='relu'))
model.add(Dense(n_classes, activation='softmax'))
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
# Se carga el output obtenido para mostrarlo
text_file = open('output_3i', 'r')
output = text_file.read()
print output
train_data = sio.loadmat('train_32x32.mat')
# Cargar set de entrenamiento
X_train = train_data['X'].T
y_train = train_data['y'] - 1
X_train = X_train.astype('float32')
(n_channels, n_rows, n_cols) = np.shape(X_train[0])
model = Sequential()
model.add(Convolution2D(16, 5, 5, border_mode='same', activation='relu',
input_shape=(n_channels, n_rows, n_cols)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(512, 7, 7, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(20, activation='relu'))
model.add(Dense(n_classes, activation='softmax'))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: a)
Step2: We will work with SVHN, or Street View House Numbers. We can see that the images we will be working with do indeed belong to a set of house numbers, where each image is expected to show a single number; this is why there are 10 possible classes, corresponding to the digits 0 to 9.
Step3: c)
Step4: We can see how the dimensionality of the image changes as it passes through the network: an initial 32x32 image is reduced to 16x16 and then to 8x8; after that it is flattened into a one-dimensional array, and the output has size 10, indicating which class the image most likely belongs to.
Step5: We can see that an accuracy of 96.72% is reached, which exceeds the value of 94.28% reported in the literature
Step6: Dimensions
Step7: |Convolutional layer size|Pooling layer size|loss|accuracy|Epoch time [s]|
Step8: ```python
Step9: |Filters layer 1|Filters layer 2|loss|accuracy|Epoch time [s]|
Step10: ```python
Step12: Se puede observar que al modificar la arquitectura de la red, los tiempos de entrenamiento aumentan casi al doble, pero no se observa una mejora en el desempeño de la red, obteniendo 91,89% de accuracy, lo que es mucho menor que el mejor resultado obtenido hasta ahora.
Step14: Se visualizan las imágenes de entrenamiento a usar para visualizar los filtros.
Step15: Se visualizan los pesos de la red, los cuales no pueden ser interpretados debido a su baja dimensionalidad.
Step16: Al visualizar los filtros de la primera capa si se puede reconocer que algunos intentan visualizar bordes y quizas formas, sombras, entre otros. Pero son interpretables.
Step17: ```python
Step18: El modelo utilizando Dropout mejora mucho el resultado comparado al modelo similar sin Dropout visto en g), obteniendo un accuracy de 97,56% que es el más alto obtenido hasta ahora y un loss de 0.0773 lo cual es muy bajo y bueno.
|
192 | <ASSISTANT_TASK:>
Python Code:
reviews = gl.SFrame.read_csv('../data/yelp/yelp_training_set_review.json', header=False)
reviews
reviews[0]
reviews=reviews.unpack('X1','')
reviews
reviews = reviews.unpack('votes', '')
reviews
reviews.show()
reviews['date'] = reviews['date'].str_to_datetime(str_format='%Y-%m-%d')
reviews['total_votes'] = reviews['funny'] + reviews['cool'] + reviews['useful']
reviews
reviews['total_votes'] > 0
reviews = reviews[reviews['total_votes'] > 0]
reviews
reviews['funny'] = reviews['funny'] > 0
reviews = reviews[['text','funny']]
reviews
reviews = reviews[:10000]
word_delims = ["\r", "\v", "\n", "\f", "\t", " ",
'~', '`', '!', '@', '#', '$', '%', '^', '&', '*', '-', '_', '+', '=',
',', '.', ';', ':', '\"', '?', '|', '\\', '/',
'<', '>', '(', ')', '[', ']', '{', '}']
reviews['bow'] = gl.text_analytics.count_words(reviews['text'], delimiters=word_delims)
reviews['tf_idf'] = gl.text_analytics.tf_idf(reviews['bow'])
reviews['tf_idf'] = reviews['tf_idf'].apply(lambda x: x['docs'])
reviews
train_sf, test_sf = reviews.random_split(0.8)
m1 = gl.logistic_classifier.create(train_sf,
'funny',
features=['bow'],
validation_set=None,
feature_rescaling=False)
m2 = gl.logistic_classifier.create(train_sf,
'funny',
features=['tf_idf'],
validation_set=None,
feature_rescaling=False)
m1_res = m1.evaluate(test_sf)
m1_res
m2_res = m2.evaluate(test_sf)
m2_res
float(test_sf['funny'].sum())/test_sf.num_rows()
1.0 - float(test_sf['funny'].sum())/test_sf.num_rows()
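# The second number is the accuracy of always predicting the majority class
# ("not funny"); a useful model should beat this baseline.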
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unpack to extract structure
Step2: Votes are still crammed in a dictionary. Let's unpack it.
Step3: Quick data visualization
Step4: Represent datetime
Step5: Munge votes and add a new column
Step6: Filter rows to remove reviews with no votes
Step7: Classification task
Step8: To save time, take just a small subset
Step9: Create bag-of-words representation of text
Step10: Create tf-idf representation of the bag of words
Step11: Create a train-test split
Step12: Train classifiers on bow and tf-idf
Step13: Evaluate on validation set and compare performance
Step14: Baseline accuracy (what if we classify everything as the majority class)
Step15: Percentage of not funny reviews
|
193 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q -U tensorflow-text
!pip install -q tf-models-official==2.4.0
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from official.modeling import tf_utils
from official import nlp
from official.nlp import bert
# Load the required submodules
import official.nlp.optimization
import official.nlp.bert.bert_models
import official.nlp.bert.configs
import official.nlp.bert.run_classifier
import official.nlp.bert.tokenization
import official.nlp.data.classifier_data_lib
import official.nlp.modeling.losses
import official.nlp.modeling.models
import official.nlp.modeling.networks
gs_folder_bert = "gs://cloud-tpu-checkpoints/bert/v3/uncased_L-12_H-768_A-12"
tf.io.gfile.listdir(gs_folder_bert)
hub_url_bert = "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3"
glue, info = tfds.load('glue/mrpc', with_info=True,
# It's small, load the whole dataset
batch_size=-1)
list(glue.keys())
info.features
info.features['label'].names
glue_train = glue['train']
for key, value in glue_train.items():
print(f"{key:9s}: {value[0].numpy()}")
# Set up tokenizer to generate Tensorflow dataset
tokenizer = bert.tokenization.FullTokenizer(
vocab_file=os.path.join(gs_folder_bert, "vocab.txt"),
do_lower_case=True)
print("Vocab size:", len(tokenizer.vocab))
tokens = tokenizer.tokenize("Hello TensorFlow!")
print(tokens)
ids = tokenizer.convert_tokens_to_ids(tokens)
print(ids)
tokenizer.convert_tokens_to_ids(['[CLS]', '[SEP]'])
def encode_sentence(s):
tokens = list(tokenizer.tokenize(s.numpy()))
tokens.append('[SEP]')
return tokenizer.convert_tokens_to_ids(tokens)
sentence1 = tf.ragged.constant([
encode_sentence(s) for s in glue_train["sentence1"]])
sentence2 = tf.ragged.constant([
encode_sentence(s) for s in glue_train["sentence2"]])
print("Sentence1 shape:", sentence1.shape.as_list())
print("Sentence2 shape:", sentence2.shape.as_list())
cls = [tokenizer.convert_tokens_to_ids(['[CLS]'])]*sentence1.shape[0]
input_word_ids = tf.concat([cls, sentence1, sentence2], axis=-1)
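# Each example is packed as [CLS] sentence1 [SEP] sentence2 [SEP]; the ragged tensor
# is zero-padded to the longest sequence in the batch when converted with to_tensor() below.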
_ = plt.pcolormesh(input_word_ids.to_tensor())
input_mask = tf.ones_like(input_word_ids).to_tensor()
plt.pcolormesh(input_mask)
type_cls = tf.zeros_like(cls)
type_s1 = tf.zeros_like(sentence1)
type_s2 = tf.ones_like(sentence2)
input_type_ids = tf.concat([type_cls, type_s1, type_s2], axis=-1).to_tensor()
plt.pcolormesh(input_type_ids)
def encode_sentence(s, tokenizer):
tokens = list(tokenizer.tokenize(s))
tokens.append('[SEP]')
return tokenizer.convert_tokens_to_ids(tokens)
def bert_encode(glue_dict, tokenizer):
num_examples = len(glue_dict["sentence1"])
sentence1 = tf.ragged.constant([
encode_sentence(s, tokenizer)
for s in np.array(glue_dict["sentence1"])])
sentence2 = tf.ragged.constant([
encode_sentence(s, tokenizer)
for s in np.array(glue_dict["sentence2"])])
cls = [tokenizer.convert_tokens_to_ids(['[CLS]'])]*sentence1.shape[0]
input_word_ids = tf.concat([cls, sentence1, sentence2], axis=-1)
input_mask = tf.ones_like(input_word_ids).to_tensor()
type_cls = tf.zeros_like(cls)
type_s1 = tf.zeros_like(sentence1)
type_s2 = tf.ones_like(sentence2)
input_type_ids = tf.concat(
[type_cls, type_s1, type_s2], axis=-1).to_tensor()
inputs = {
'input_word_ids': input_word_ids.to_tensor(),
'input_mask': input_mask,
'input_type_ids': input_type_ids}
return inputs
glue_train = bert_encode(glue['train'], tokenizer)
glue_train_labels = glue['train']['label']
glue_validation = bert_encode(glue['validation'], tokenizer)
glue_validation_labels = glue['validation']['label']
glue_test = bert_encode(glue['test'], tokenizer)
glue_test_labels = glue['test']['label']
for key, value in glue_train.items():
print(f'{key:15s} shape: {value.shape}')
print(f'glue_train_labels shape: {glue_train_labels.shape}')
import json
bert_config_file = os.path.join(gs_folder_bert, "bert_config.json")
config_dict = json.loads(tf.io.gfile.GFile(bert_config_file).read())
bert_config = bert.configs.BertConfig.from_dict(config_dict)
config_dict
bert_classifier, bert_encoder = bert.bert_models.classifier_model(
bert_config, num_labels=2)
tf.keras.utils.plot_model(bert_classifier, show_shapes=True, dpi=48)
glue_batch = {key: val[:10] for key, val in glue_train.items()}
bert_classifier(
glue_batch, training=True
).numpy()
tf.keras.utils.plot_model(bert_encoder, show_shapes=True, dpi=48)
checkpoint = tf.train.Checkpoint(encoder=bert_encoder)
checkpoint.read(
os.path.join(gs_folder_bert, 'bert_model.ckpt')).assert_consumed()
# Set up epochs and steps
epochs = 3
batch_size = 32
eval_batch_size = 32
train_data_size = len(glue_train_labels)
steps_per_epoch = int(train_data_size / batch_size)
num_train_steps = steps_per_epoch * epochs
warmup_steps = int(epochs * train_data_size * 0.1 / batch_size)
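# Warm up the learning rate linearly over the first 10% of the training steps,
# then decay it, following the standard BERT fine-tuning recipe.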
# creates an optimizer with learning rate schedule
optimizer = nlp.optimization.create_optimizer(
2e-5, num_train_steps=num_train_steps, num_warmup_steps=warmup_steps)
type(optimizer)
metrics = [tf.keras.metrics.SparseCategoricalAccuracy('accuracy', dtype=tf.float32)]
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
bert_classifier.compile(
optimizer=optimizer,
loss=loss,
metrics=metrics)
bert_classifier.fit(
glue_train, glue_train_labels,
validation_data=(glue_validation, glue_validation_labels),
batch_size=32,
epochs=epochs)
my_examples = bert_encode(
glue_dict = {
'sentence1':[
'The rain in Spain falls mainly on the plain.',
'Look I fine tuned BERT.'],
'sentence2':[
'It mostly rains on the flat lands of Spain.',
'Is it working? This does not match.']
},
tokenizer=tokenizer)
result = bert_classifier(my_examples, training=False)
result = tf.argmax(result).numpy()
result
np.array(info.features['label'].names)[result]
export_dir='./saved_model'
tf.saved_model.save(bert_classifier, export_dir=export_dir)
reloaded = tf.saved_model.load(export_dir)
reloaded_result = reloaded([my_examples['input_word_ids'],
my_examples['input_mask'],
my_examples['input_type_ids']], training=False)
original_result = bert_classifier(my_examples, training=False)
# The results are (nearly) identical:
print(original_result.numpy())
print()
print(reloaded_result.numpy())
processor = nlp.data.classifier_data_lib.TfdsProcessor(
tfds_params="dataset=glue/mrpc,text_key=sentence1,text_b_key=sentence2",
process_text_fn=bert.tokenization.convert_to_unicode)
# Set up output of training and evaluation Tensorflow dataset
train_data_output_path="./mrpc_train.tf_record"
eval_data_output_path="./mrpc_eval.tf_record"
max_seq_length = 128
batch_size = 32
eval_batch_size = 32
# Generate and save training data into a tf record file
input_meta_data = (
nlp.data.classifier_data_lib.generate_tf_record_from_data_file(
processor=processor,
data_dir=None, # It is `None` because data is from tfds, not local dir.
tokenizer=tokenizer,
train_data_output_path=train_data_output_path,
eval_data_output_path=eval_data_output_path,
max_seq_length=max_seq_length))
training_dataset = bert.run_classifier.get_dataset_fn(
train_data_output_path,
max_seq_length,
batch_size,
is_training=True)()
evaluation_dataset = bert.run_classifier.get_dataset_fn(
eval_data_output_path,
max_seq_length,
eval_batch_size,
is_training=False)()
training_dataset.element_spec
def create_classifier_dataset(file_path, seq_length, batch_size, is_training):
# Creates input dataset from (tf)records files for train/eval.
dataset = tf.data.TFRecordDataset(file_path)
if is_training:
dataset = dataset.shuffle(100)
dataset = dataset.repeat()
def decode_record(record):
name_to_features = {
'input_ids': tf.io.FixedLenFeature([seq_length], tf.int64),
'input_mask': tf.io.FixedLenFeature([seq_length], tf.int64),
'segment_ids': tf.io.FixedLenFeature([seq_length], tf.int64),
'label_ids': tf.io.FixedLenFeature([], tf.int64),
}
return tf.io.parse_single_example(record, name_to_features)
def _select_data_from_record(record):
x = {
'input_word_ids': record['input_ids'],
'input_mask': record['input_mask'],
'input_type_ids': record['segment_ids']
}
y = record['label_ids']
return (x, y)
dataset = dataset.map(decode_record,
num_parallel_calls=tf.data.AUTOTUNE)
dataset = dataset.map(
_select_data_from_record,
num_parallel_calls=tf.data.AUTOTUNE)
dataset = dataset.batch(batch_size, drop_remainder=is_training)
dataset = dataset.prefetch(tf.data.AUTOTUNE)
return dataset
# Set up batch sizes
batch_size = 32
eval_batch_size = 32
# Return Tensorflow dataset
training_dataset = create_classifier_dataset(
train_data_output_path,
input_meta_data['max_seq_length'],
batch_size,
is_training=True)
evaluation_dataset = create_classifier_dataset(
eval_data_output_path,
input_meta_data['max_seq_length'],
eval_batch_size,
is_training=False)
training_dataset.element_spec
# Note: 350MB download.
import tensorflow_hub as hub
hub_model_name = "bert_en_uncased_L-12_H-768_A-12" #@param ["bert_en_uncased_L-24_H-1024_A-16", "bert_en_wwm_cased_L-24_H-1024_A-16", "bert_en_uncased_L-12_H-768_A-12", "bert_en_wwm_uncased_L-24_H-1024_A-16", "bert_en_cased_L-24_H-1024_A-16", "bert_en_cased_L-12_H-768_A-12", "bert_zh_L-12_H-768_A-12", "bert_multi_cased_L-12_H-768_A-12"]
hub_encoder = hub.KerasLayer(f"https://tfhub.dev/tensorflow/{hub_model_name}/3",
trainable=True)
print(f"The Hub encoder has {len(hub_encoder.trainable_variables)} trainable variables")
result = hub_encoder(
inputs=dict(
input_word_ids=glue_train['input_word_ids'][:10],
input_mask=glue_train['input_mask'][:10],
input_type_ids=glue_train['input_type_ids'][:10],),
training=False,
)
print("Pooled output shape:", result['pooled_output'].shape)
print("Sequence output shape:", result['sequence_output'].shape)
hub_classifier = nlp.modeling.models.BertClassifier(
bert_encoder,
num_classes=2,
dropout_rate=0.1,
initializer=tf.keras.initializers.TruncatedNormal(
stddev=0.02))
tf.keras.utils.plot_model(hub_classifier, show_shapes=True, dpi=64)
try:
tf.keras.utils.plot_model(hub_encoder, show_shapes=True, dpi=64)
assert False
except Exception as e:
print(f"{type(e).__name__}: {e}")
bert_encoder_config = config_dict.copy()
# You need to rename a few fields to make this work:
bert_encoder_config['attention_dropout_rate'] = bert_encoder_config.pop('attention_probs_dropout_prob')
bert_encoder_config['activation'] = tf_utils.get_activation(bert_encoder_config.pop('hidden_act'))
bert_encoder_config['dropout_rate'] = bert_encoder_config.pop('hidden_dropout_prob')
bert_encoder_config['initializer'] = tf.keras.initializers.TruncatedNormal(
stddev=bert_encoder_config.pop('initializer_range'))
bert_encoder_config['max_sequence_length'] = bert_encoder_config.pop('max_position_embeddings')
bert_encoder_config['num_layers'] = bert_encoder_config.pop('num_hidden_layers')
bert_encoder_config
manual_encoder = nlp.modeling.networks.BertEncoder(**bert_encoder_config)
checkpoint = tf.train.Checkpoint(encoder=manual_encoder)
checkpoint.read(
os.path.join(gs_folder_bert, 'bert_model.ckpt')).assert_consumed()
result = manual_encoder(my_examples, training=True)
print("Sequence output shape:", result[0].shape)
print("Pooled output shape:", result[1].shape)
manual_classifier = nlp.modeling.models.BertClassifier(
bert_encoder,
num_classes=2,
dropout_rate=bert_encoder_config['dropout_rate'],
initializer=bert_encoder_config['initializer'])
manual_classifier(my_examples, training=True).numpy()
optimizer = nlp.optimization.create_optimizer(
2e-5, num_train_steps=num_train_steps, num_warmup_steps=warmup_steps)
epochs = 3
batch_size = 32
eval_batch_size = 32
train_data_size = len(glue_train_labels)
steps_per_epoch = int(train_data_size / batch_size)
num_train_steps = steps_per_epoch * epochs
decay_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
initial_learning_rate=2e-5,
decay_steps=num_train_steps,
end_learning_rate=0)
plt.plot([decay_schedule(n) for n in range(num_train_steps)])
warmup_steps = num_train_steps * 0.1
warmup_schedule = nlp.optimization.WarmUp(
initial_learning_rate=2e-5,
decay_schedule_fn=decay_schedule,
warmup_steps=warmup_steps)
# The warmup overshoots, because it warms up to the `initial_learning_rate`
# following the original implementation. You can set
# `initial_learning_rate=decay_schedule(warmup_steps)` if you don't like the
# overshoot.
plt.plot([warmup_schedule(n) for n in range(num_train_steps)])
optimizer = nlp.optimization.AdamWeightDecay(
learning_rate=warmup_schedule,
weight_decay_rate=0.01,
epsilon=1e-6,
exclude_from_weight_decay=['LayerNorm', 'layer_norm', 'bias'])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fine-tuning a BERT model
Step2: Imports
Step3: Resources
Step4: You can get a pre-trained BERT encoder from TensorFlow Hub
Step5: The data
Step6: The info object describes the dataset and it's features
Step7: The two classes are
Step8: Here is one example from the training set
Step9: The BERT tokenizer
Step10: Tokenize a sentence
Step11: Preprocess the data
Step12: Start by encoding all the sentences while appending a [SEP] token, and packing them into ragged-tensors
Step13: Now prepend a [CLS] token, and concatenate the ragged tensors to form a single input_word_ids tensor for each example. RaggedTensor.to_tensor() zero pads to the longest sequence.
Step14: Mask and input type
Step15: The "input type" also has the same shape, but inside the non-padded region, contains a 0 or a 1 indicating which sentence the token is a part of.
Step16: Put it all together
Step17: Each subset of the data has been converted to a dictionary of features, and a set of labels. Each feature in the input dictionary has the same shape, and the number of labels should match
Step18: The model
Step19: The config defines the core BERT Model, which is a Keras model to predict the outputs of num_classes from the inputs with maximum sequence length max_seq_length.
Step20: The classifier has three inputs and one output
Step21: Run it on a test batch of data 10 examples from the training set. The output is the logits for the two classes
Step22: The TransformerEncoder in the center of the classifier above is the bert_encoder.
Step23: Restore the encoder weights
Step24: Note
Step25: This returns an AdamWeightDecay optimizer with the learning rate schedule set
Step26: To see an example of how to customize the optimizer and its schedule, see the Optimizer schedule appendix.
Step27: Now run the fine-tuned model on a custom example to see that it works.
Step28: The model should report class 1 "match" for the first example and class 0 "no-match" for the second
Step29: Save the model
Step30: Appendix
Step31: Then apply the transformation to generate new TFRecord files.
Step32: Finally create tf.data input pipelines from those TFRecord files
Step33: The resulting tf.data.Datasets return (features, labels) pairs, as expected by keras.Model.fit
Step35: Create tf.data.Dataset for training and evaluation
Step36: <a id="hub_bert"></a>
Step37: Test run it on a batch of data
Step38: At this point it would be simple to add a classification head yourself.
Step39: The one downside to loading this model from TFHub is that the structure of internal keras layers is not restored. So it's more difficult to inspect or modify the model. The BertEncoder model is now a single layer
Step40: <a id="model_builder_functions"></a>
Step41: Restore the weights
Step42: Test run it
Step43: Wrap it in a classifier
Step44: <a id="optimizer_schedule"></a>
Step45: That high level wrapper sets up the learning rate schedules and the optimizer.
Step46: This, in turn is wrapped in a WarmUp schedule that linearly increases the learning rate to the target value over the first 10% of training
Step47: Then create the nlp.optimization.AdamWeightDecay using that schedule, configured for the BERT model
|
194 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import json
import codecs
import warnings
import matplotlib.pyplot as plt
%matplotlib inline
race_metadata = pd.read_csv('~/election-twitter/elections-twitter/data/race-metadata.csv')
race_metadata_2016 = pd.read_csv('~/election-twitter/elections-twitter/data/race-metadata-2016.csv')
race_metadata.head()
race_metadata_2016.head()
## How many races in the train and test sets?
race_metadata.shape[0], race_metadata_2016.shape[0]
## put a column in that grabs the winner out
race_metadata['winner'] = race_metadata.Result.apply(lambda x: json.loads(x)[0][0])
race_metadata_2016['winner'] = race_metadata_2016.Result.apply(lambda x: json.loads(x)[0][0])
race_metadata.head()
race_metadata_2016.head()
## how many candidates in each race in the train set?
race_metadata.Result.apply(lambda x: len(json.loads(x))).describe()
## how many candidates in each race in the test set?
race_metadata_2016.Result.apply(lambda x: len(json.loads(x))).describe()
def make_ascii(s):
return s.encode('ascii','ignore').decode('ascii')
def make_df(race_metadata,year=2014):
values = []
path = '/Users/adamwlevin/election-twitter/elections-twitter/data/tweets'
if year==2016:
path += '/t2016'
for row_ind, row in race_metadata.iterrows():
try:
with codecs.open('%s/%s.json' % (path,make_ascii(row.Race).replace(' ',''),),'r','utf-8-sig') as f:
tweets = json.load(f)
except FileNotFoundError:
print('Did not find %s ' % (row.Race,))
continue
for candidate,data in tweets.items():
if candidate in ('–','Blank/Void/Scattering','Write-Ins','Others'):
continue
record = [[]]*4
for date,data_ in data.items():
if data_ and data_!='Made 5 attempts, all unsucessful.':
data_ = np.array(data_)
for i in range(4):
record[i] = \
np.concatenate([record[i],data_[:,i].astype(int) if i!=0 else data_[:,i]])
values.append([candidate]+record+[1 if candidate==row.winner else 0,row_ind])
return pd.DataFrame(values,columns=['candidate','tweets','replies',
'retweets','favorites',
'winner','race_index'])
## make the train set and test set
df_train = make_df(race_metadata)
df_test = make_df(race_metadata_2016,year=2016)
## take a look at the result
df_train.head()
df_test.head()
## who has the most tweets of the 2016 candidates?
df_test.loc[df_test.tweets.apply(len).idxmax()]
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.stop_words import ENGLISH_STOP_WORDS
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
from xgboost import XGBClassifier
from sklearn.model_selection import StratifiedKFold
## This is useful for selecting a subset of features in the middle of a Pipeline
class ItemSelector(BaseEstimator, TransformerMixin):
def __init__(self, keys, ndim):
self.keys = keys
self.ndim = ndim
def fit(self, x, y=None):
return self
def transform(self, data_dict):
res = data_dict[self.keys]
return res
## Making some features about the text itself
class TweetTextMetadata(BaseEstimator, TransformerMixin):
def __init__(self):
pass
def fit(self, x, y=None):
return self
def transform(self, docs):
ave_words_per_tweet = [sum(len(tweet.split(' '))
for tweet in tweets)\
/len(tweets)
if len(tweets) else 0
for tweets in docs]
total_number_words = [sum(len(tweet.split(' '))
for tweet in tweets)
for tweets in docs]
ave_word_len = [sum(len(word) for tweet in tweets
for word in tweet.split(' '))/\
sum(1 for tweet in tweets
for word in tweet.split(' '))
if len(tweets) else 0 for tweets in docs]
total_periods = [sum(tweet.count('.')
for tweet in tweets)
for tweets in docs]
total_q_marks = [sum(tweet.count('?')
for tweet in tweets)
for tweets in docs]
return np.column_stack([value
for key,value in locals().items()
if isinstance(value,list)])
names = ['ave_words_per_tweet','total_number_words','ave_word_len','total_periods','total_q_marks']
## Making some features about the favorites, retweets, etc.
class TweetStats(BaseEstimator, TransformerMixin):
def __init__(self):
pass
def fit(self, x, y=None):
return self
def transform(self, df):
warnings.filterwarnings("ignore",
message="Mean of empty slice.")
total_replies = df.replies.apply(sum)
total_retweets = df.retweets.apply(sum)
total_favorites = df.favorites.apply(sum)
num_tweets = df.replies.apply(len)
ave_replies_per_tweet = df.replies.apply(np.mean).fillna(0)
ave_retweets_per_tweet = df.retweets.apply(np.mean).fillna(0)
ave_favorites_per_tweet = df.favorites.apply(np.mean).fillna(0)
ninety_eighth_percentile_replies = df.replies.apply(lambda x: np.percentile(x,98.) if len(x) else 0.)
ninety_eighth_percentile_retweets = df.retweets.apply(lambda x: np.percentile(x,98.) if len(x) else 0.)
ninety_eighth_percentile_favorites = df.favorites.apply(lambda x: np.percentile(x,98.) if len(x) else 0.)
return np.column_stack([value.values for key,value in locals().items() if isinstance(value,pd.Series)])
names = ['total_replies','total_retweets','total_favorites',
'num_tweets','ave_replies_per_tweet','ave_retweets_per_tweet',
'ave_favorites_per_tweet','ninety_eighth_percentile_replies',
'ninety_eighth_percentile_retweets',
'ninety_eighth_percentile_favorites']
## This inherits a TfidfVectorizer and just cleans the tweets a little before vectorizing them
## (this is probably unnecessary but haven't tested)
class CustomTfidfVectorizer(TfidfVectorizer):
def cleanse_tweets(self,tweets):
return ' '.join([word for tweet in tweets
for word in tweet.split(' ')
if 'http://' not in word
and 'www.' not in word
and '@' not in word
and 'https://' not in word
and '.com' not in word
and '.net' not in word])
def fit(self, x, y=None):
return super().fit(x.apply(self.cleanse_tweets).values)
def transform(self, x):
return super().transform(x.apply(self.cleanse_tweets).values)
def fit_transform(self, x, y=None):
self.fit(x,y)
return self.transform(x)
## This takes in a XGBClassifier and finds the optimal number of trees using CV
def get_num_trees(clf,X,y,cv,eval_metric='logloss',early_stopping_rounds=10):
n_trees = []
for train,test in cv.split(X,y):
clf.fit(X[train], y[train],
eval_set=[[X[test],y[test]]],
eval_metric=eval_metric,
early_stopping_rounds=early_stopping_rounds,
verbose=False)
n_trees.append(clf.best_iteration)
print('Number of trees selected: %d' % \
(int(sum(n_trees)/len(n_trees)),))
return int(sum(n_trees)/len(n_trees))
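## Each CV fold is trained with early stopping against its held-out fold, and the
## best iteration counts are averaged to pick n_estimators for the final fit on the
## full training set.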
names = [name_.lower() for result in race_metadata.Result
for name,_,_ in json.loads(result) for name_ in name.split()]
stop_words = names + list(ENGLISH_STOP_WORDS)
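## Candidate names are treated as stop words, presumably so the classifier cannot
## simply key on who the well-known candidates are.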
## I did grid search some of the below hyperparameters using grouped CV
features = FeatureUnion(
[
('tfidf',Pipeline([
('selector',ItemSelector(keys='tweets',ndim=1)),
('tfidf',CustomTfidfVectorizer(use_idf=False,
stop_words=stop_words,
ngram_range=(1,1),
min_df=.05))
])),
('tweet_metadata',Pipeline([
('selector',ItemSelector(keys='tweets',ndim=1)),
('metadata_extractor',TweetTextMetadata())
])),
('tweet_stats',Pipeline([
('selector',ItemSelector(keys=['replies','retweets',
'favorites'],
ndim=2)),
('tweet_stats_extractor',TweetStats())
]))
])
clf = XGBClassifier(learning_rate=.01,n_estimators=100000,
subsample=.9,max_depth=2)
## make train matrix, fit model on train set
X = features.fit_transform(df_train[['tweets','replies',
'retweets','favorites']])
y = df_train['winner'].values
cv = StratifiedKFold(n_splits=6,shuffle=True)
n_estimators = get_num_trees(clf,X,y,cv)
clf.n_estimators = n_estimators
clf.fit(X,y)
feature_names = sorted(['WORD_%s' % (word,)
for word in features.get_params()['tfidf'].get_params()['tfidf'].vocabulary_.keys()]) +\
TweetTextMetadata.names +\
TweetStats.names
## print top 10 importances and their names
importances = clf.feature_importances_
importances = {u:val for u,val in enumerate(importances)}
for ind in sorted(importances,key=importances.get,reverse=True)[:10]:
print(feature_names[ind],importances[ind])
preds = clf.predict_proba(X)[:,1]
## put the raw predictions in the dataframe so we can use df.groupy
df_train['pred_raw'] = preds
df_train.head()
## get dictionaries mappying race index to index of predicted and true winners
preds = df_train.groupby('race_index').pred_raw.apply(lambda x: x.idxmax()).to_dict()
true = df_train.groupby('race_index').winner.apply(lambda x: x.idxmax()).to_dict()
## get train accuracy on race
acc = np.mean([preds[race_ind]==true[race_ind] for race_ind in df_train.race_index.unique()])
acc
## get test matrix and predictions
X_test = features.transform(df_test[['tweets','replies','retweets','favorites']])
preds_test = clf.predict_proba(X_test)[:,1]
## make a plot
fig,ax = plt.subplots(1,1,figsize=(13,5))
plt.hist(preds_test[(df_test.winner==1).values],alpha=.5,
label='predictions for winners');
plt.hist(preds_test[(df_test.winner==0).values],alpha=.5,
label='predictions for non-winners');
plt.legend();
plt.title('Test Set Predictions');
## put the raw predictions in the test dataframe so we can use df.groupy
df_test['pred_raw'] = preds_test
## get dictionaries mappying race index to index of predicted and true winners, this time on test set
preds_test = df_test.groupby('race_index').pred_raw.apply(lambda x: x.idxmax()).to_dict()
true_test = df_test.groupby('race_index').winner.apply(lambda x: x.idxmax()).to_dict()
## get test accuracy on race level
acc = np.mean([preds_test[race_ind]==true_test[race_ind] for race_ind in df_test.race_index.unique()])
acc
df_test[~df_test.winner.astype(bool)].sort_values('pred_raw',ascending=False).head(1)
## take a look at 30 of the tweets for Sarah Lloyd
df_test[~df_test.winner.astype(bool)].sort_values('pred_raw',ascending=False).tweets.iloc[0][0:30]
## the race Lloyd lost
df_test[df_test.race_index==151]
print(race_metadata_2016.loc[151])
print(race_metadata_2016.loc[151].Result)
df_test[df_test.winner.astype(bool)].sort_values('pred_raw').head(1)
## this race
df_test[df_test.race_index==69]
print(race_metadata_2016.loc[69])
print(race_metadata_2016.loc[69].Result)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The code below is the same as in the last post, with a couple of modifications
Step2: Now, I will do the same model-building procedure as last time. There are three classes of features:
Step3: Looking at these feature importances a second time, it looks like the words chosen as important are proxies for whether the candidate is the incumbent. This makes sense, from the little that I know about politics.
Step4: Now let's test the model on the 2016 races to see how it performs. I will produce the features using the same function as earlier, make the predictions using the trained model and then make a plot and compute the accuracy.
Step5: 88% accuracy is not that bad, considering I used nothing but tweets and built in no prior knowledge.
Step6: So this looks like a race where Democratic enthusiasm (or Twitter activism) was high but the Republican won. It may also have a little to do with the fact that Sarah Lloyd is also the name of a British travel writer, but I am not sure.
|
195 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from matplotlib.ticker import MultipleLocator
%matplotlib notebook
def pixel_plot(pix, counts, fig=None, ax=None):
'''Make a pixelated 1D plot'''
if fig is None and ax is None:
fig, ax = plt.subplots()
ax.step(pix, counts,
where='post')
ax.set_xlabel('pixel number')
ax.set_ylabel('relative counts')
ax.xaxis.set_minor_locator(MultipleLocator(1))
ax.xaxis.set_major_locator(MultipleLocator(5))
fig.tight_layout()
return fig, ax
# Define your PSF function phi()
# It is sufficient to copy and paste from
# your introductionToBasicStellarPhotometry noteboook
def phi(x, mu, fwhm):
"""Evaluate the 1d PSF N(mu, sigma^2) along x
Parameters
----------
x : array-like of shape (n_pixels,)
detector pixel number
mu : float
mean position of the 1D star
fwhm : float
Full-width half-maximum of the stellar profile on the detector
Returns
-------
flux : array-like of shape (n_pixels,)
Flux in each pixel of the input array
"""
sigmaPerFwhm = 2*np.sqrt(2*np.log(2))
sigma = fwhm/sigmaPerFwhm
flux = norm.pdf(x, mu, sigma)
return flux
# Define your image simulation function to
# It is sufficient to copy and paste from
# your introductionToBasicStellarPhotometry noteboook
# Note that the background S should now be supplied as
# an array of length (S) or a constant.
def simulate(x, mu, fwhm, S, F):
"""simulate a noisy stellar signal
Parameters
----------
x : array-like
detector pixel number
mu : float
mean position of the 1D star
fwhm : float
Full-width half-maximum of the stellar profile on the detector
S : float or array-like of len(x)
Sky background for each pixel
F : float
Total stellar flux
Returns
-------
noisy_counts : array-like (same shape as x)
the (noisy) number of counts in each pixel
"""
signal = F * phi(x=x, mu=mu, fwhm=fwhm) + S
noise = np.random.normal(loc=0, scale=np.sqrt(signal))
noisy_counts = signal + noise
return noisy_counts
# simulate the star
x = np.arange(100)
mu = 35
S = 100
fwhm = 5
F = 500
fig = plt.figure(figsize=(8,4))
ax = plt.subplot()
sim_star = simulate(x, mu=mu, fwhm=fwhm, S=S, F=F)
pixel_plot(x, sim_star, fig=fig, ax=ax)
# plot and inspect histogram
fig = plt.figure(figsize=(6,4))
plt.hist(sim_star, bins=20)
plt.xlabel('image counts')
plt.ylabel('num pixels')
S_estimate = np.median(sim_star)
plt.axvline(S_estimate, color='red')
plt.axvline(np.mean(sim_star), color='orange')
print('My background estimate = {:.4f}'.format(S_estimate))
print('The mean pixel count = {:.4f}'.format(np.mean(sim_star)))
# plot your background model over the "image"
fig, ax = pixel_plot(x, sim_star)
pixel_plot(x, np.repeat(S_estimate, len(x)), fig=fig, ax=ax)
# Double check that your simulate function can take S optionally as array-like
# Create and plot the image with S = 3*x + 100
S = 3*x + 100
sim_star = simulate(x=x, mu=mu, fwhm=fwhm, S=S, F=F)
pixel_plot(x, sim_star)
# bin the image in 20-pixel bins
# complete
BIN_SIZE = 20
bins = np.arange(0, 100 + BIN_SIZE, BIN_SIZE)
bin_centers = 0.5 *(bins[0:-1] + bins[1:])
digitized = np.digitize(x, bins=bins)
bin_values = [np.median(sim_star[digitized == i]) for i in range(1, len(bins))]
# Fit the bin_values vs bin_centers with a 1st-order chebyshev polynomial
# Evaluate your model for the full image
# hint: look up np.polynomial.chebyshev.chebfit and np.polynomial.chebyshev.chebeval
coefficients = np.polynomial.chebyshev.chebfit(bin_centers, bin_values, 1)
bg = np.polynomial.chebyshev.chebval(x, coefficients)
# Replot the image:
fig, ax = pixel_plot(x, sim_star)
# binned values
ax.plot(bin_centers, bin_values, 'o')
# Overplot your background model:
ax.plot(x, bg, '-')
# Finally plot your background subtracted image:
fig, ax = pixel_plot(x, sim_star - bg)
SIGMA_PER_FWHM = 2*np.sqrt(2*np.log(2))
fwhm = 5
x = np.arange(100)
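# synthetic background: a broad sky component centred near pixel 50 plus two faint
# PSF-width neighbours at pixels 20 and 60 (i.e. blended sources)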
background = 1000*norm.pdf(x, 50, 18) + 100*norm.pdf(x, 20, fwhm/SIGMA_PER_FWHM) + 100*norm.pdf(x, 60, fwhm/SIGMA_PER_FWHM)
sim_star3 = simulate(x=x, mu=35, fwhm=fwhm, S=background, F=200)
fig, ax = pixel_plot(x, sim_star3)
BIN_SIZE = 10
bins = np.arange(0, 100 + BIN_SIZE, BIN_SIZE)
bin_centers = 0.5 *(bins[0:-1] + bins[1:])
print(bin_centers)
digitized = np.digitize(x, bins=bins)
bin_values = [np.median(sim_star3[digitized == i]) for i in range(1, len(bins))]
# overplot the binned esimtates:
fig, ax = pixel_plot(x, sim_star3)
ax.plot(bin_centers, bin_values, 'o')
fig, ax = pixel_plot(x, sim_star3)
ax.plot(bin_centers, bin_values, 'o')
coefficients = np.polynomial.chebyshev.chebfit(bin_centers, bin_values, 2)
bg = np.polynomial.chebyshev.chebval(x, coefficients)
ax.plot(x, bg, '-', label='order=2')
coefficients = np.polynomial.chebyshev.chebfit(bin_centers, bin_values, 3)
bg = np.polynomial.chebyshev.chebval(x, coefficients)
ax.plot(x, bg, '-', label='order=3')
ax.legend()
# Plot the background subtracted image
fig, ax = pixel_plot(x, sim_star3 - bg)
# set up simulation
x = np.arange(100)
mu = 35
S = 100
fwhm = 5
F = 300
fig = plt.figure(figsize=(8,4))
ax = plt.subplot()
sim_star = simulate(x, mu=mu, fwhm=fwhm, S=S, F=F)
# To simplify this pretend we know for sure the background = 100
# Plots the backround subtracted image
image = sim_star - 100
pixel_plot(x, image, fig=fig, ax=ax)
xx = np.arange(-7, 8)
kernel = phi(xx, mu=0, fwhm=5)
pixel_plot(xx, kernel)
print(xx)
print(kernel)
import scipy.signal
size = len(kernel)//2
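# cross-correlating the image with its own PSF acts as a matched filter, maximizing
# the S/N of isolated point sources (the kernel is symmetric, so convolution and
# correlation are equivalent here)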
detection_image = scipy.signal.convolve(image, kernel, mode='same')
# mode='same' pads then clips the padding This is the same as:
# size = len(kernel)//2
# scipy.signal.convolve(image, kernel)[size:-size]
# Note: pay attention to how scipy.signal.convolve handles the edges.
pixel_plot(x, detection_image)
print(len(scipy.signal.convolve(image, kernel, mode='full')))
print(len(scipy.signal.convolve(image, kernel, mode='same')))
print(len(scipy.signal.convolve(image, kernel, mode='valid')))
# Using a robust estimator for the detection image standard deviation,
# Compute the 5 sigma threshold
N_SIGMA = 5
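# for a Gaussian, the 30.85th and 69.15th percentiles sit at roughly -0.5 and +0.5 sigma,
# so their difference is a robust estimate of the standard deviation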
q1, q2 = np.percentile(detection_image, (30.85, 69.15))
std = q2 - q1
threshold_value = std * N_SIGMA
print('5 sigma threshold value is = {:.4f}'.format(threshold_value))
qq = scipy.stats.probplot(detection_image, dist="norm")
plt.plot(qq[0][0], qq[0][1])
plt.ylabel('value')
plt.xlabel('Normal quantiles')
# Just for fun to see what's going on:
fig, ax = pixel_plot(x, detection_image)
plt.axhline(threshold_value)
# complete
import scipy.ndimage   # needed below for binary_dilation
growBy = fwhm
mask = detection_image > threshold_value
print(np.count_nonzero(mask))
dilated_mask = scipy.ndimage.binary_dilation(mask, iterations=growBy)
print(np.count_nonzero(dilated_mask))
fig, ax = pixel_plot(x[dilated_mask], image[dilated_mask])
# easy aperture flux:
np.sum(image[dilated_mask])
# Copied from solutions of previous notebook
from scipy.optimize import minimize
psf = phi(x[dilated_mask], mu=35, fwhm=5)
im = image[dilated_mask]
# minimize the square of the residuals to determine flux
def sum_res(A, flux=im, model=psf):
return sum((flux - A*model)**2)
sim_star = simulate(x, mu, fwhm, S, F)
psf_flux = minimize(sum_res, 300, args=(im, psf))
print("The PSF flux is {:.3f}".format(psf_flux.x[0]))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Background Subtraction and Source Detection
Step3: Problem 1) Simple 1-D Background Estimation
Step4: Problem 1.2) Estimate the background as a ramp/line (order = 1)
Step5: Problem 1.3) Estimate a more realistic background (still in 1D)
Step6: 1.3.1) Bin the image. Plot the bin centers. What bin size did you pick?
Step7: 1.3.2) Spatially model the binned estimates (bin_values vs bin_centers) as a chebyshev polynomial.
Step8: 1.3.3) Subtract off the model and plot the "background-subtracted image."
Step9: And now you can see that this problem is fairly unrealistic as far as background subtraction goes and should probably be treated with a deblender. Typically in images, the background varies smoothly on scales much larger than any individual source, which is what makes it possible to separate it from the stars themselves.
Step10: 2.1) Make a kernel for the PSF.
Step11: 2.2) Correlate the image with the PSF kernel.
Step12: Answer to the question
Step13: The noise estimate is a little high, but not bad for the first iteration. In future iterations we will mask the footprints detected in the initial round, before recomputing.
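A minimal sketch of that next iteration (illustrative only; it reuses the arrays already defined in the code above):
noise_pixels = detection_image[~dilated_mask]            # exclude the detected footprint
q1, q2 = np.percentile(noise_pixels, (30.85, 69.15))
threshold_value = (q2 - q1) * N_SIGMA                    # re-threshold with the cleaner noise estimate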
Step14: 2.4) Dilate the footprint to provide a window or region for the point source.
|
196 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.ndimage as ndimage
img = ndimage.imread("noisy.png", flatten = True)
### BEGIN SOLUTION
### END SOLUTION
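# One possible shape for the graded function (a sketch only; the expected counts in the
# asserts below depend on the exact smoothing/labeling recipe used):
# def count_cells(img, sigma, threshold):
#     smoothed = ndimage.gaussian_filter(img, sigma)    # denoise the image
#     labeled, n = ndimage.label(smoothed > threshold)  # label connected bright regions
#     return n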
import scipy.ndimage as ndimage
img = ndimage.imread("noisy.png", flatten = True)
t1 = 30
s1 = 5
a1 = 33
assert a1 == count_cells(img, s1, t1)
img = ndimage.imread("noisy.png", flatten = True)
t2 = 30
s2 = 20
a2 = 21
assert a2 == count_cells(img, s2, t2)
img = ndimage.imread("noisy.png", flatten = True)
t3 = 100
s3 = 5
a3 = 97
assert a3 == count_cells(img, s3, t3)
img = ndimage.imread("noisy.png", flatten = True)
t4 = 100
s4 = 20
a4 = 21
assert a4 == count_cells(img, s4, t4)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part B
|
197 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
train_images.shape
len(train_labels)
train_labels
test_images.shape
len(test_labels)
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
train_images = train_images / 255.0
test_images = test_images / 255.0
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
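# from_logits=True because the final Dense layer outputs raw scores; a Softmax layer is
# attached later (probability_model) to convert them into probabilities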
model.fit(train_images, train_labels, epochs=10)
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)
predictions[0]
np.argmax(predictions[0])
test_labels[0]
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array, true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array, true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
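# mark the predicted class in red and the true class in blue
# (when the prediction is correct, blue is applied last and wins)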
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
# Grab an image from the test dataset.
img = test_images[1]
print(img.shape)
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
predictions_single = probability_model.predict(img)
print(predictions_single)
plot_value_array(1, predictions_single[0], test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
np.argmax(predictions_single[0])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic classification: classify images of clothing
Step2: Import the Fashion MNIST dataset
Step3: Loading the dataset returns four NumPy arrays:
Step4: Explore the data
Step5: Likewise, there are 60,000 labels in the training set:
Step6: Each label is an integer between 0 and 9:
Step7: There are 10,000 images in the test set. Again, each image is represented as 28x28 pixels:
Step8: The test set contains 10,000 image labels:
Step9: Preprocess the data
Step10: Scale these values to a range of 0 to 1 before feeding them to the neural network model. To do so, divide the values by 255. It is important that the training set and the test set are preprocessed in the same way:
Step11: To verify that the data is in the correct format and that you are ready to build and train the network, let's display the first 25 images from the training set and show the class name below each image.
Step12: Build the model
Step13: The first layer in this network, tf.keras.layers.Flatten, transforms the format of the images from a two-dimensional array (28 x 28 pixels) to a one-dimensional array (28 x 28 = 784 pixels). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.
Step14: Train the model
Step15: As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.91 (or 91%) on the training data.
Step16: It turns out the accuracy on the test dataset is a little lower than on the training dataset. This gap between training accuracy and test accuracy represents overfitting. Overfitting happens when a machine learning model performs worse on new, previously unseen inputs than it does on the training data. An overfitted model "memorizes" the noise and details in the training dataset in a way that negatively impacts its performance on new data. For more information, see the following:
Step17: In the example above, the model predicted the label for each image in the test set. Let's take a look at the first prediction:
Step18: A prediction is an array of 10 numbers that represent the model's "confidence" for each of the 10 different articles of clothing. You can see which label has the highest confidence value:
Step19: So the model is most confident that this image is an ankle boot, or class_names[9]. Examining the test label shows that this classification is correct:
Step20: You can graph this to look at the model's predictions over the full set of 10 classes.
Step21: Verify predictions
Step22: Let's plot a few images along with the model's predictions. Note that the model can be wrong even when it is very confident.
Step23: Use the trained model
Step24: tf.keras models are optimized to make predictions on a batch, or collection, of examples at once. Accordingly, even though you are using a single image, you need to add it to a list:
Step25: Now predict the correct label for this image:
Step26: keras.Model.predict returns a list of lists, one list for each image in the batch of data. Grab the prediction for our (only) image in the batch:
198 | <ASSISTANT_TASK:>
Python Code:
import os
import sys
# Modify the path
sys.path.append("..")
import pandas as pd
import yellowbrick as yb
import matplotlib.pyplot as plt
data = pd.read_csv("data/No-show-Issue-Comma-300k.csv")
data.head()
data.columns = ['Age','Gender','Appointment Registration','Appointment Date',
'Day Of Week','Status','Diabetes','Alcoholism','Hypertension','Handicap',
'Smoker','Scholarship','Tuberculosis','SMS Reminder','Awaiting Time']
data.describe()
features = ['Age','Gender','Appointment Registration','Appointment Date',
'Day Of Week','Diabetes','Alcoholism','Hypertension','Handicap',
'Smoker','Scholarship','Tuberculosis','SMS Reminder','Awaiting Time']
numerical_features = data.describe().columns.values
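# DataFrame.describe() only summarizes numeric columns by default,
# so its column index doubles as the list of numerical features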
# Feature Analysis Imports
# NOTE that all these are available for import from the `yellowbrick.features` module
from yellowbrick.features.rankd import Rank2D
from yellowbrick.features.radviz import RadViz
from yellowbrick.features.pcoords import ParallelCoordinates
# To help interpret the column features being described in the visualization
pd.DataFrame(numerical_features)
# For this visualizer numerical features are required
X = data[numerical_features].values
y = data.Status.values
# Instantiate the visualizer with the Covariance ranking algorithm
visualizer = Rank2D(features=numerical_features, algorithm='covariance')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
# Instantiate the visualizer with the Pearson ranking algorithm
visualizer = Rank2D(features=numerical_features, algorithm='pearson')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
#Need to specify the classes of interest
classes = data.Status.unique().tolist()
# For this visualizer numerical features are required
X = data[numerical_features].values
# Additional step here of converting categorical data 0's and 1's
y = data.Status.replace(classes,[0,1]).values
# Instantiate the visualizer
visualizer = RadViz(classes=classes, features=numerical_features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
# Instantiate the visualizer
visualizer = ParallelCoordinates(classes=classes, features=numerical_features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
# Classifier Evaluation Imports
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport, ROCAUC, ClassBalance
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate the classification model and visualizer
bayes = GaussianNB()
visualizer = ClassificationReport(bayes, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
# Instantiate the classification model and visualizer
logistic = LogisticRegression()
visualizer = ROCAUC(logistic)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
# Instantiate the classification model and visualizer
forest = RandomForestClassifier()
visualizer = ClassBalance(forest, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Medical Appointment Data
Step2: Feature Analysis
Step3: Rank2D
Step4: Diagnostic Interpretation from Rank2D(Covariance)
Step5: Diagnostic Interpretation from Rank2D(Pearson)
Step6: For regression, the RadViz visualizer should use a color sequence to display the target information, as opposed to discrete colors.
Step7: Classifier Evaluation
Step8: Classification Report
Step9: ROCAUC
Step10: ClassBalance
|
199 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import load_linnerud
linnerud = load_linnerud()
chinups = linnerud.data[:,0]
plt.hist(chinups, histtype = "step", lw = 3)
plt.hist(chinups, bins = 5, histtype="step", lw = 3)
plt.hist(chinups, align = "left", histtype="step", lw = 3)
bins = np.append(np.sort(chinups)[::5], np.max(chinups))
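# bin edges taken from every 5th sorted measurement, so each bin holds
# (roughly) the same number of points rather than spanning equal widths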
plt.hist(chinups, bins = bins, histtype = "step", density = True, lw = 3)
plt.hist(chinups, histtype = 'step')
# this is the code for the rug plot
plt.plot(chinups, np.zeros_like(chinups), '|', color='k', ms = 25, mew = 4)
# execute this cell
from sklearn.neighbors import KernelDensity
def kde_sklearn(data, grid, bandwidth = 1.0, **kwargs):
kde_skl = KernelDensity(bandwidth = bandwidth, **kwargs)
kde_skl.fit(data[:, np.newaxis])
log_pdf = kde_skl.score_samples(grid[:, np.newaxis]) # sklearn returns log(density)
return np.exp(log_pdf)
grid = np.arange(0 + 1e-4,20,0.01)
PDFtophat = kde_sklearn(chinups, grid, bandwidth = 0.1, kernel = 'tophat')
plt.plot(grid, PDFtophat)
PDFtophat1 = kde_sklearn(chinups, grid, bandwidth = 1, kernel = 'tophat')
plt.plot(grid, PDFtophat1, 'MediumAquaMarine', lw = 3, label = "bw = 1")
PDFtophat5 = kde_sklearn(chinups, grid, bandwidth = 5, kernel = 'tophat')
plt.plot(grid, PDFtophat5, 'Tomato', lw = 3, label = "bw = 5")
plt.legend()
PDFgaussian = kde_sklearn(chinups, grid, bandwidth = 1, kernel = 'gaussian')
plt.plot(grid, PDFgaussian, 'DarkOrange', lw = 3, label = "gaussian")
PDFepanechnikov = kde_sklearn(chinups, grid, bandwidth = 2, kernel = 'epanechnikov')
plt.plot(grid, PDFepanechnikov, 'SlateGrey', lw = 3, label = "epanechnikov")
plt.legend(loc = 2)
x = np.arange(0, 6*np.pi, 0.1)
y = np.cos(x)
plt.plot(x,y, lw = 2)
plt.xlabel('X')
plt.ylabel('Y')
plt.xlim(0, 6*np.pi)
import seaborn as sns
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x,y, lw = 2)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_xlim(0, 6*np.pi)
sns.set_style("ticks")
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x,y, lw = 2)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_xlim(0, 6*np.pi)
# default color palette
current_palette = sns.color_palette()
sns.palplot(current_palette)
# set palette to colorblind
sns.set_palette("colorblind")
current_palette = sns.color_palette()
sns.palplot(current_palette)
iris = sns.load_dataset("iris")
iris
# note - hist, kde, and rug all set to True, set to False to turn them off
with sns.axes_style("dark"):
sns.distplot(iris['petal_length'], bins=20, hist=True, kde=True, rug=True)
plt.scatter(iris['petal_length'], iris['petal_width'])
plt.xlabel("petal length (cm)")
plt.ylabel("petal width (cm)")
with sns.axes_style("darkgrid"):
xexample = np.random.normal(loc = 0.2, scale = 1.1, size = 10000)
yexample = np.random.normal(loc = -0.1, scale = 0.9, size = 10000)
plt.scatter(xexample, yexample)
# hexbin w/ bins = "log" returns the log of counts/bin
# mincnt = 1 displays only hexpix with at least 1 source present
with sns.axes_style("darkgrid"):
plt.hexbin(xexample, yexample, bins = "log", cmap = "viridis", mincnt = 1)
plt.colorbar()
with sns.axes_style("darkgrid"):
sns.kdeplot(xexample, yexample,shade=False)
sns.jointplot(x=iris['petal_length'], y=iris['petal_width'])
sns.jointplot(x=iris['petal_length'], y=iris['petal_width'], kind = 'kde', shade = 'False')
sns.pairplot(iris[["sepal_length", "sepal_width", "petal_length", "petal_width"]])
sns.pairplot(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
hue = "species", diag_kind = 'kde')
g = sns.PairGrid(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
hue = "species", diag_sharey=False)
g.map_lower(sns.kdeplot)
g.map_upper(plt.scatter, edgecolor='white')
g.map_diag(sns.kdeplot, lw=3)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 1) Density Estimation
Step2: Problem 1a
Step3: Already with this simple plot we see a problem - the choice of bin centers and number of bins suggests that there is a 0% probability that middle-aged men can do 10 chinups. Intuitively this seems incorrect, so let's examine how the histogram changes if we change the number of bins or the bin centers.
Step4: These small changes significantly change the output PDF. With fewer bins we get something closer to a continuous distribution, while shifting the bin centers reduces the probability to zero at 9 chinups.
Step5: Ending the lie
Step6: Of course, even rug plots are not a perfect solution. Many of the chinup measurements are repeated, and those instances cannot be easily isolated above. One (slightly) better solution is to vary the transparency of the rug "whiskers" using alpha = 0.3 in the whiskers plot call. But this too is far from perfect.
Step7: Problem 1e
Step8: In this representation, each "block" has a height of 0.25. The bandwidth is too narrow to provide any overlap between the blocks. This choice of kernel and bandwidth produces an estimate that is essentially a histogram with a large number of bins. It gives no sense of continuity for the distribution. Now, we examine the difference (relative to histograms) upon changing the width (i.e. kernel) of the blocks.
Step9: It turns out blocks are not an ideal representation for continuous data (see discussion on histograms above). Now we will explore the resulting PDF from other kernels.
Step10: So, what is the optimal choice of bandwidth and kernel? Unfortunately, there is no hard and fast rule, as every problem will likely have a different optimization. Typically, the choice of bandwidth is far more important than the choice of kernel. In the case where the PDF is likely to be gaussian (or close to gaussian), then Silverman's rule of thumb can be used
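For reference, a minimal sketch of that rule of thumb applied to the chinups data used above (bandwidth h = 0.9 * min(sigma, IQR/1.34) * n**(-1/5); illustrative only, not part of the original notebook):
iqr = np.percentile(chinups, 75) - np.percentile(chinups, 25)
h_silverman = 0.9 * min(np.std(chinups), iqr/1.34) * len(chinups)**(-0.2)
PDFsilverman = kde_sklearn(chinups, grid, bandwidth = h_silverman, kernel = 'gaussian')
plt.plot(grid, PDFsilverman, lw = 3, label = 'Silverman')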
Step11: Seaborn
Step12: These plots look identical, but it is possible to change the style with seaborn.
Step13: The folks behind seaborn have thought a lot about color palettes, which is a good thing. Remember - the choice of color for plots is one of the most essential aspects of visualization. A poor choice of colors can easily mask interesting patterns or suggest structure that is not real. To learn more about what is available, see the seaborn color tutorial.
Step14: which we will now change to colorblind, which is clearer to those that are colorblind.
Step15: Now that we have covered the basics of seaborn (and the above examples truly only scratch the surface of what is possible), we will explore the power of seaborn for higher dimension data sets. We will load the famous Iris data set, which measures 4 different features of 3 different types of Iris flowers. There are 150 different flowers in the data set.
Step16: Now that we have a sense of the data structure, it is useful to examine the distribution of features. Above, we went to great pains to produce histograms, KDEs, and rug plots. seaborn handles all of that effortlessly with the distplot function.
Step17: Of course, this data set lives in a 4D space, so plotting more than univariate distributions is important (and as we will see tomorrow this is particularly useful for visualizing classification results). Fortunately, seaborn makes it very easy to produce handy summary plots.
Step18: Of course, when there are many many data points, scatter plots become difficult to interpret. As in the example below
Step19: Here, we see that there are many points, clustered about the origin, but we have no sense of the underlying density of the distribution. 2D histograms, such as plt.hist2d(), can alleviate this problem. I prefer to use plt.hexbin() which is a little easier on the eyes (though note - these histograms are just as subject to the same issues discussed above).
Step20: While the above plot provides a significant improvement over the scatter plot by providing a better sense of the density near the center of the distribution, the binedge effects are clearly present. An even better solution, like before, is a density estimate, which is easily built into seaborn via the kdeplot function.
Step21: This plot is much more appealing (and informative) than the previous two. For the first time we can clearly see that the distribution is not actually centered on the origin. Now we will move back to the Iris data set.
Step22: But! Histograms and scatter plots can be problematic as we have discussed many times before.
Step23: That is much nicer than what was presented above. However - we still have a problem in that our data live in 4D, but we are (mostly) limited to 2D projections of that data. One way around this is via the seaborn version of a pairplot, which plots the distribution of every variable in the data set against each other. (Here is where the integration with pandas DataFrames becomes so powerful.)
Step24: For data sets where we have classification labels, we can even color the various points using the hue option, and produce KDEs along the diagonal with diag_type = 'kde'.
Step25: Even better - there is an option to create a PairGrid which allows fine tuned control of the data as displayed above, below, and along the diagonal. In this way it becomes possible to avoid having symmetric redundancy, which is not all that informative. In the example below, we will show scatter plots and contour plots simultaneously.
|