Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k) |
---|---|---|
700 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
26/May/2016
Step1: What's going on in cells 11 & 14 ? | Python Code:
import functools
def hello_doctor(greet_msg, greet_whom):
return "%s %s, Welcome to the world of robots." %(greet_msg, greet_whom)
## this line works: bind "R2-D2" as the greeting message so the calls below succeed
hello_doctor = functools.partial(hello_doctor, "R2-D2")
hello_doctor("Dr.Susan Calvin")
Explanation: 26/May/2016:
Hey, what's this 'functools' thingy, huh!?
End of explanation
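For quick context before the cells below: functools.partial(func, *args, **kwargs) returns a new callable with some of func's arguments pre-filled. A small self-contained sketch (the greet/say_hello names are ours, for illustration only):
import functools

def greet(greeting, name):
    return "%s, %s!" % (greeting, name)

# Bind the first positional argument; the new callable only needs `name`.
say_hello = functools.partial(greet, "Hello")
print(say_hello("Dr. Susan Calvin"))   # Hello, Dr. Susan Calvin!

# Keyword arguments can be pre-filled as well.
say_hi = functools.partial(greet, greeting="Hi")
print(say_hi(name="R2-D2"))            # Hi, R2-D2!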
greet = functools.partial(hello_doctor)
greet("Dr.Susan Calvin")
welcome = functools.partial(hello_doctor)
welcome("Dr.Susan Calvin")
def numpower(base, exponent):
    return base ** exponent
def square(base):
    return numpower(base, 2)
def cube(base):
    return numpower(base, 3)
print(square(25))
print(cube(15))
Explanation: What's going on in cells 11 & 14 ?
End of explanation |
701 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introductory Cheminformatics Course, Beginner Level
Python Basics, September 30, 2017
Instructor: Ryuichi Kubo (DeNA Life Science, Inc.)
WHO ARE YOU??
<img width="309" alt="2017-09-06 16 05 42" src="https://user-images.githubusercontent.com/7918702/30098549-5013838e-931d-11e7-8b9c-01dd1a30ccfd.png">
Step1: Destructive assignment
Reassigning to a variable that already holds a value replaces it with the new value (destructive assignment)
Step2: Operators
| Operation | Symbol |
| ----- | ----- |
| Division | / |
| Division (floor; fractional part discarded) | // |
| Addition | + |
| Subtraction | - |
| Multiplication | * |
| Exponentiation | ** |
| Remainder (modulo) | % |
Division
Step3: Division (floor; fractional part discarded)
Step4: Addition
Step5: Subtraction
Step6: Multiplication
Step7: Exponentiation
Step8: Remainder
Step9: Types
Python has many types; here we introduce the standard ones
The official documentation covers the rest in detail
- Numeric type | int
- String (text sequence) type | str
- Sequence types | tuple, list
- Mapping type | dict
- Use type() to check a value's type
Numeric type | int
Integers use int()
Integer literals are treated as int as-is
There are also float() and complex()
Step10: String (text sequence) type | str
Enclosing text in single quotes ' or double quotes " makes a string
By convention, single quotes are used most of the time
Step11: Sequence type | tuple
Enclosing items in parentheses () makes a tuple
- The contents of a tuple cannot be changed
- Elements are accessed by index (starting from 0!)
Step12: Sequence type | list
Enclosing items in brackets [] makes a list
- The contents of a list can be changed
- Elements are accessed by index (starting from 0!)
Step13: Making full use of list
Functions built into a type = methods
- .append() adds an element
- .pop() removes and returns elements from the end
Step14: Mapping type | dict
Enclosing key-value pairs in braces {} makes a dictionary
Step15: Making full use of dict
.keys() returns the keys
.values() returns the values
.items() returns the key-value pairs as tuples
Step16: Built-in functions
There are not that many built-in functions, and print() is about the only one you will use constantly
2. Built-in Functions — Python 3.5.3 documentation
print()
A function that prints a string to the command line
Step17: Summary so far
Operators are almost the same as the usual math symbols
Values can be assigned to variables
Tuple; (1, 2, 3)
List; [1, 2, 3]
Dictionary; {'first': 1, 'second': 2, 'third': 3}
Step18: The while statement
Repeats a block as long as the given condition is true (True)
Step19: The for statement
Repeats a block a specified number of times
Step20: FizzBuzz in Python
Print the numbers 1 to 100 in order
For multiples of 3, print Fizz instead of the number
For multiples of 5, print Buzz instead of the number
For common multiples of 3 and 5, print FizzBuzz instead of the number
Step21: Defining functions
Wrapping each piece of functionality in a function makes code easier to read and convenient to reuse
Snake case (words joined with _ ) is recommended for function names
Step22: 💡TRY!!💡 Write a function (5 min.)
Requirements
The function is named is_fizzbuzz()
It takes a single number as its argument
It applies the FizzBuzz rules to the given number and returns the resulting string (Fizz or Buzz or FizzBuzz)
If none of the rules apply, it returns False
Using modules
One advantage of using Python for scientific computing is its rich set of numerical libraries
Modules are installed with the pip command (not from a Python script); we will not run it this time because we are using a Docker environment
Python also ships with many modules in its standard library
The Python Standard Library — Python 3.6.1 documentation
Load a module with import ${module_name}
The import this we ran at the beginning works the same way
Getting the working directory path | the os module
A module for interacting with the operating system
16.1. os — Miscellaneous operating system interfaces — Python 3.6.1 documentation
Step23: Trying out numpy
A library for matrix computation
Loops written with for statements are slow, so handling the work as matrix operations can bring a large efficiency gain
Quickstart tutorial — NumPy v1.13 Manual
Step24: Various matrices
Step25: Matrix arithmetic
Step26: Accessing matrix elements
The 1-D case
Step27: The 2-D case
Step28: Is numpy really fast?
Step29: 💡TRY!!💡 View a function's help inside the Jupyter notebook
A function we have not seen before, np.zeros(), just appeared
Run ?np.zeros() in a new cell
In closing
When you do not know something, search Google!!
Check your work in small steps
Make use of the help!!
💡TRY!!💡 more
If you want more fun with Python, give this a try
Speed it up by rewriting with a list comprehension
We defined a function square() that squares each number in the input list and returns the results.
Try rewriting this function with a list comprehension so it runs faster.
→ Search the web for list comprehensions!!
Start by copying the cell, and change only the body of the function without changing how it is run | Python Code:
a = 1
a
Explanation: ケモインフォマティクス入門講座初級編
Python基礎 2017年9月30日
講師:久保 竜一(株式会社DeNAライフサイエンス)
WHO ARE YOU??
<img width="309" alt="2017-09-06 16 05 42" src="https://user-images.githubusercontent.com/7918702/30098549-5013838e-931d-11e7-8b9c-01dd1a30ccfd.png">
一般的には
@kubor_
ソーシャル創薬美少女
いろんなイベントやってます
Genome Biz Meetup 2017
シェル芸勉強会 meets バイオインフォマティクス vol.1
会社では
株式会社DeNAライフサイエンス
MYCODE
ゲノムデータ解析をする人
バイオインフォマティクスエンジニアとしてSNPアレイデータ、NGSデータを始めとするゲノムデータの解析
ケモインフォマティクスの世界では
RDKit日本ユーザー会を主催
rdkit-users.jp
創薬ちゃんの化合物描画機能の実装
創薬ちゃん on Twitter
創薬ちゃんの中の人ではないし、中の人などいません
Pythonのすごい人なの?
そんなことないです!
Python歴は3年弱
それまではPerlとRを1年
そのまた前は植物育種をしていました
今日やること
Pythonの紹介
Jupyterノートブックの使い方
Pythonの基本文法の確認
Numpy, Pandasの使い方
Pythonというプログラミング言語について
1991年、グイド・ヴァンロッサムによって作られた、動的型付け言語
書きやすさと読みやすさから主に海外で人気が高く、Googleでは社内のメイン開発言語として採用されていることが有名
最近ではディープラーニングに用いられることが多い
ヴァンロッサムのウェブサイト https://gvanrossum.github.io/
Pythonの採用例
Pythonが使われているいろいろなもの
Dropbox(クラウドストレージサービス)
Tensorflow(深層学習フレームワーク)
Instagram(写真投稿SNS)
Pythonの元ネタ
Pythonの元ネタはイギリスのコメディグループ「モンティ・パイソン」。
「死んだオウム」
本講座ではPython3を使います!
Pythonのメジャーバージョンはいま2と3が混在している
最新の3を使えば問題はないが、一部、2系でしか使えないモジュールがあるため、目的に応じて2系を使う必要のある場面もある
例えば、バイオ系だがChIP-seq用の解析ソフトであるMACSは2系にしか対応していない
taoliu/MACS: MACS -- Model-based Analysis of ChIP-Seq
Python基礎ハンズオン
ここからPythonの基礎文法などをハンズオンで駆け抜けていきます
💡受講のポイント💡
Jupyter notebook の 使い方をはじめに解説しますので良く確認しておいてください
わからないことは積極的にTA・講師に聞いてください
#chemo_wakate タグを付けてのSNS発信を推奨します!!(特にTwitter)
Google検索の積極的活用を推奨します!!コピペOK!!楽しましょう!!
TA・講師の言っていることが必ずしも正しいとは限りません
基本的には解説とサンプルコードがセットになっているのでセルを選択して実行(<kbd>Ctrl</kbd> + <kbd>Enter</kbd>)していくだけで流れを追うことができます
Jupyter notebook を起動しよう
git clone した chemo-wakate/tutorial-6th ディレクトリで以下のコマンドを実行して起動
-v で、起動するDockerコンテナにホストマシンのディレクトリをマウントすることができます。
-v <font color="#ff0000">マウントしたいホストマシンのディレクトリ</font>:<font color="#0000ff">Dockerコンテナ内のマウントしたい位置</font>
💡TRY!!💡
docker run --rm -it -p 8888:8888 \
-v <font color="#ff0000">~/chemo-wakate/tutorial-6th/</font>:<font color="#0000ff">/home/jovyan/chemo</font> \
chemo-wakate/tutorial-6th
ヘルプを確認してみよう
💡TRY!!💡
<kbd>h</kbd> を入力して、ヘルプメッセージを確認しましょう。
うまく表示されない場合は <kbd>Esc</kbd>を入力してから再度<kbd>h</kbd> を入力してみてください。
重要な操作方法のまとめ
<kbd>h</kbd>: ヘルプを表示する
<kbd>Esc</kbd>: コマンドモードに移行する(セルの枠が青)
<kbd>Enter</kbd>: 編集モードに移行する(セルの枠が緑)
コマンドモードで、<kbd>a</kbd>: ひとつ上に空のセルを挿入
コマンドモードで、<kbd>b</kbd>: ひとつ下に空のセルを挿入
コマンドモードで、<kbd>j</kbd> or <kbd>k</kbd>: セルを上下に移動
コマンドモードで、<kbd>d</kbd><kbd>d</kbd>: セルを削除
<kbd>Ctrl</kbd>+<kbd>Enter</kbd>: セルの内容を実行
Zen of Python
Pythonの禅を知る
Pythonを書く覚悟を決めるべし
💡TRY!!💡
新しいセルを作成して、 import this と入力して実行しよう
変数
動的型付け
変数名はスネークケース(snake_case)
宣言不要
破壊的に代入される
代入
= 演算子で変数に値を代入
= の前後には半角スペースを入れる
End of explanation
a = 1
a = 5
a
Explanation: 破壊的代入
既に代入済みの変数に再代入すると新しい値に置き換わる(破壊的代入)
End of explanation
2296 / 3
Explanation: 演算子
| 算法 | 記号 |
| ----- | ----- |
| 除算 | / |
| 除算(小数点以下切り捨て)| //|
| 加算 | + |
| 減算 | - |
| 乗算 | * |
| べき乗 | ** |
| 剰余 | % |
除算
End of explanation
2296 // 3
Explanation: 除算(小数点以下切り捨て)
End of explanation
123 + 223
Explanation: 加算
End of explanation
33 - 4
Explanation: 減算
End of explanation
2 * 8
Explanation: 乗算
End of explanation
2 ** 8
Explanation: べき乗
End of explanation
14444 % 32
Explanation: 剰余
End of explanation
## 数値
type(123)
Explanation: 型
Pythonには複数の型が存在しますが、標準的なものを紹介する
これ以外のものは公式ドキュメントが詳しい
- 数値型|int
- 文字列(テキストシーケンス)型|Str
- シーケンス型|tuple, list
- マッピング型|dict
- 型を調べるには type() を使用する
数値型|int
整数は int()
整数値はそのまま int として扱われる
他にfloat() complex() もある
End of explanation
## 文字列
type('123')
Explanation: 文字列(テキストシーケンス)型|Str
シングルクォート ' もしくはダブルクォート " で囲うと文字列になる
慣例的にシングルクォートを使用することが多い
End of explanation
nums = (1, 2, 3)
nums
nums[0]
## 0番目の要素を変更してみるけどエラーになる
nums[0] = 5
Explanation: シーケンス型|tuple
パーレン () で囲うとタプルとなる
- タプルの中身は変更することができない
- 要素には添字(0始まり!)でアクセス
End of explanation
elements = ['H', 'C', 'O']
elements
elements[0]
## 中身を変更
elements[0] = 'N'
elements
Explanation: シーケンス型|list
ブラケット [] で囲うとリストとなる
- リストの中身は変更可能
- 要素には添字(0始まり!)でアクセス
End of explanation
elements.append('Na')
elements
elements.pop()
elements
Explanation: listを使いこなす
型に組み込まれた関数 = メソッド
- .append() で要素を追加
- .pop() で要素を後ろから取り出す
End of explanation
element_d = {'H': 1, 'C': 15, 'O': 16}
element_d
## key(キー)でvalue(値)にアクセス
element_d['C']
## 値を変更
element_d['C'] = 12
element_d['C']
## 存在しないキーを指定して代入すると項目を追加できる
element_d['N'] = 14
element_d
## キーにタプルを使用可能
element_d[('C', 'C')] = 24
element_d
## キーにリストは使えない(リストは値の中身を入れ替えられるので確実にキーと値を紐付けられず安全じゃない)
element_d[['C', 'C']] = 24
element_d
Explanation: マッピング型|dict
ブレース {} で囲い、key と value を指定すると辞書となる
End of explanation
element_d.keys()
element_d.values()
element_d.items()
Explanation: dictを使いこなす
.keys() でキーのリストを取得
.values() で値のリストを取得
.items() でキーと値のタプルのリストを取得
End of explanation
print('Hello World!!')
## print関数の中で演算もできる
greeting = 'Hello World!!'
print(greeting*1000)
Explanation: 組み込み関数
組み込み関数はそんなに多くない上によく使うのが print() ぐらい
2. 組み込み関数 — Python 3.5.3 ドキュメント
print()
文字列をコマンドライン上に出力する関数
End of explanation
v = 100
if v > 10:
print(v, 'is greater than 10')
v = 5
if v > 10:
print(v, 'is greater than 10')
else:
print(v, 'is less than 10')
v = 10
if v == 10:
print(v, 'is equal to 10')
elif v > 10:
print(v, 'is greater than 10')
else:
print(v, 'is less than 10')
Explanation: ここまでのまとめ
演算子は数学記号とほとんど同じ
変数に値を代入できる
タプル; (1, 2, 3)
リスト; [1, 2, 3]
辞書; {'first': 1, 'second': 2, 'third': 3}
print()
様々なプログラムの制御方法
if : 条件分岐
while : 繰り返し処理
for : 繰り返し処理
if文
Perlなどと異なり、if文の節が {} に囲まれていないことに注目
そのかわり、インデントを揃えることでif文のブロック(インデントブロック)を明示している
Pythonにおいてインデントブロックは、読みやすくするだけではなく、プログラミング上で意味のあるものとなっていることに注目
インデントの方法は様々だがPythonにおいては 半角スペース4文字 が推奨されている
Jupyter notebookでは<kbd>TAB</kbd>によって半角スペース4つが挿入される
<kbd>Shift</kbd>+<kbd>TAB</kbd>でde-indent
End of explanation
## 100以下のフィボナッチ数列を計算してみる
a, b = 0, 1
while b < 100:
print(b)
a, b = b, a+b
Explanation: while文
指定した条件文が真(True)である限り処理を繰り返す
End of explanation
## リストに保存された文字列の長さを出力してみる
## - len() は要素の数を返す関数
## - 文字列型は一文字が一つの要素として扱われるので、文字数が返ってくる
words = ['cat', 'window', 'defenestrate']
for word in words:
print(word, len(word))
## リストに保存された文字列の長さを出力してみる(index番号もほしい!)
## - len() は要素の数を返す関数
## - 文字列型は一文字が一つの要素として扱われるので、文字数が返ってくる
## - enumerate()を使うと添字も取得できます
words = ['cat', 'window', 'defenestrate']
for index, word in enumerate(words):
print(index, word, len(word))
## PythonではC言語風の書き方は存在しないのでこの処理をn回繰り返すといったときはこう書く
for i in range(5):
print(i)
## 途中で処理を止める | break
for i in range(10):
print(i)
if i >= 5:
break
## (ループの先頭に戻って)処理を続ける | continue
for i in range(20):
print(i)
if i >= 5:
continue
print('\n5未満だけ実行される\n')
Explanation: for文
指定した回数処理を繰り返す
End of explanation
for i in range(1, 101):
if i % 15 == 0:
print("FizzBuzz")
elif i % 3 == 0:
print("Fizz")
elif i % 5 == 0:
print("Buzz")
else:
print(i)
Explanation: PythonでFizzBuzz
1から100の数字を順に出力
3の倍数のときは数字ではなく Fizz と出力
5の倍数のときは数字ではなく Buzz と出力
3と5の公倍数のときは数字ではなく FizzBuzz と出力
End of explanation
## 足し算をする関数
def add_num(a, b):
return a + b
# 実行してみる
add_num(32, 40)
Explanation: 関数定義
機能単位で関数化してしまうと読みやすく、再利用の際も便利
関数名はスネークケース( _ でつなぐ)を推奨
End of explanation
import os
os.getcwd()
Explanation: 💡TRY!!💡 関数を作ってみる(5 min.)
要件定義
関数の名前は、 is_fizzbuzz()
数値を1つ引数に指定可能
引数で与えられた数値に対してFizzBuzzを判定して判定結果の文字列(Fizz or Buzz or FizzBuzz)を返す
いずれにも当てはまらない値の場合は False を返す
モジュールの使用
Pythonを科学技術計算で使用するメリットのひとつは計算用ライブラリが豊富な点
(Pythonスクリプトではなく) pip コマンドでモジュールをインストール(今回はDocker環境を使っているので実行しない)
Pythonには標準でもたくさんモジュールが入っている
Python 標準ライブラリ — Python 3.6.1 ドキュメント
import ${module_name} でモジュールを読み込む
冒頭でやった import this も同じ
作業ディレクトリのパスを取得する|OSモジュール
osの操作をするためのモジュール
16.1. os — 雑多なオペレーティングシステムインタフェース — Python 3.6.1 ドキュメント
End of explanation
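A possible implementation of the is_fizzbuzz() function specified in the TRY exercise above (one reading of the requirements, not an official solution):
def is_fizzbuzz(num):
    """Return 'Fizz', 'Buzz' or 'FizzBuzz' for num, or False if no rule applies."""
    if num % 15 == 0:
        return 'FizzBuzz'
    elif num % 3 == 0:
        return 'Fizz'
    elif num % 5 == 0:
        return 'Buzz'
    return False

print(is_fizzbuzz(15), is_fizzbuzz(9), is_fizzbuzz(10), is_fizzbuzz(7))  # FizzBuzz Fizz Buzz False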
import numpy as np # 慣例的に as np として省略した名前でimportすることが多い
a = np.array([1, 2, 3]) # numpyアレイを作成
a
a.dtype
b = np.array([1.2, 3.5, 5.1])
b
b.dtype
Explanation: numpyを使ってみる
行列計算のためのライブラリ
for文などによる繰り返し処理は実行時間がかかるので、行列計算でうまく処理すると大幅な効率改善につながる
Quickstart tutorial — NumPy v1.13 Manual
End of explanation
np.arange(6) # 1次元
np.arange(12).reshape(4, 3) # 2次元
np.arange(24).reshape(2, 3, 4) # 3次元
np.random.rand(10, 10) # 10x10 の乱数行列
Explanation: 様々な行列
End of explanation
## 引き算
a = np.array([20, 30, 40, 50])
b = np.arange(4)
a - b
a * b # 掛け算
Explanation: 行列の計算
End of explanation
a = np.random.rand(20)
a
# 2番目の要素を取り出す
a[1]
# 4番目から8番目の要素を取り出す
a[4:9]
# 逆順で取り出す
a[::-1]
Explanation: 行列へのアクセス
1次元の場合
End of explanation
a = np.random.rand(10, 3)
a
a[0][0] # 1番目の配列の1番目の要素
a[0, 0] # 1番目の配列の1番目の要素(書き方違い)
a[0:5, 0] # 1番目から5番目の配列の1番目の要素
Explanation: 2次元の場合
End of explanation
def prod_2d(n=100):
a = np.random.rand(n, n)
b = np.random.rand(n, n)
c = np.zeros((n, n)) # zero 行列
for i in range(n):
for j in range(n):
for k in range(n):
c[i][j] = a[i][k] * b[k][j]
return c
def prod_2d_np(n=100):
a = np.random.rand(n, n)
b = np.random.rand(n, n)
return np.dot(a, b)
# numpy使わずに計算した場合
%timeit result = prod_2d(n=150)
# numpyを使って計算した場合
%timeit result = prod_2d_np(n=150)
Explanation: numpy ほんとに速いの?
End of explanation
import numpy as np # 乱数の生成に使用
np.random.seed(123) # 毎回同じ乱数が生成されるようにシード値を固定
def square(nums):
results = []
for num in nums:
results.append(num ** 2)
return results
%timeit square(np.random.rand(1000000))
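For reference when attempting the list-comprehension TRY exercise described just below, one possible rewrite (a sketch; the timing call mirrors the cell above):
def square_listcomp(nums):
    return [num ** 2 for num in nums]

%timeit square_listcomp(np.random.rand(1000000))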
Explanation: 💡TRY!!💡Jupyter notebook上で関数のヘルプを見る
np.zeros() って言う知らない関数が出てきたんだけど
新しいセルで ?np.zeros() を実行してみる
さいごに
わからないことはGoogleで検索!!
スモールステップで動作確認
ヘルプを活用!!
💡TRY!!💡 more
もっとPythonで楽しみたい方はトライしてみてください
リスト内包表記で書き換えて高速化
入力したリストの数値をそれぞれ2乗して返してくれる関数 square() を定義しました。
この関数を リスト内包表記 でもっと高速に処理するよう書き換えてみてください。
→ リスト内包表記はネット検索!!
まずはセルをコピーして、実行方法は変えずに関数の中身だけ変えてみてください
End of explanation |
702 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='#A52A2A'>Enhancing ZOS API using PyZOS library</font>
Step1: <font color='#008000'>Reference</font>
We will use the following two Zemax knowledgebase articles as base material for this discussion. Especially, the code from the first article is used to compare and illustrate the enhanced features of PyZOS library.
"How to build and optimize a singlet using ZOS-API with Python," Zemax KB, 12/16/2015, Thomas Aumeyr.
"Interfacing to OpticStudio from Mathematica," Zemax KB, 05/03/2015, David.
Step2: <font color='#008000'>Enhancements and capabilities provided by PyZOS</font>
Visible user-interface for the headless standalone ZOS-API COM application
Single-step initialization of ZOS API Interface and instantiation of an Optical System
Introspection of properties of ZOS objects on Tab press
Ability to override methods of existing ZOS objects
Tab completion and introspection of API constants
Ability to add custom methods and properties to any ZOS-API object
<font color='#005078'>1. Visible user-interface for the headless standalone ZOS-API COM application</font>
The PyZOS library provides three functions (instance methods of OpticalSystem) for the sync-with-ui mechanism.
The sync-with-ui mechanism may be turned on during the instantiation of the OpticalSystem object using the parameter sync_ui set to True (see the screen-shot video below) or using the method zSyncWithUI() of the OpticalSystem instance at other times.
The other two functions
zPushLens() -- for copying a lens from the headless ZOS COM server (invisible) to a running user-interface application (visible)
zGetRefresh()-- for copying a lens from the running user-interface application (visible) to the headless ZOS COM server (invisible).
enable the user to interact with a running OpticStudio user-interface and observe the changes made through the API instantly in the user-interface.
(If you cannot see the video in the frame below, or if the resolution is not good enough within this notebook, here is a direct link to the YouTube video: https://www.youtube.com/watch?v=ot5CrjMXc_w&feature=youtu.be)
Step3: <font color='#005078'>2. Single-step initialization of ZOS API Interface and instantiation of an Optical System</font>
The first enhancement is the complete elimination of the boilerplate code needed to create and initialize an optical system object. We just need to create an instance of a PyZOS OpticalSystem to get started. The optical system can be either a sequential or a non-sequential system.
Step4: It may seem that if we are using PyZOS, then the application is not available. In fact, it is available through a property of the PyZOS OpticalSystem object
Step5: <font color='#005078'>3. Introspection of properties of ZOS objects on <kbd>Tab</kbd> press</font>
Because of the way property attributes are mapped by the PyWin32 library, the properties are not introspectable (visible) on <kbd>Tab</kbd> press in smart editors such as IPython. Only the methods are shown upon pressing the <kbd>Tab</kbd> button (see figure below). Note that although the properties are not "visible" they are still accessible.
PyZOS enhances the user experience by showing both the method and the properties of the ZOS object (by creating a wrapped object and delegating the attribute access to the underlying ZOS API COM objects). In addition, the properties are easily identified by the prefix <font color='magenta'><b>p</b></font> in front of the property attribute names.
Step6: Note that the above enhancement doesn't come at the cost of code-breaks. If you have already written applications that interfaced directly using pywin32, i.e. the code accessed properties as CoatingDir, SamplesDir, etc., the application should run even with PyZOS library, as shown below
Step7: <font color='#005078'>4. Ability to override methods of existing ZOS objects</font>
There are some reasons why we may want to override some of the methods provided by ZOS. As an example, consider the SaveAs method of OpticalSystem that accepts a filename. If the filename is invalid the SaveAs method doesn't raise any exception/error, instead the lens data is saved to the default file "Lens.zmx".
This function in PyZOS has been overridden to raise an exception if the directory path is not valid, as shown below
Step8: <font color='#005078'>5. <kbd>Tab</kbd> completion and introspection of API constants</font>
The ZOS API provides a large set of enumerations that are accessible as constants through the constants object of PyWin32. However, they are not introspectable using the <kbd>Tab</kbd> key. PyZOS automatically retrieves the constants and makes them introspectable as shown below
Step9: <font color='#005078'>6. Ability to add custom methods and properties to any ZOS API object</font>
PyZOS allows us to easily extend the functionality of any ZOS API object by adding custom methods and properties, supporting the idea of developing a useful library over time. In the following block of code we have added custom methods zInsertNewSurfaceAt() and zSetSurfaceData() to the OpticalSystem object.
(Please note there is <u>no</u> implication that one cannot build a common set of functions without using PyZOS. Here, we only show that PyZOS allows us to add methods to the ZOS objects. How to add new methods to PyZOS objects will be explained later.)
Step10: The custom functions are introspectable, and they are identified by the prefix <font color='magenta'><b>z</b></font> to their names as shown in the following figure.
Step11: Here we can demonstrate another strong reason why we may need to add methods to ZOS objects when using the ZOS API with the pywin32 library. The problem is illustrated in the figure below. According to the ZOS API manual, the MFE object (IMeritFunctionEditor) should have the methods AddRow(), DeleteAllRows(), DeleteRowAt(), DeleteRowsAt(), InsertRowAt() and GetRowAt() that it inherits from the IEditor object. However, due to the way pywin32 handles inheritance, these methods (defined in the base class) are apparently not available to the derived class object [1].
Step12: In order to solve the above problem in PyZOS, currently we "add" these methods to the derived (and wrapped) objects and delegate the calls to the base class. (Probably there is a more intelligent method of solving this ... which will not require so much code re-writing!)
Step13: <font color='#008000'>Create a second optical system to load a standard lens for FFT MTF analysis</font>
Step14: Since these objects have not been wrapped (at the time of this writing), we will wrap them first | Python Code:
from __future__ import print_function
import os
import sys
import numpy as np
from IPython.display import display, Image, YouTubeVideo
import matplotlib.pyplot as plt
# Imports for using ZOS API in Python directly with pywin32
# (not required if using PyZOS)
from win32com.client.gencache import EnsureDispatch, EnsureModule
from win32com.client import CastTo, constants
# Import for using ZOS API in Python using PyZOS
import pyzos.zos as zos
Explanation: <font color='#A52A2A'>Enhancing ZOS API using PyZOS library</font>
End of explanation
# Set this variable to True or False to use ZOS API with
# the PyZOS library or without it respectively.
USE_PYZOS = True
Explanation: <font color='#008000'>Reference</font>
We will use the following two Zemax knowledgebase articles as base material for this discussion. Especially, the code from the first article is used to compare and illustrate the enhanced features of PyZOS library.
"How to build and optimize a singlet using ZOS-API with Python," Zemax KB, 12/16/2015, Thomas Aumeyr.
"Interfacing to OpticStudio from Mathematica," Zemax KB, 05/03/2015, David.
End of explanation
# a screenshot video of the feature:
display(YouTubeVideo("ot5CrjMXc_w", width=900, height=600))
Explanation: <font color='#008000'>Enhancements and capabilities provided by PyZOS</font>
Visible user-interface for the headless standalone ZOS-API COM application
Single-step initialization of ZOS API Interface and instantiation of an Optical System
Introspection of properties of ZOS objects on Tab press
Ability to override methods of existing ZOS objects
Tab completion and introspection of API constants
Ability to add custom methods and properties to any ZOS-API object
<font color='#005078'>1. Visible user-interface for the headless standalone ZOS-API COM application</font>
The PyZOS library provides three functions (instance methods of OpticalSystem) for the sync-with-ui mechanism.
The sync-with-ui mechanism may be turned on during the instantiation of the OpticalSystem object using the parameter sync_ui set to True (see the screen-shot video below) or using the method zSyncWithUI() of the OpticalSystem instance at other times.
The other two functions
zPushLens() -- for copying a lens from the headless ZOS COM server (invisible) to a running user-interface application (visible)
zGetRefresh()-- for copying a lens from the running user-interface application (visible) to the headless ZOS COM server (invisible).
enable the user to interact with a running OpticStudio user-interface and observe the changes made through the API instantly in the user-interface.
(If you cannot see the video in the frame below, or if the resolution is not good enough within this notebook, here is a direct link to the Youtube video: https://www.youtube.com/watch?v=ot5CrjMXc_w&feature=youtu.be)
End of explanation
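A rough usage sketch of the three sync-with-UI calls named above (the variable name is ours and the argument lists are simplified; the exact signatures are not shown in this notebook):
# Create an optical system with a visible, linked OpticStudio user-interface
osys_ui = zos.OpticalSystem(sync_ui=True)   # or call osys_ui.zSyncWithUI() later

# ... make changes through the API, then copy the lens into the running UI
osys_ui.zPushLens()

# ... edit the lens in the UI, then pull the UI state back into the headless COM server
osys_ui.zGetRefresh()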
if USE_PYZOS:
osys = zos.OpticalSystem() # Directly get the Primary Optical system
else:
# using ZOS API directly with pywin32
EnsureModule('ZOSAPI_Interfaces', 0, 1, 0)
connect = EnsureDispatch('ZOSAPI.ZOSAPI_Connection')
app = connect.CreateNewApplication() # The Application
osys = app.PrimarySystem # Optical system (primary)
# common
osys.New(False)
Explanation: <font color='#005078'>2. Single-step initialization of ZOS API Interface and instantiation of an Optical System</font>
The first enhancement is the complete elimination of the boilerplate code needed to create and initialize an optical system object. We just need to create an instance of a PyZOS OpticalSystem to get started. The optical system can be either a sequential or a non-sequential system.
End of explanation
if USE_PYZOS:
print(osys.pTheApplication)
Explanation: It may seem that if we are using PyZOS, then the application is not available. In fact, it is available through a property of the PyZOS OpticalSystem object:
End of explanation
Image('./images/00_01_property_attribute.png')
if USE_PYZOS:
sdir = osys.pTheApplication.pSamplesDir
else:
sdir = osys.TheApplication.SamplesDir
sdir
Explanation: <font color='#005078'>3. Introspection of properties of ZOS objects on <kbd>Tab</kbd> press</font>
Because of the way property attributes are mapped by the PyWin32 library, the properties are not introspectable (visible) on <kbd>Tab</kbd> press in smart editors such as IPython. Only the methods are shown upon pressing the <kbd>Tab</kbd> button (see figure below). Note that although the properties are not "visible" they are still accessible.
PyZOS enhances the user experience by showing both the method and the properties of the ZOS object (by creating a wrapped object and delegating the attribute access to the underlying ZOS API COM objects). In addition, the properties are easily identified by the prefix <font color='magenta'><b>p</b></font> in front of the property attribute names.
End of explanation
osys.TheApplication.SamplesDir # note that the properties don't have the 'p' prefix in their names
Explanation: Note that the above enhancement doesn't come at the cost of code-breaks. If you have already written applications that interfaced directly using pywin32, i.e. the code accessed properties as CoatingDir, SamplesDir, etc., the application should run even with PyZOS library, as shown below:
End of explanation
file_out = os.path.join(sdir, 'invalid_directory',
'Single Lens Example wizard+EFFL.zmx')
try:
osys.SaveAs(file_out)
except Exception as err:
print(repr(err))
file_out = os.path.join(sdir, 'Sequential', 'Objectives',
'Single Lens Example wizard+EFFL.zmx')
osys.SaveAs(file_out)
# Aperture
if USE_PYZOS:
sdata = osys.pSystemData
sdata.pAperture.pApertureValue = 40
else:
sdata = osys.SystemData
sdata.Aperture.ApertureValue = 40
# Set Field data
if USE_PYZOS:
field = sdata.pFields.AddField(0, 5.0, 1.0)
print('Number of fields =', sdata.pFields.pNumberOfFields)
else:
field = sdata.Fields.AddField(0, 5.0, 1.0)
print('Number of fields =', sdata.Fields.NumberOfFields)
Explanation: <font color='#005078'>4. Ability to override methods of existing ZOS objects</font>
There are some reasons why we may want to override some of the methods provided by ZOS. As an example, consider the SaveAs method of OpticalSystem that accepts a filename. If the filename is invalid the SaveAs method doesn't raise any exception/error, instead the lens data is saved to the default file "Lens.zmx".
This function in PyZOS has been overridden to raise an exception if the directory path is not valid, as shown below:
End of explanation
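For intuition, one way such an override could be structured: validate the directory, then delegate to the underlying SaveAs. This is a generic sketch of the idea, not PyZOS's actual implementation:
import os

def checked_save_as(optical_system, filename):
    """Refuse to save into a non-existent directory instead of silently writing Lens.zmx."""
    target_dir = os.path.dirname(filename)
    if target_dir and not os.path.isdir(target_dir):
        raise ValueError('Invalid directory path: %s' % target_dir)
    return optical_system.SaveAs(filename)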
Image('./images/00_02_constants.png')
# Setting wavelength using wavelength preset
if USE_PYZOS:
sdata.pWavelengths.SelectWavelengthPreset(zos.Const.WavelengthPreset_d_0p587);
else:
sdata.Wavelengths.SelectWavelengthPreset(constants.WavelengthPreset_d_0p587);
Explanation: <font color='#005078'>5. <kbd>Tab</kbd> completion and introspection of API constants</font>
The ZOS API provides a large set of enumerations that are accessible as constants through the constants object of PyWin32. However, they are not introspectable using the <kbd>Tab</kbd> key. PyZOS automatically retrieves the constants and makes them introspectable as shown below:
End of explanation
# Set Lens data Editor
if USE_PYZOS:
osys.zInsertNewSurfaceAt(1)
osys.zInsertNewSurfaceAt(1)
osys.zSetSurfaceData(1, thick=10, material='N-BK7', comment='front of lens')
osys.zSetSurfaceData(2, thick=50, comment='rear of lens')
osys.zSetSurfaceData(3, thick=350, comment='Stop is free to move')
else:
lde = osys.LDE
lde.InsertNewSurfaceAt(1)
lde.InsertNewSurfaceAt(1)
surf1 = lde.GetSurfaceAt(1)
surf2 = lde.GetSurfaceAt(2)
surf3 = lde.GetSurfaceAt(3)
surf1.Thickness = 10.0
surf1.Comment = 'front of lens'
surf1.Material = 'N-BK7'
surf2.Thickness = 50.0
surf2.Comment = 'rear of lens'
surf3.Thickness = 350.0
surf3.Comment = 'Stop is free to move'
Explanation: <font color='#005078'>6. Ability to add custom methods and properties to any ZOS API object</font>
PyZOS allows us to easily extend the functionality of any ZOS API object by adding custom methods and properties, supporting the idea of developing a useful library over time. In the following block of code we have added custom methods zInsertNewSurfaceAt() and zSetSurfaceData() to the OpticalSystem object.
(Please note there is <u>no</u> implication that one cannot build a common set of functions without using PyZOS. Here, we only show that PyZOS allows us to add methods to the ZOS objects. How to add new methods to PyZOS objects will be explained later.)
End of explanation
Image('./images/00_03_extendiblity_custom_functions.png')
# Setting solves - Make thickness and radii variable
# nothing to demonstrate in particular in this block of code
if USE_PYZOS:
osys.pLDE.GetSurfaceAt(1).pRadiusCell.MakeSolveVariable()
osys.pLDE.GetSurfaceAt(1).pThicknessCell.MakeSolveVariable()
osys.pLDE.GetSurfaceAt(2).pRadiusCell.MakeSolveVariable()
osys.pLDE.GetSurfaceAt(2).pThicknessCell.MakeSolveVariable()
osys.pLDE.GetSurfaceAt(3).pThicknessCell.MakeSolveVariable()
else:
surf1.RadiusCell.MakeSolveVariable()
surf1.ThicknessCell.MakeSolveVariable()
surf2.RadiusCell.MakeSolveVariable()
surf2.ThicknessCell.MakeSolveVariable()
surf3.ThicknessCell.MakeSolveVariable()
# Setting up the default merit function
# this code block again shows that we can add custom methods
# based on our requirements
if USE_PYZOS:
osys.zSetDefaultMeritFunctionSEQ(ofType=0, ofData=1, ofRef=0, rings=2, arms=0, grid=0,
useGlass=True, glassMin=3, glassMax=15, glassEdge=3,
useAir=True, airMin=0.5, airMax=1000, airEdge=0.5)
else:
mfe = osys.MFE
wizard = mfe.SEQOptimizationWizard
wizard.Type = 0 # RMS
wizard.Data = 1 # Spot Radius
wizard.Reference = 0 # Centroid
wizard.Ring = 2 # 3 Rings
wizard.Arm = 0 # 6 Arms
wizard.IsGlassUsed = True
wizard.GlassMin = 3
wizard.GlassMax = 15
wizard.GlassEdge = 3
wizard.IsAirUsed = True
wizard.AirMin = 0.5
wizard.AirMax = 1000
wizard.AirEdge = 0.5
wizard.IsAssumeAxialSymmetryUsed = True
wizard.CommonSettings.OK()
Explanation: The custom functions are introspectable, and they are identified by the prefix <font color='magenta'><b>z</b></font> to their names as shown in the following figure.
End of explanation
Image('./images/00_04_extendiblity_required_methods.png')
Explanation: Here we can demonstrate another strong reason why we may need to add methods to ZOS objects when using the ZOS API with the pywin32 library. The problem is illustrated in the figure below. According to the ZOS API manual, the MFE object (IMeritFunctionEditor) should have the methods AddRow(), DeleteAllRows(), DeleteRowAt(), DeleteRowsAt(), InsertRowAt() and GetRowAt() that it inherits from the IEditor object. However, due to the way pywin32 handles inheritance, these methods (defined in the base class) are apparently not available to the derived class object [1].
End of explanation
# Add operand
if USE_PYZOS:
mfe = osys.pMFE
operand1 = mfe.InsertNewOperandAt(1)
operand1.ChangeType(zos.Const.MeritOperandType_EFFL)
operand1.pTarget = 400.0
operand1.pWeight = 1.0
else:
operand1 = mfe.InsertNewOperandAt(1)
operand1.ChangeType(constants.MeritOperandType_EFFL)
operand1.Target = 400.0
operand1.Weight = 1.0
# Local optimization
if USE_PYZOS:
local_opt = osys.pTools.OpenLocalOptimization()
local_opt.pAlgorithm = zos.Const.OptimizationAlgorithm_DampedLeastSquares
local_opt.pCycles = zos.Const.OptimizationCycles_Automatic
local_opt.pNumberOfCores = 8
local_opt.RunAndWaitForCompletion()
local_opt.Close()
else:
local_opt = osys.Tools.OpenLocalOptimization()
local_opt.Algorithm = constants.OptimizationAlgorithm_DampedLeastSquares
local_opt.Cycles = constants.OptimizationCycles_Automatic
local_opt.NumberOfCores = 8
base_tool = CastTo(local_opt, 'ISystemTool')
base_tool.RunAndWaitForCompletion()
base_tool.Close()
# save the latest changes to the file
osys.Save()
Explanation: In order to solve the above problem in PyZOS, currently we "add" these methods to the derived (and wrapped) objects and delegate the calls to the base class. (Probably there is a more intelligent method of solving this ... which will not require so much code re-writing!)
End of explanation
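The general shape of that workaround is to wrap the derived COM object, cast it to the base interface with CastTo, and forward the missing methods; a simplified illustration of the pattern (not the actual PyZOS source):
from win32com.client import CastTo

class EditorWrapper(object):
    """Expose base-interface methods (e.g. IEditor.AddRow) on a wrapped derived editor."""
    def __init__(self, com_editor):
        self._editor = com_editor                    # e.g. an IMeritFunctionEditor object
        self._base = CastTo(com_editor, 'IEditor')   # base interface that owns AddRow() etc.

    def AddRow(self):
        return self._base.AddRow()

    def DeleteAllRows(self):
        return self._base.DeleteAllRows()

    def __getattr__(self, name):
        # anything not defined here is delegated to the wrapped derived object
        return getattr(self._editor, name)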
%matplotlib inline
osys2 = zos.OpticalSystem()
# load a lens into the Optical System
lens = 'Cooke 40 degree field.zmx'
zfile = os.path.join(sdir, 'Sequential', 'Objectives', lens)
osys2.LoadFile(zfile, False)
osys2.pSystemName
# check the aperture
osys2.pSystemData.pAperture.pApertureValue
# a more detailed information about the pupil
osys2.pLDE.GetPupil()
# Thickness of a surface
surf6 = osys2.pLDE.GetSurfaceAt(6)
surf6.pThickness
# Thickness of surface through custom added method
osys2.zGetSurfaceData(6).thick
# Open Analysis windows in the system currently
num_analyses = osys2.pAnalyses.pNumberOfAnalyses
for i in range(num_analyses):
print(osys2.pAnalyses.Get_AnalysisAtIndex(i+1).pGetAnalysisName)
#mtf analysis
fftMtf = osys2.pAnalyses.New_FftMtf() # open a new FFT MTF window
fftMtf
fftMtf_settings = fftMtf.GetSettings()
fftMtf_settings
# Set the maximum frequency to 160 lp/mm
fftMtf_settings.pMaximumFrequency = 160.0
# run the analysis
fftMtf.ApplyAndWaitForCompletion()
# results
fftMtf_results = fftMtf.GetResults() # returns an <pyzos.zosutils.IAR_ object
# info about the result data
print('Number of data grids:', fftMtf_results.pNumberOfDataGrids)
print('Number of data series:', fftMtf_results.pNumberOfDataSeries)
ds = fftMtf_results.GetDataSeries(1)
ds.pDescription
ds.pNumSeries
ds.pSeriesLabels
Explanation: <font color='#008000'>Create a second optical system to load a standard lens for FFT MTF analysis</font>
End of explanation
dsXdata = ds.pXData
dsYdata = ds.pYData
freq = np.array(dsXdata.pData)
mtf = np.array(dsYdata.pData) # shape = (len(freq) , ds.pNumSeries)
# build a function to plot the FFTMTF
def plot_FftMtf(optical_system, max_freq=160.0):
fftMtf = optical_system.pAnalyses.New_FftMtf()
fftMtf.GetSettings().pMaximumFrequency = max_freq
fftMtf.ApplyAndWaitForCompletion()
fftMtf_results = fftMtf.GetResults()
fig, ax = plt.subplots(1,1, figsize=(8,6))
num_dataseries = fftMtf_results.pNumberOfDataSeries
col = ['#0080FF', '#F52080', '#00CC60', '#B96F20', '#1f77b4',
'#ff7f0e', '#2ca02c', '#8c564b', '#00BFFF', '#FF8073']
for i in range(num_dataseries):
ds = fftMtf_results.GetDataSeries(i)
dsXdata = ds.pXData
dsYdata = ds.pYData
freq = np.array(dsXdata.pData)
mtf = np.array(dsYdata.pData) # shape = (len(freq) , ds.pNumSeries)
ax.plot(freq[::5], mtf[::5, 0], color=col[i], lw=1.5, label=ds.pDescription) # tangential
ax.plot(freq[::5], mtf[::5, 1], '--', color=col[i], lw=2) # sagittal
ax.set_xlabel('Spatial Frequency (lp/mm)')
ax.set_ylabel('FFT MTF')
ax.legend()
plt.text(0.85, -0.1,u'\u2014 {}'.format(ds.pSeriesLabels[0]), transform=ax.transAxes)
plt.text(0.85, -0.15,'-- {}'.format(ds.pSeriesLabels[1]), transform=ax.transAxes)
plt.grid()
plt.show()
# FFT MTF of Optical System 2
plot_FftMtf(osys2)
# FFT MTF of Optical System 1
plot_FftMtf(osys, 100)
# Close the application
if USE_PYZOS:
app = osys.pTheApplication
app.CloseApplication()
del app
Explanation: Since these objects have not been wrapped (at the time of this writing), we will wrap them first
End of explanation |
703 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class Python Demos
I will be using this Notebook for class demos. To use at home, load Anaconda (https://www.continuum.io/downloads) or WinPython (https://winpython.github.io/)
Step1: Let's look at what set() does!
Step2: Let's create a 2nd list and set.
Step3: ...and look at the differences!
Step4: See https | Python Code:
import numpy as np
nums1 = np.random.randint(1,11, 15)
nums1
Explanation: Class Python Demos
I will be using this Notebook for class demos. To use at home, load Anaconda (https://www.continuum.io/downloads) or WinPython (https://winpython.github.io/)
Set() demo
First let's create a random list using the numpy library.
End of explanation
set1 = set(nums1)
set1
Explanation: Let's look at what set() does!
End of explanation
nums2 = np.random.randint(1,11, 12)
nums2
set2 = set(nums2)
set2
Explanation: Let's create a 2nd list and set.
End of explanation
set2.difference(set1)
set1.difference(set2)
Explanation: ...and look at the differences!
End of explanation
# Intersection
set1 & set2
# Union
set1 | set2
# Symmetric difference: elements that appear in exactly one of the two sets
(set1 - set2) | (set2 - set1)
# Symmetric difference, method 2
(set1 | set2) - (set1 & set2)
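Python also exposes this operation directly; a small addition for comparison (not in the original notebook):
# Built-in symmetric difference operator and method
set1 ^ set2
set1.symmetric_difference(set2)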
Explanation: See https://en.wikibooks.org/wiki/Python_Programming/Sets for more information about sets!
End of explanation |
704 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Does Python have a function to reduce fractions? | Problem:
import numpy as np
numerator = 98
denominator = 42
gcd = np.gcd(numerator, denominator)
result = (numerator//gcd, denominator//gcd) |
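The standard library can reduce a fraction without NumPy; a sketch of two equivalent approaches, reusing the values from the snippet above:
from fractions import Fraction
from math import gcd

numerator, denominator = 98, 42

# fractions.Fraction reduces to lowest terms automatically
frac = Fraction(numerator, denominator)          # Fraction(7, 3)
reduced = (frac.numerator, frac.denominator)

# or reduce manually with math.gcd (Python 3.5+)
g = gcd(numerator, denominator)
result = (numerator // g, denominator // g)      # (7, 3)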
705 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook, we'll study the relation between dimensions and the semantic field of concepts.
To do so, we'll study the distribution of dimensions for a domain-specific corpus.
Workflow
build domain
Step1: Std study
We plot here
Step2: Domain selection
We run the experiment here for the animal domain, but you can run it on any domain-specific corpus
Step3: Text8 corpus - Carthesian
Step4: Text8 corpus - Polar
Step5: Wikipedia corpus - Carthesian
Step6: Wikipedia corpus - Polar | Python Code:
domainWordList = [open('../../data/domain/luu_animal.txt').read().splitlines(),
open('../../data/domain/luu_plant.txt').read().splitlines(),
open('../../data/domain/luu_vehicle.txt').read().splitlines()]
def buildCptDf(d, domain, polar=False):
cptList = cpe.buildConceptList(d, domain, True)
if polar:
return pd.DataFrame([c.vect[1:] for c in cptList], index = [c.word for c in cptList])
else:
return pd.DataFrame([c.vect for c in cptList], index = [c.word for c in cptList])
cptDf = buildCptDf(db.DB('../../data/voc/npy/text8_polar.npy'), domainWordList[0], polar=True)
cptDf[:5]
Explanation: In this notebook, we'll study the relation between dimensions and the semantic field of concepts.
To do so, we'll study the distribution of dimensions for a domain-specific corpus.
Workflow
build domain
End of explanation
def stdDim(cptDf):
stdSerie = []
for dim in cptDf.columns:
dimensionSerie = cptDf[dim]
stdSerie.append(dimensionSerie.std())
dimensionSerie.plot(kind='kde')
plt.show()
stdSerie = pd.Series(stdSerie)
stdSerie.plot(kind='kde')
return stdSerie.describe()
stdDim(cptDf)
Explanation: Std study
We plot here:
* the std for each dimension
* the std of the std across all dimensions
End of explanation
domain = domainWordList[0]
Explanation: Domain selection
We run the experiment here for the animal domain, but you can run it on any domain-specific corpus
End of explanation
cptDf = buildCptDf(db.DB('../../data/voc/npy/text8.npy'), domain, polar=False)
stdDim(cptDf)
Explanation: Text8 corpus - Cartesian
End of explanation
cptDf = buildCptDf(db.DB('../../data/voc/npy/text8_polar.npy'), domain, polar=True)
stdDim(cptDf)
Explanation: Text8 corpus - Polar
End of explanation
cptDf = buildCptDf(db.DB('../../data/voc/npy/wikiEn-skipgram.npy'), domain, polar=False)
stdDim(cptDf)
Explanation: Wikipedia corpus - Cartesian
End of explanation
cptDf = buildCptDf(db.DB('../../data/voc/npy/wikiEn-skipgram_polar.npy'), domain, polar=True)
stdDim(cptDf)
Explanation: Wikipedia corpus - Polar
End of explanation |
706 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Autoencoders
by Khaled Nasr as a part of a <a href="https://www.google-melange.com/gsoc/project/details/google/gsoc2014/khalednasr92/5657382461898752">GSoC 2014 project</a> mentored by Theofanis Karaletsos and Sergey Lisitsyn
Step1: Creating the autoencoder
Similar to regular neural networks in Shogun, we create a deep autoencoder using an array of NeuralLayer-based classes, which can be created using the utility class NeuralLayers. However, for deep autoencoders there's a restriction that the layer sizes in the network have to be symmetric, that is, the first layer has to have the same size as the last layer, the second layer has to have the same size as the second-to-last layer, and so on. This restriction is necessary for pre-training to work. More details on that can be found in the following section.
We'll create a 5-layer deep autoencoder with the following layer sizes: 256->512->128->512->256. We'll use rectified linear neurons for the hidden layers and linear neurons for the output layer.
Step2: Pre-training
Now we can pre-train the network. To illustrate exactly what's going to happen, we'll give the layers some labels
Step3: Fine-tuning
After pre-training, we can train the autoencoder as a whole to fine-tune the parameters. Training the whole autoencoder is performed using the train() function. Training parameters are controlled through the public attributes, same as a regular neural network.
Step4: Evaluation
Now we can evaluate the autoencoder that we trained. We'll start by providing it with corrupted inputs and looking at how it will reconstruct them. The function reconstruct() is used to obtain the reconstructions
Step5: The figure shows the corrupted examples and their reconstructions. The top half of the figure shows the ones corrupted with multiplicative noise, the bottom half shows the ones corrupted with additive noise. We can see that the autoencoders can provide decent reconstructions despite the heavy noise.
Next we'll look at the weights that the first hidden layer has learned. To obtain the weights, we can call the get_layer_parameters() function, which will return a vector containing both the weights and the biases of the layer. The biases are stored first in the array followed by the weights matrix in column-major format.
Step6: Now, we can use the autoencoder to initialize a supervised neural network. The network will have all the layers of the autoencoder up to (and including) the middle layer. We'll also add a softmax output layer. So, the network will look like: L1->L2->L3->Softmax
Step7: Next, we'll evaluate the accuracy on the test set
Step8: Convolutional Autoencoders
Convolutional autoencoders [3] are the adaptation of autoencoders to images (or other spatially-structured data). They are built with convolutional layers where each layer consists of a number of feature maps. Each feature map is produced by convolving a small filter with the layer's inputs, adding a bias, and then applying some non-linear activation function. Additionally, a max-pooling operation can be performed on each feature map by dividing it into small non-overlapping regions and taking the maximum over each region. In this section we'll pre-train a convolutional network as a stacked autoencoder and use it for classification.
In Shogun, convolutional autoencoders are constructed and trained just like regular autoencoders, except that we build the autoencoder using CNeuralConvolutionalLayer objects
Step9: Now we'll pre-train the autoencoder
Step10: And then convert the autoencoder to a regular neural network for classification
Step11: And evaluate it on the test set | Python Code:
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from scipy.io import loadmat
from shogun import RealFeatures, MulticlassLabels, Math
# load the dataset
dataset = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))
Xall = dataset['data']
# the usps dataset has the digits labeled from 1 to 10
# we'll subtract 1 to make them in the 0-9 range instead
Yall = np.array(dataset['label'].squeeze(), dtype=np.double)-1
# 4000 examples for training
Xtrain = RealFeatures(Xall[:,0:4000])
Ytrain = MulticlassLabels(Yall[0:4000])
# the rest for testing
Xtest = RealFeatures(Xall[:,4000:-1])
Ytest = MulticlassLabels(Yall[4000:-1])
# initialize the random number generator with a fixed seed, for repeatability
Math.init_random(10)
Explanation: Deep Autoencoders
by Khaled Nasr as a part of a <a href="https://www.google-melange.com/gsoc/project/details/google/gsoc2014/khalednasr92/5657382461898752">GSoC 2014 project</a> mentored by Theofanis Karaletsos and Sergey Lisitsyn
This notebook illustrates how to train and evaluate a deep autoencoder using Shogun. We'll look at both regular fully-connected autoencoders and convolutional autoencoders.
Introduction
A (single layer) autoencoder is a neural network that has three layers: an input layer, a hidden (encoding) layer, and a decoding layer. The network is trained to reconstruct its inputs, which forces the hidden layer to try to learn good representations of the inputs.
In order to encourage the hidden layer to learn good input representations, certain variations on the simple autoencoder exist. Shogun currently supports two of them: Denoising Autoencoders [1] and Contractive Autoencoders [2]. In this notebook we'll focus on denoising autoencoders.
For denoising autoencoders, each time a new training example is introduced to the network, it's randomly corrupted in some manner, and the target is set to the original example. The autoencoder will try to recover the original data from its noisy version, which is why it's called a denoising autoencoder. This process will force the hidden layer to learn a good representation of the input, one which is not affected by the corruption process.
A deep autoencoder is an autoencoder with multiple hidden layers. Training such autoencoders directly is usually difficult, however, they can be pre-trained as a stack of single layer autoencoders. That is, we train the first hidden layer to reconstruct the input data, and then train the second hidden layer to reconstruct the states of the first hidden layer, and so on. After pre-training, we can train the entire deep autoencoder to fine-tune all the parameters together. We can also use the autoencoder to initialize a regular neural network and train it in a supervised manner.
In this notebook we'll apply deep autoencoders to the USPS dataset for handwritten digits. We'll start by loading the data and dividing it into a training set and a test set:
End of explanation
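To make the corruption step concrete, a small NumPy illustration of the two noise types mentioned above (dropout and gaussian), with the clean example kept as the training target; this is illustration only, not Shogun code:
import numpy as np

rng = np.random.RandomState(0)
x = rng.rand(256)                       # one flattened 16x16 USPS-style example

# dropout noise: each input is zeroed with probability p
p = 0.5
x_dropout = x * (rng.rand(256) > p)

# gaussian noise: add small random perturbations
x_gauss = x + 0.1 * rng.randn(256)

# in both cases the autoencoder's training target stays the clean x
print(np.count_nonzero(x_dropout), x_gauss.shape)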
from shogun import NeuralLayers, DeepAutoencoder
layers = NeuralLayers()
layers = layers.input(256).rectified_linear(512).rectified_linear(128).rectified_linear(512).linear(256).done()
ae = DeepAutoencoder(layers)
Explanation: Creating the autoencoder
Similar to regular neural networks in Shogun, we create a deep autoencoder using an array of NeuralLayer-based classes, which can be created using the utility class NeuralLayers. However, for deep autoencoders there's a restriction that the layer sizes in the network have to be symmetric, that is, the first layer has to have the same size as the last layer, the second layer has to have the same size as the second-to-last layer, and so on. This restriction is necessary for pre-training to work. More details on that can be found in the following section.
We'll create a 5-layer deep autoencoder with the following layer sizes: 256->512->128->512->256. We'll use rectified linear neurons for the hidden layers and linear neurons for the output layer.
End of explanation
from shogun import AENT_DROPOUT, NNOM_GRADIENT_DESCENT
ae.pt_noise_type.set_const(AENT_DROPOUT) # use dropout noise
ae.pt_noise_parameter.set_const(0.5) # each input has a 50% chance of being set to zero
ae.pt_optimization_method.set_const(NNOM_GRADIENT_DESCENT) # train using gradient descent
ae.pt_gd_learning_rate.set_const(0.01)
ae.pt_gd_mini_batch_size.set_const(128)
ae.pt_max_num_epochs.set_const(50)
ae.pt_epsilon.set_const(0.0) # disable automatic convergence testing
# uncomment this line to allow the training progress to be printed on the console
#from shogun import MSG_INFO; ae.io.set_loglevel(MSG_INFO)
# start pre-training. this might take some time
ae.pre_train(Xtrain)
Explanation: Pre-training
Now we can pre-train the network. To illustrate exactly what's going to happen, we'll give the layers some labels: L1 for the input layer, L2 for the first hidden layer, and so on up to L5 for the output layer.
In pre-training, an autoencoder will be formed for each encoding layer (layers up to the middle layer in the network). So here we'll have two autoencoders: L1->L2->L5, and L2->L3->L4. The first autoencoder will be trained on the raw data and used to initialize the weights and biases of layers L2 and L5 in the deep autoencoder. After the first autoencoder is trained, we use it to transform the raw data into the states of L2. These states will then be used to train the second autoencoder, which will be used to initialize the weights and biases of layers L3 and L4 in the deep autoencoder.
The operations described above are performed by the pre_train() function. Pre-training parameters for each autoencoder can be controlled using the pt_* public attributes of DeepAutoencoder. Each of those attributes is an SGVector whose length is the number of autoencoders in the deep autoencoder (2 in our case). It can be used to set the parameters for each autoencoder individually. SGVector's set_const() method can also be used to assign the same parameter value for all autoencoders.
Different noise types can be used to corrupt the inputs in a denoising autoencoder. Shogun currently supports 2 noise types: dropout noise, where a random portion of the inputs is set to zero at each iteration in training, and gaussian noise, where the inputs are corrupted with random gaussian noise. The noise type and strength can be controlled using pt_noise_type and pt_noise_parameter. Here, we'll use dropout noise.
End of explanation
ae.set_noise_type(AENT_DROPOUT) # same noise type we used for pre-training
ae.set_noise_parameter(0.5)
ae.set_max_num_epochs(50)
ae.set_optimization_method(NNOM_GRADIENT_DESCENT)
ae.set_gd_mini_batch_size(128)
ae.set_gd_learning_rate(0.0001)
ae.set_epsilon(0.0)
# start fine-tuning. this might take some time
_ = ae.train(Xtrain)
Explanation: Fine-tuning
After pre-training, we can train the autoencoder as a whole to fine-tune the parameters. Training the whole autoencoder is performed using the train() function. Training parameters are controlled through the public attributes, same as a regular neural network.
End of explanation
# get a 50-example subset of the test set
subset = Xtest[:,0:50].copy()
# corrupt the first 25 examples with multiplicative noise
subset[:,0:25] *= (random.random((256,25))>0.5)
# corrupt the other 25 examples with additive noise
subset[:,25:50] += random.random((256,25))
# obtain the reconstructions
reconstructed_subset = ae.reconstruct(RealFeatures(subset))
# plot the corrupted data and the reconstructions
figure(figsize=(10,10))
for i in range(50):
ax1=subplot(10,10,i*2+1)
ax1.imshow(subset[:,i].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r)
ax1.set_xticks([])
ax1.set_yticks([])
ax2=subplot(10,10,i*2+2)
ax2.imshow(reconstructed_subset[:,i].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r)
ax2.set_xticks([])
ax2.set_yticks([])
Explanation: Evaluation
Now we can evaluate the autoencoder that we trained. We'll start by providing it with corrupted inputs and looking at how it will reconstruct them. The function reconstruct() is used to obtain the reconstructions:
End of explanation
# obtain the weights matrix of the first hidden layer
# the 512 is the number of biases in the layer (512 neurons)
# the transpose is because numpy stores matrices in row-major format, and Shogun stores
# them in column major format
w1 = ae.get_layer_parameters(1)[512:].reshape(256,512).T
# visualize the weights between the first 100 neurons in the hidden layer
# and the neurons in the input layer
figure(figsize=(10,10))
for i in range(100):
ax1=subplot(10,10,i+1)
ax1.imshow(w1[i,:].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r)
ax1.set_xticks([])
ax1.set_yticks([])
Explanation: The figure shows the corrupted examples and their reconstructions. The top half of the figure shows the ones corrupted with multiplicative noise, the bottom half shows the ones corrupted with additive noise. We can see that the autoencoders can provide decent reconstructions despite the heavy noise.
Next we'll look at the weights that the first hidden layer has learned. To obtain the weights, we can call the get_layer_parameters() function, which will return a vector containing both the weights and the biases of the layer. The biases are stored first in the array followed by the weights matrix in column-major format.
End of explanation
from shogun import NeuralSoftmaxLayer
nn = ae.convert_to_neural_network(NeuralSoftmaxLayer(10))
nn.set_max_num_epochs(50)
nn.set_labels(Ytrain)
_ = nn.train(Xtrain)
Explanation: Now, we can use the autoencoder to initialize a supervised neural network. The network will have all the layers of the autoencoder up to (and including) the middle layer. We'll also add a softmax output layer. So, the network will look like: L1->L2->L3->Softmax. The network is obtained by calling convert_to_neural_network():
End of explanation
from shogun import MulticlassAccuracy
predictions = nn.apply_multiclass(Xtest)
accuracy = MulticlassAccuracy().evaluate(predictions, Ytest) * 100
print('Classification accuracy on the test set =', accuracy, '%')
Explanation: Next, we'll evaluate the accuracy on the test set:
End of explanation
from shogun import DynamicObjectArray, NeuralInputLayer, NeuralConvolutionalLayer, CMAF_RECTIFIED_LINEAR
conv_layers = DynamicObjectArray()
# 16x16 single channel images
conv_layers.append_element(NeuralInputLayer(16,16,1))
# the first encoding layer: 5 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 10 8x8 feature maps
conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 5, 2, 2, 2, 2))
# the second encoding layer: 15 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 20 4x4 feature maps
conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 15, 2, 2, 2, 2))
# the first decoding layer: same structure as the first encoding layer
conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 5, 2, 2))
# the second decoding layer: same structure as the input layer
conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 1, 2, 2))
conv_ae = DeepAutoencoder(conv_layers)
Explanation: Convolutional Autoencoders
Convolutional autoencoders [3] are the adaptation of autoencoders to images (or other spatially-structured data). They are built with convolutional layers where each layer consists of a number of feature maps. Each feature map is produced by convolving a small filter with the layer's inputs, adding a bias, and then applying some non-linear activation function. Additionally, a max-pooling operation can be performed on each feature map by dividing it into small non-overlapping regions and taking the maximum over each region. In this section we'll pre-train a convolutional network as a stacked autoencoder and use it for classification.
In Shogun, convolutional autoencoders are constructed and trained just like regular autoencoders, except that we build the autoencoder using CNeuralConvolutionalLayer objects:
End of explanation
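To unpack what a single feature map computes, a small NumPy sketch of "convolve a 5x5 filter, add a bias, apply a rectified-linear activation, then 2x2 max-pool"; this is for illustration only and is not how Shogun implements it:
import numpy as np

rng = np.random.RandomState(0)
image = rng.rand(16, 16)          # one input channel
filt = rng.randn(5, 5)            # filter with radius 2 (5x5)
bias = 0.1

# valid cross-correlation produces a 12x12 map
fmap = np.zeros((12, 12))
for i in range(12):
    for j in range(12):
        fmap[i, j] = np.sum(image[i:i+5, j:j+5] * filt) + bias
fmap = np.maximum(fmap, 0)        # rectified-linear activation

# 2x2 max-pooling over non-overlapping regions gives a 6x6 map
pooled = fmap.reshape(6, 2, 6, 2).max(axis=(1, 3))
print(pooled.shape)               # (6, 6)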
conv_ae.pt_noise_type.set_const(AENT_DROPOUT) # use dropout noise
conv_ae.pt_noise_parameter.set_const(0.3) # each input has a 30% chance of being set to zero
conv_ae.pt_optimization_method.set_const(NNOM_GRADIENT_DESCENT) # train using gradient descent
conv_ae.pt_gd_learning_rate.set_const(0.002)
conv_ae.pt_gd_mini_batch_size.set_const(100)
conv_ae.pt_max_num_epochs[0] = 30 # max number of epochs for pre-training the first encoding layer
conv_ae.pt_max_num_epochs[1] = 10 # max number of epochs for pre-training the second encoding layer
conv_ae.pt_epsilon.set_const(0.0) # disable automatic convergence testing
# start pre-training. this might take some time
conv_ae.pre_train(Xtrain)
Explanation: Now we'll pre-train the autoencoder:
End of explanation
conv_nn = conv_ae.convert_to_neural_network(NeuralSoftmaxLayer(10))
# train the network
conv_nn.set_epsilon(0.0)
conv_nn.set_max_num_epochs(50)
conv_nn.set_labels(Ytrain)
# start training. this might take some time
_ = conv_nn.train(Xtrain)
Explanation: And then convert the autoencoder to a regular neural network for classification:
End of explanation
predictions = conv_nn.apply_multiclass(Xtest)
accuracy = MulticlassAccuracy().evaluate(predictions, Ytest) * 100
print('Classification accuracy on the test set =', accuracy, '%')
Explanation: And evaluate it on the test set:
End of explanation |
707 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Post-training float16 quantization
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_float16_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
Step2: Train and export the model
Step3: For this example, you trained the model for just a single epoch, so it only trains to ~96% accuracy.
Convert to a TensorFlow Lite model
Now you can convert the trained model into TensorFlow Lite format using the Python TFLiteConverter.
Load the model using the TFLiteConverter.
Step4: Write it out to a .tflite file.
Step5: To instead quantize the model to float16 on export, first set the optimizations flag to use the default optimizations. Then specify that float16 is a supported type on the target platform.
Step6: Finally, convert the model as usual. By default, the converted model will still use float inputs and outputs, for invocation convenience.
Step7: Note that the resulting file is approximately half the size.
Run the TensorFlow Lite models
Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter.
Load the model into the interpreters
Step8: Test the model on one image
Step9: Evaluate the models
Step10: Repeating the evaluation on the float16-quantized model gives the following. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
from tensorflow import keras
import numpy as np
import pathlib
Explanation: Post-training float16 quantization
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_float16_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/lite/performance/post_training_float16_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/lite/performance/post_training_float16_quant.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/lite/performance/post_training_float16_quant.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
TensorFlow Lite now supports converting weights to 16-bit floating point values during model conversion from TensorFlow to TensorFlow Lite's flat buffer format. This results in a 2x reduction in model size. Some hardware, like GPUs, can compute natively in this reduced-precision arithmetic, realizing a speedup over traditional floating point execution, and the Tensorflow Lite GPU delegate can be configured to run this way. However, a model converted to float16 weights can still run on the CPU without additional modification: the float16 weights are upsampled to float32 prior to the first inference. This permits a significant reduction in model size in exchange for a minimal impact on latency and accuracy.
In this guide, you train an MNIST model from scratch, check its accuracy in TensorFlow, and then convert the model into a Tensorflow Lite flatbuffer with float16 quantization. Finally, you check the accuracy of the converted model and compare it to the original float32 model.
Build an MNIST model
Setup
End of explanation
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=1,
validation_data=(test_images, test_labels)
)
Explanation: Train and export the model
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
Explanation: For this example, you trained the model for just a single epoch, so it only trains to ~96% accuracy.
Convert to a TensorFlow Lite model
Now you can convert the trained model into TensorFlow Lite format using the Python TFLiteConverter.
Load the model using the TFLiteConverter.
End of explanation
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
Explanation: Write it out to a .tflite file.
End of explanation
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
Explanation: To instead quantize the model to float16 on export, first set the optimizations flag to use default optimizations. Then specify that float16 is a supported type on the target platform.
End of explanation
tflite_fp16_model = converter.convert()
tflite_model_fp16_file = tflite_models_dir/"mnist_model_quant_f16.tflite"
tflite_model_fp16_file.write_bytes(tflite_fp16_model)
Explanation: Finally, convert the model as usual. Note that by default the converted model will still use float inputs and outputs for invocation convenience.
End of explanation
!ls -lh {tflite_models_dir}
Explanation: Note that the resulting file is approximately 1/2 the size.
End of explanation
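The same size comparison can also be done in plain Python (a small sketch; it reuses the tflite_model_file and tflite_model_fp16_file paths defined in the cells above):
for f in [tflite_model_file, tflite_model_fp16_file]:
    # Path.stat().st_size gives the file size in bytes
    print(f.name, round(f.stat().st_size / 1024), "KB")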
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
interpreter_fp16 = tf.lite.Interpreter(model_path=str(tflite_model_fp16_file))
interpreter_fp16.allocate_tensors()
Explanation: Run the TensorFlow Lite models
Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter.
Load the model into the interpreters
End of explanation
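As an optional sanity check (a sketch), both interpreters should still report float32 inputs, since only the weights were quantized to float16:
print(interpreter.get_input_details()[0]["dtype"])       # expected: float32
print(interpreter_fp16.get_input_details()[0]["dtype"])  # expected: float32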
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
import matplotlib.pylab as plt
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter_fp16.get_input_details()[0]["index"]
output_index = interpreter_fp16.get_output_details()[0]["index"]
interpreter_fp16.set_tensor(input_index, test_image)
interpreter_fp16.invoke()
predictions = interpreter_fp16.get_tensor(output_index)
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
Explanation: Test the models on one image
End of explanation
# A helper function to evaluate the TF Lite model using "test" dataset.
def evaluate_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for test_image in test_images:
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
# Compare prediction results with ground truth labels to calculate accuracy.
accurate_count = 0
for index in range(len(prediction_digits)):
if prediction_digits[index] == test_labels[index]:
accurate_count += 1
accuracy = accurate_count * 1.0 / len(prediction_digits)
return accuracy
print(evaluate_model(interpreter))
Explanation: Evaluate the models
End of explanation
# NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite
# doesn't have super optimized server CPU kernels. For this reason this may be
# slower than the above float interpreter. But for mobile CPUs, considerable
# speedup can be observed.
print(evaluate_model(interpreter_fp16))
Explanation: Repeat the evaluation on the float16 quantized model to obtain the following.
End of explanation |
708 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BandsPlot
Step1: Let's get a bands_plot from a .bands file
Step2: and see what we've got
Step3: Getting the bands that you want
By default, BandsPlot gives you the 15 bands below and above 0 eV (which is interpreted as the fermi level).
There are two main ways to specify the bands that you want to display
Step4: while with bands_range you can actually indicate the indices.
However, note that Erange has preference over bands_range, therefore you need to set Erange to None if you want the change to take effect.
Step5: If your fermi level is not correctly set or you want a different energy reference, you can provide a value for E0 to specify where your 0 should be and the bands to display will be automatically calculated from that.
However, if you want to update E0 after the plot has been built and you want BandsPlot to recalculate the bands for you, you will need to set Erange and bands_range to None again.
Step6: Notice how only 25 bands are displayed now
Step7: Notice that in spin polarized bands, you can select the spins to display using the spin setting, just pass a list of spin components (e.g. spin=[0]).
Quick styling
If all you want is to change the color and width of the bands, there's one simple solution
Step8: And now in green but also make them wider
Step9: If you have spin polarized bands, bands_color will modify the color of the first spin channel, while the second one can be tuned with spindown_color.
Step10: Displaying the smallest gaps
The easiest thing to do is to let BandsPlot discover where the (minimum) gaps are.
This is indicated by setting the gap parameter to True. One can also use gap_color if a particular color is desired.
Step11: This displays the minimum gaps. However there may be some issues with it
Step12: This example is not meaningful for gap_tol, but it is illustrative of what gap_tol does. It is the minimum k-distance between two points to consider them "the same point" in the sense that only one of them will be used to show the gap. In this case, if we set gap_tol all the way up to 3, the plot will consider the two gamma points to be part of the same "point" and therefore it will only show the gap once.
Step13: This is not what gap_tol is meant for, since it is thought to remediate the effect of locally flat bands, but still you can get the idea of what it does.
Step14: Displaying custom gaps
If you are not happy with the gaps that the plot is displaying for you or you simply want gaps that are not the smallest ones, you can always use custom_gaps.
Custom gaps should be a list where each item specifies how to draw that given gap. See the setting's help message
Step15: So, for example, if we want to plot the gamma-gamma gap
Step16: Notice how we got the gap probably not where we wanted, since it would be better to have it in the middle Gamma point, which is more visible. As the help message of custom_gaps states, you can also pass the K value instead of a label.
Now, you'll be happy to know that you can easily access the k values of all labels, as they are stored as attributes in the bands dataarray, which you can find in bands_plot.bands
Step17: Now all we need to do is to grab the value for the second gamma point
Step18: And use it to build a custom gap
Step19: Individual band styling
The bands_color and bands_width should be enough for most uses. However, you may want to style each band differently. Since we can not support every possible case, you can pass a function to the add_band_data. Here's the help message
Step21: You can build a dummy function to print the band and see what it looks like. Notice that you only get those bands that are inside the range specified for the plot, therefore the first band here is band 11!
Step23: Just as an educational example, we are going to style the bands according to these conditions
Step24: Displaying spin texture
If your bands plot comes from a non-colinear spin calculation (or is using a Hamiltonian with non-colinear spin), you can pass "x", "y" or "z" to the spin setting in order to get a display of the spin texture.
Let's read in a hamiltonian coming from a spin orbit SIESTA calculation, which is obtained from this fantastic spin texture tutorial
Step25: Generate the path for our band structure
Step26: And finally generate the plot
Step27: These are the bands, now let's ask for a particular spin texture
Step28: And let's change the colorscale for the spin texture
Step29: We hope you enjoyed what you learned!
This next cell is just to create the thumbnail for the notebook in the docs | Python Code:
import sisl
import sisl.viz
# This is just for convenience to retrieve files
siesta_files = sisl._environ.get_environ_variable("SISL_FILES_TESTS") / "sisl" / "io" / "siesta"
Explanation: BandsPlot
End of explanation
bands_plot = sisl.get_sile( siesta_files / "SrTiO3.bands").plot()
Explanation: Let's get a bands_plot from a .bands file
End of explanation
bands_plot
Explanation: and see what we've got:
End of explanation
bands_plot.update_settings(Erange=[-10, 10])
Explanation: Getting the bands that you want
By default, BandsPlot gives you the 15 bands below and above 0 eV (which is interpreted as the fermi level).
There are two main ways to specify the bands that you want to display: Erange and bands_range.
As you may have guessed, Erange specifies the energy range that is displayed:
End of explanation
bands_plot.update_settings(bands_range=[6, 15], Erange=None)
Explanation: while with bands_range you can actually indicate the indices.
However, note that Erange has preference over bands_range, therefore you need to set Erange to None if you want the change to take effect.
End of explanation
bands_plot.update_settings(E0=-10, bands_range=None, Erange=None)
Explanation: If your fermi level is not correctly set or you want a different energy reference, you can provide a value for E0 to specify where your 0 should be and the bands to display will be automatically calculated from that.
However, if you want to update E0 after the plot has been built and you want BandsPlot to recalculate the bands for you, you will need to set Erange and bands_range to None again.
End of explanation
# Set them back to "normal"
bands_plot = bands_plot.update_settings(E0=0, bands_range=None, Erange=None)
Explanation: Notice how only 25 bands are displayed now: the only 10 that are below 0 eV (there are no lower states) and 15 above 0 eV.
End of explanation
bands_plot.update_settings(bands_color="red")
Explanation: Notice that in spin polarized bands, you can select the spins to display using the spin setting, just pass a list of spin components (e.g. spin=[0]).
Quick styling
If all you want is to change the color and width of the bands, there's one simple solution: use the bands_color and bands_width settings.
Let's show them in red:
End of explanation
bands_plot.update_settings(bands_color="green", bands_width=3)
Explanation: And now in green but also make them wider:
End of explanation
bands_plot = bands_plot.update_settings(bands_color="black", bands_width=1)
Explanation: If you have spin polarized bands, bands_color will modify the color of the first spin channel, while the second one can be tuned with spindown_color.
End of explanation
bands_plot.update_settings(gap=True, gap_color="green", Erange=[-10,10]) # We reduce Erange just to see it better
Explanation: Displaying the smallest gaps
The easiest thing to do is to let BandsPlot discover where the (minimum) gaps are.
This is indicated by setting the gap parameter to True. One can also use gap_color if a particular color is desired.
End of explanation
bands_plot.update_settings(direct_gaps_only=True)
Explanation: This displays the minimum gaps. However there may be some issues with it: it will show all gaps with the minimum value. That is, if you have repeated points in the brillouin zone it will display multiple gaps that are equivalent.
What's worse, if the region where your gap is is very flat, two consecutive points might have the same energy. Multiple gaps will be displayed one glued to another.
To help cope with this issues, you have the direct_gaps_only and gap_tol.
In this case, since we have no direct gaps, setting direct_gaps_only will hide them all:
End of explanation
bands_plot.update_settings(direct_gaps_only=False, gap_tol=3)
Explanation: This example is not meaningful for gap_tol, but it is illustrative of what gap_tol does. It is the minimum k-distance between two points to consider them "the same point" in the sense that only one of them will be used to show the gap. In this case, if we set gap_tol all the way up to 3, the plot will consider the two gamma points to be part of the same "point" and therefore it will only show the gap once.
End of explanation
bands_plot = bands_plot.update_settings(gap=False, gap_tol=0.01)
Explanation: This is not what gap_tol is meant for, since it is thought to remediate the effect of locally flat bands, but still you can get the idea of what it does.
End of explanation
print(bands_plot.get_param("custom_gaps").help)
Explanation: Displaying custom gaps
If you are not happy with the gaps that the plot is displaying for you or you simply want gaps that are not the smallest ones, you can always use custom_gaps.
Custom gaps should be a list where each item specifies how to draw that given gap. See the setting's help message:
End of explanation
bands_plot.update_settings(custom_gaps=[{"from": "Gamma", "to": "Gamma", "color": "red"}])
Explanation: So, for example, if we want to plot the gamma-gamma gap:
End of explanation
bands_plot.bands.attrs
Explanation: Notice how we got the gap probably not where we wanted, since it would be better to have it in the middle Gamma point, which is more visible. As the help message of custom_gaps states, you can also pass the K value instead of a label.
Now, you'll be happy to know that you can easily access the k values of all labels, as they are stored as attributes in the bands dataarray, which you can find in bands_plot.bands:
End of explanation
gap_k = None
for val, label in zip(bands_plot.bands.attrs["ticks"], bands_plot.bands.attrs["ticklabels"]):
if label == "Gamma":
gap_k = val
gap_k
Explanation: Now all we need to do is to grab the value for the second gamma point:
End of explanation
bands_plot.update_settings(custom_gaps=[{"from": gap_k, "to": gap_k, "color": "orange"}])
Explanation: And use it to build a custom gap:
End of explanation
print(bands_plot.get_param("add_band_data").help)
Explanation: Individual band styling
The bands_color and bands_width should be enough for most uses. However, you may want to style each band differently. Since we can not support every possible case, you can pass a function to the add_band_data. Here's the help message:
End of explanation
def add_band_data(band, self):
    """Dummy function to see the band DataArray"""
if band.band == 11:
print(band)
return {}
bands_plot.update_settings(add_band_data=add_band_data)
Explanation: You can build a dummy function to print the band and see what it looks like. Notice that you only get those bands that are inside the range specified for the plot, therefore the first band here is band 11!
End of explanation
import numpy as np
def draw_gradient(band, self):
    """Takes a band and styles it according to its energy dispersion.
    NOTE: If it's too far from the fermi level, it fades it in purple for additional coolness."""
dist_from_Ef = np.max(abs(band))
if dist_from_Ef < 5:
return {
"mode": "lines+markers",
"marker_size": np.abs(np.gradient(band))*40,
}
else:
return {
"line_color": "purple",
"line_dash": "dot",
"opacity": 1-float(dist_from_Ef/10)
}
bands_plot.update_settings(add_band_data=draw_gradient)
Explanation: Just as an educational example, we are going to style the bands according to these conditions:
- If the band is within ±5 eV of the fermi level, we are going to draw markers whose size is proportional to the gradient of the band at each point.
- Otherwise, we will just display the bands as purple dotted lines that fade as we get far from the fermi level (just because we can!)
Note: Of course, to modify traces, one must have some notion of how plotly traces work. Just hit plotly's visual reference page https://plotly.com/python/ for inspiration.
End of explanation
H = sisl.get_sile(siesta_files / "Bi2D_BHex.TSHS").read_hamiltonian()
H.spin.is_spinorbit
Explanation: Displaying spin texture
If your bands plot comes from a non-colinear spin calculation (or is using a Hamiltonian with non-colinear spin), you can pass "x", "y" or "z" to the spin setting in order to get a display of the spin texture.
Let's read in a hamiltonian coming from a spin orbit SIESTA calculation, which is obtained from this fantastic spin texture tutorial:
End of explanation
band_struct = sisl.BandStructure(H, point=[[1./2, 0., 0.], [0., 0., 0.],
[1./3, 1./3, 0.], [1./2, 0., 0.]],
division=301,
name=['M', r'$\Gamma$', 'K', 'M'])
Explanation: Generate the path for our band structure:
End of explanation
spin_texture_plot = band_struct.plot(Erange=[-2,2])
spin_texture_plot
Explanation: And finally generate the plot:
End of explanation
spin_texture_plot.update_settings(spin="x", bands_width=3)
Explanation: These are the bands, now let's ask for a particular spin texture:
End of explanation
spin_texture_plot.update_settings(backend="plotly", spin_texture_colorscale="temps")
Explanation: And let's change the colorscale for the spin texture:
End of explanation
thumbnail_plot = spin_texture_plot
if thumbnail_plot:
thumbnail_plot.show("png")
Explanation: We hope you enjoyed what you learned!
This next cell is just to create the thumbnail for the notebook in the docs
End of explanation |
709 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 4
Imports
Step2: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$
Step5: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function
Step6: Use interact to explore the plot_random_line function using | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 4
Imports
End of explanation
def random_line(m, b, sigma, size=10):
    """Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
        The array of y values for the line with `size` points.
    """
x=np.linspace(-1.0,1.0,size)
N=np.empty(size)
if sigma==0.0: #I received some help from classmates here
y=m*x+b
else:
for i in range(size):
N[i]=np.random.normal(0,sigma**2)
y=m*x+b+N
return(x,y)
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
Explanation: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$:
$$
y = m x + b + N(0,\sigma^2)
$$
Be careful about the sigma=0.0 case.
End of explanation
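As a side note, the noise generation can also be vectorized; the sketch below is not part of the graded exercise and assumes sigma is the standard deviation of the N(0, sigma**2) noise:
def random_line_vectorized(m, b, sigma, size=10):
    # hypothetical helper: same idea as random_line, but without the explicit Python loop
    x = np.linspace(-1.0, 1.0, size)
    noise = np.zeros(size) if sigma == 0.0 else np.random.normal(0.0, sigma, size)
    return x, m * x + b + noise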
def ticks_out(ax):
    """Move the ticks to the outside of the box."""
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
    """Plot a random line with slope m, intercept b and size points."""
x=np.linspace(-1.0,1.0,size)
N=np.empty(size)
if sigma==0.0:
y=m*x+b
else:
for i in range(size):
N[i]=np.random.normal(0,sigma**2)
y=m*x+b+N
plt.figure(figsize=(9,6))
plt.scatter(x,y,color=color)
plt.xlim(-1.1,1.1)
plt.ylim(-10.0,10.0)
plt.box(False)
plt.grid(True)
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:
Make the marker color settable through a color keyword argument with a default of red.
Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.
Customize your plot to make it effective and beautiful.
End of explanation
interact(plot_random_line,m=(-10.0,10.0,0.1), b=(-5.0,5.0,0.1), sigma=(0.0,5.0,0.01),size=(10,100,10),color=('r','b','g'))
#### assert True # use this cell to grade the plot_random_line interact
Explanation: Use interact to explore the plot_random_line function using:
m: a float valued slider from -10.0 to 10.0 with steps of 0.1.
b: a float valued slider from -5.0 to 5.0 with steps of 0.1.
sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.
size: an int valued slider from 10 to 100 with steps of 10.
color: a dropdown with options for red, green and blue.
End of explanation |
710 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
More SQL DML and DDL
Data and environment setup
Step1: SQL JOINs
Note first that a simple JOIN without further specification of common attributes will result in a cross product relation. We see this by examining two tables and then taking a simple JOIN.
Step2: Now we add the join, and we see the Cartesian product of both tables. All possible combinations are produced in the resulting relation.
Step3: Ordinarily we don't want to do this, but it's important to keep in mind that this is how things work under the hood.
More typically, we'll specify at least one pair of common attributes between the two tables to align them properly.
Step4: Looks much neater, right?
We can of course combine this with naming specific attributes to SELECT (project).
Step5: There's a shorthand form of the JOIN statement that you will often see
Step6: Which you use can be a matter of personal preference or style. For simple queries, this reads very clearly for me. For more complex queries with more attribute selection conditions in the WHERE clause beyond specifying attributes to JOIN on, it can be better to split them up.
Compare
Step7: The above two queries are identical logically, but they read differently.
Finally, we can combine several tables at once - not just two! This works by adding more tables into the JOIN operation.
Step8: Subqueries
We can use the results from one query to constrain conditions on another. Take for example the Person table
Step9: This represents a set, which we know we can use with the IN clause.
Step10: But imagine a case where there are dozens, hundreds, or even thousands of possible values. You want to select carefully, without having to enumerate those possible values. This is where subqueries work best.
Step11: With this simple approach, you can expand in all directions as you might guess. For example, let's add further attribute constraint conditions on both the main query and within the subquery.
Step12: This kind of nested subquery is extremely useful and is used quite often.
Data definition (DDL)
We're going to set up a similar database ourselves, so let's start a new one to avoid messing up the tutorial database.
Step13: When creating new tables, we often include a DROP TABLE command so that when the code is repeated it performs each CREATE TABLE cleanly. The cleanest way to do this is with DROP TABLE IF EXISTS, which typically won't raise an error if the table doesn't already exist.
Step14: Now we can insert new records into our new tables.
Step15: We can do the same thing in a single INSERT statement, too. Note how this time we are specifying our own order of attributes, and our values match that order. Otherwise, we'd have to follow the schema definition order.
Step16: With INSERT, we can also use subqueries.
Step17: More DML | Python Code:
!wget http://files.software-carpentry.org/survey.db
!pip install ipython-sql
%load_ext sql
%sql sqlite:///survey.db
Explanation: More SQL DML and DDL
Data and environment setup
End of explanation
%%sql
SELECT *
FROM Site;
%%sql
SELECT *
FROM Visited;
Explanation: SQL JOINs
Note first that a simple JOIN without further specification of common attributes will result in a cross product relation. We see this by examining two tables and then taking a simple JOIN.
End of explanation
%%sql
SELECT *
FROM Site
JOIN Visited;
Explanation: Now we add the join, and we see the Cartesian product of both tables. All possible combinations are produced in the resulting relation.
End of explanation
%%sql
SELECT *
FROM Site
JOIN Visited
ON Site.name = Visited.site;
Explanation: Ordinarily we don't want to do this, but it's important to keep in mind that this is how things work under the hood.
More typically, we'll specify at least one pair of common attributes between the two tables to align them properly.
End of explanation
%%sql
SELECT Site.lat, Site.long, Visited.dated
FROM Site
JOIN Visited
ON Site.name = Visited.site;
Explanation: Looks much neater, right?
We can of course combine this with naming specific attributes to SELECT (project).
End of explanation
%%sql
SELECT Site.lat, Site.long, Visited.dated
FROM Site, Visited
WHERE Site.name = Visited.site;
Explanation: There's a shorthand form of the JOIN statement that you will often see: the common attributes can be specified in the WHERE clause.
End of explanation
%%sql
SELECT Site.lat, Site.long, Visited.dated
FROM Site, Visited
WHERE Site.name = Visited.site
AND Visited.dated IS NOT NULL
AND Site.lat < -48
AND Site.long > -128;
%%sql
SELECT Site.lat, Site.long, Visited.dated
FROM Site
JOIN Visited
ON Site.name = Visited.site
WHERE Visited.dated IS NOT NULL
AND Site.lat < -48
AND Site.long > -128;
Explanation: Which you use can be a matter of personal preference or style. For simple queries, this reads very clearly for me. For more complex queries with more attribute selection conditions in the WHERE clause beyond specifying attributes to JOIN on, it can be better to split them up.
Compare:
End of explanation
%%sql
SELECT Site.lat, Site.long, Visited.dated, Survey.quant, Survey.reading
FROM Site
JOIN Visited
ON Site.name = Visited.site
JOIN Survey
ON Visited.ident = Survey.taken
WHERE Visited.dated IS NOT NULL;
Explanation: The above two queries are identical logically, but they read differently.
Finally, we can combine several tables at once - not just two! This works by adding more tables into the JOIN operation.
End of explanation
%%sql
SELECT DISTINCT ident
FROM person;
Explanation: Subqueries
We can use the results from one query to constrain conditions on another. Take for example the Person table:
End of explanation
%%sql
SELECT *
FROM survey
WHERE person IN ('dyer', 'pb', 'lake', 'row', 'danforth')
Explanation: This represents a set, which we know we can use with the IN clause.
End of explanation
%%sql
SELECT *
FROM survey;
%%sql
SELECT *
FROM survey
WHERE person IN
(SELECT DISTINCT ident
FROM person);
Explanation: But imagine a case where there are dozens, hundreds, or even thousands of possible values. You want to select carefully, without having to enumerate those possible values. This is where subqueries work best.
End of explanation
%%sql
SELECT *
FROM survey
WHERE person IN
(SELECT DISTINCT ident
FROM person
WHERE personal = 'Frank')
AND reading > 7;
Explanation: With this simple approach, you can expand in all directions as you might guess. For example, let's add further attribute constraint conditions on both the main query and within the subquery.
End of explanation
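The same filter can usually also be written as an explicit JOIN instead of an IN subquery; a sketch for comparison (assuming ident is unique in the Person table):
%%sql
SELECT Survey.*
FROM Survey
JOIN Person ON Survey.person = Person.ident
WHERE Person.personal = 'Frank'
AND Survey.reading > 7;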
%sql sqlite:///demo.db
Explanation: This kind of nested subquery is extremely useful and is used quite often.
Data definition (DDL)
We're going to set up a similar database ourselves, so let's start a new one to avoid messing up the tutorial database.
End of explanation
%%sql
DROP TABLE IF EXISTS Person;
CREATE TABLE Person(
identity TEXT,
personal TEXT,
family TEXT);
DROP TABLE IF EXISTS Site;
CREATE TABLE Site(
name TEXT,
lat REAL,
long REAL);
DROP TABLE IF EXISTS Visited;
CREATE TABLE Visited(
ident INTEGER,
site TEXT,
dated TEXT);
DROP TABLE IF EXISTS Survey;
CREATE TABLE Survey(
taken INTEGER,
person TEXT,
quant REAL,
reading REAL);
Explanation: When creating new tables, we often include a DROP TABLE command so that when the code is repeated it performs each CREATE TABLE cleanly. The cleanest way to do this is with DROP TABLE IF EXISTS, which typically won't raise an error if the table doesn't already exist.
End of explanation
%%sql
INSERT INTO Site values('DR-1', -49.85, -128.57);
INSERT INTO Site values('DR-3', -47.15, -126.72);
INSERT INTO Site values('MSK-4', -48.87, -123.40);
SELECT * FROM Site;
Explanation: Now we can insert new records into our new tables.
End of explanation
%%sql
DELETE FROM Site;
INSERT INTO Site (lat, long, name)
VALUES
(-49.85, -128.57, 'DR-1'),
(-47.15, -126.72, 'DR-3'),
(-48.87, -123.40, 'MSK-4')
;
SELECT * FROM Site;
Explanation: We can do the same thing in a single INSERT statement, too. Note how this time we are specifying our own order of attributes, and our values match that order. Otherwise, we'd have to follow the schema definition order.
End of explanation
%%sql
CREATE TABLE JustLatLong(lat text, long text);
INSERT INTO JustLatLong SELECT lat, long FROM Site;
SELECT * FROM JustLatLong;
Explanation: With INSERT, we can also use subqueries.
End of explanation
%%sql
SELECT *
FROM Site
WHERE name = 'MSK-4';
%%sql
UPDATE Site
SET lat = -48.87, long = -125.40
WHERE name = 'MSK-4';
%%sql
SELECT *
FROM Site
WHERE name = 'MSK-4';
%%sql
SELECT *
FROM Site;
%%sql
DELETE FROM Site
WHERE name = 'DR-3';
%%sql
SELECT *
FROM Site;
%%sql
UPDATE Site
SET lat = -50;
%%sql
SELECT *
FROM Site;
%%sql
DELETE FROM Site;
%%sql
SELECT *
FROM Site;
Explanation: More DML: UPDATE and DELETE
UPDATE and DELETE statements are powerful as their changes apply immediately across an entire relation. They are often written with specific constraints that limit their potential effect to precise conditions.
End of explanation |
711 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feedback on the homework projects
Find the bug 1
This piece of code, which picks the computer's move based on a randomly generated number, may look correct at first glance, but in reality it is enough to run it a few times for the little bug to show up.
Step1: The correct solution
The bug was that elif was replaced by another if, which split one condition with three branches into two separate conditions, the first of which had only one branch (a single if) and the second one two (one if and one else).
Step2: Find the bug 2
What do you think happens to the variable strana (the side length) just before the volume and surface area are computed?
Step3: Happy - rich
Several possible solutions of the happy-rich program. They all do the same thing, but some are simply more readable and more compact.
Solution 1
Step4: Solution 2
Step5: Solution 3
Step6: How could this be improved?
This particular implementation of Rock, paper, scissors using and/or can still be shortened and improved quite a bit. Can you find a better solution?
Step7: A better solution for the tie | Python Code:
from random import randrange
cislo = randrange(2)
if cislo == 0:
tah_pocitace = "kámen"
print("Počítač vybral kámen.")
if cislo == 1:
print("Počítač vybral nůžky.")
tah_pocitace = "nůžky"
else:
tah_pocitace = "papír"
print("Počítač vybral papír.")
Explanation: Feedback on the homework projects
Find the bug 1
This piece of code, which picks the computer's move based on a randomly generated number, may look correct at first glance, but in reality it is enough to run it a few times for the little bug to show up.
End of explanation
from random import randrange
cislo = randrange(2)
if cislo == 0:
tah_pocitace = "kámen"
print("Počítač vybral kámen.")
elif cislo == 1:
print("Počítač vybral nůžky.")
tah_pocitace = "nůžky"
else:
tah_pocitace = "papír"
print("Počítač vybral papír.")
Explanation: The correct solution
The bug was that elif was replaced by another if, which split one condition with three branches into two separate conditions, the first of which had only one branch (a single if) and the second one two (one if and one else).
End of explanation
strana = int(input('Zadej velikost strany v cm: '))
strana = 2852
print('Objem krychle o straně',strana,'cm je', strana**3,'cm3')
print('Obsah krychle o straně',strana,'cm je', 6*strana**2,'cm2')
Explanation: Find the bug 2
What do you think happens to the variable strana (the side length) just before the volume and surface area are computed?
End of explanation
print('Odpovídej "ano" nebo "ne".')
stastna_retezec = input('Jsi šťastná?')
bohata_retezec = input('Jsi bohatá?')
if stastna_retezec == 'ano':
if bohata_retezec == 'ano':
print ("ty se máš")
elif bohata_retezec == 'ne':
print ("zkus mín utrácet")
elif stastna_retezec == 'ne':
if bohata_retezec == 'ano':
print ("zkus se víc usmívat")
elif bohata_retezec == 'ne':
print ("to je mi líto")
else:
print ("Nerozumím.")
Explanation: Happy - rich
Several possible solutions of the happy-rich program. They all do the same thing, but some are simply more readable and more compact.
Solution 1
End of explanation
print('Odpovídej "ano" nebo "ne".')
stastna_retezec = input('Jsi šťastná?')
bohata_retezec = input('Jsi bohatá?')
if stastna_retezec == 'ano' and bohata_retezec == 'ano':
print ("Grauluji")
elif stastna_retezec == 'ano' and bohata_retezec == 'ne':
print('Zkus míň utrácet.')
elif stastna_retezec == 'ne' and bohata_retezec == 'ano':
print ("zkus se víc usmívat")
elif stastna_retezec == 'ne' and bohata_retezec == 'ne':
print ("to je mi líto")
else:
print ("Nerozumim")
Explanation: Solution 2
End of explanation
print('Odpovídej "ano" nebo "ne".')
stastna_retezec = input('Jsi šťastná? ')
if stastna_retezec == 'ano':
stastna = True
elif stastna_retezec == 'ne':
stastna = False
else:
print('Nerozumím!')
bohata_retezec = input('Jsi bohatá? ')
if bohata_retezec == 'ano':
bohata = True
elif bohata_retezec == 'ne':
bohata = False
else:
print('Nerozumím!')
if bohata and stastna:
print('Gratuluji!')
elif bohata:
print('Zkus se víc usmívat.')
elif stastna:
print('Zkus míň utrácet.')
else:
print('To je mi líto.')
Explanation: Solution 3
End of explanation
from random import randrange
cislo = randrange(3)
if cislo == 0:
tah_pocitace='kámen'
elif cislo == 1:
tah_pocitace = 'nůžky'
else:
tah_pocitace = 'papír'
tah_cloveka = input('kámen, nůžky, nebo papír? ')
if (tah_cloveka == "kámen" and tah_pocitace == "kámen") or (tah_cloveka=="nůžky" and tah_pocitace == "nůžky") or (tah_cloveka=="papír" and tah_pocitace=="papír"):
print("Plichta")
elif (tah_cloveka == "kámen" and tah_pocitace == "nůžky") or (tah_cloveka=="nůžky" and tah_pocitace == "papír") or (tah_cloveka=="papír" and tah_pocitace=="kámen"):
print("Vyhrál jsi")
elif (tah_cloveka == "kámen" and tah_pocitace == "papír") or (tah_cloveka=="nůžky" and tah_pocitace == "kámen") or (tah_cloveka=="papír" and tah_pocitace=="nůžky"):
print("Prohrál jsi!")
else:
print('Nerozumím.')
Explanation: How could this be improved?
This particular implementation of Rock, paper, scissors using and/or can still be shortened and improved quite a bit. Can you find a better solution?
End of explanation
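One possible shorter variant, sketched here before the tie fix shown next: encode which move beats which in a dictionary, so each outcome needs only a single comparison (it reuses tah_cloveka and tah_pocitace from the cell above).
beats = {"kámen": "nůžky", "nůžky": "papír", "papír": "kámen"}
if tah_cloveka not in beats:
    print('Nerozumím.')
elif tah_cloveka == tah_pocitace:
    print("Plichta")
elif beats[tah_cloveka] == tah_pocitace:
    print("Vyhrál jsi")
else:
    print("Prohrál jsi!")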
from random import randrange
cislo = randrange(3)
if cislo == 0:
tah_pocitace='kámen'
elif cislo == 1:
tah_pocitace = 'nůžky'
else:
tah_pocitace = 'papír'
tah_cloveka = input('kámen, nůžky, nebo papír? ')
if tah_cloveka == tah_pocitace:
print("Plichta")
elif (tah_cloveka == "kámen" and tah_pocitace == "nůžky") or (tah_cloveka=="nůžky" and tah_pocitace == "papír") or (tah_cloveka=="papír" and tah_pocitace=="kámen"):
print("Vyhrál jsi")
elif (tah_cloveka == "kámen" and tah_pocitace == "papír") or (tah_cloveka=="nůžky" and tah_pocitace == "kámen") or (tah_cloveka=="papír" and tah_pocitace=="nůžky"):
print("Prohrál jsi!")
else:
print('Nerozumím.')
Explanation: A better solution for the tie
End of explanation |
712 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Integration and derivatives
Step1: Derivatives and Integrals
Step2: Numerical Derivatives
Step3: Note that the values are pretty close, and we can use a print('{0
Step4: First Order Ordinary Differential Equations
Step5: Starting with the ODE
Step6: To leverage SciPy's odeint function, a python function is used to represent the ODE
Step7: However, for reasons that will become clear for higher order ODEs, it makes sense to use a change of variables
Step8: Second Order Ordinary Differential Equations
Step9: Equivalently, $ z = [\theta, \omega] $ can be used as an array without referencing $ [\theta, \omega] $ explicitly, saving a couple of lines
Step10: Coupled Ordinary Differential Equations | Python Code:
# Python imports
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.misc import derivative
from scipy import integrate
Explanation: Integration and derivatives:
Many systems are represented dynamically and present the need for some simple calculus. Derivatives and integrals are important for these systems.
By the end of this file you should have seen simple numerical examples of:
Derivatives
Integrals
First order differential equations
Second order differential equations
First order coupled differential equations
Further reading:
http://mathinsight.org/ordinary_differential_equation_introduction
http://www.scipy-lectures.org/scipy.html
https://github.com/scipy/scipy/blob/v0.19.0/scipy/integrate/odepack.py#L25-L230
End of explanation
# Use sine function to demonstrate derivatives and integrals
X_Pi = np.linspace(0,np.pi,100)
X_2Pi = np.linspace(0,2*np.pi,200)
plt.plot(X_2Pi, np.sin(X_2Pi), label="0 - 2 Pi")
plt.plot(X_Pi, np.sin(X_Pi), linestyle="dashed", color='red',linewidth=3, label="0 - Pi" )
plt.legend(loc='best')
plt.xlabel("x")
plt.ylabel("y");
Explanation: Derivatives and Integrals:
End of explanation
result = derivative(np.sin,np.pi,dx=1e-6) #Specifying the spacing helps get a more accurate answer
print("The derivative of sin(x) at x = pi is {0}".format(result))
result = derivative(np.sin,2*np.pi,dx=1e-6)
print("The derivative of sin(x) at x = 2pi is {0}".format(result))
Explanation: Numerical Derivatives:
By hand:
$\frac{d}{dx} Sin(x) = Cos(x)$
We can try at two different points:
$\frac{d}{dx} Sin(\pi) = -1$
$\frac{d}{dx} Sin(2\pi) = 1$
End of explanation
result, error = integrate.quad(np.sin,0,np.pi)
print("The integral of sine from 0 to pi is {0:.3g}".format(result))
result, error = integrate.quad(np.sin,0,2*np.pi)
print("The integral of sine from 0 to 2 pi is {0}".format(result))
# This is effectively zero
Explanation: Note that the values are pretty close, and we can use print('{0:.3g}'.format(result)) to trim the trailing digits.
Numerical Integrals
By hand:
$\int_0^\pi Sin(x)dx = Cos(0)-Cos(\pi)$
$=2$
Using the built-in function, a technique from the fortran library QUADPACK (http://nines.cs.kuleuven.be/software/QUADPACK/):
The error output comes from the fortran library, and is an estimate on the upper bound of absolute error in the result.
End of explanation
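As a quick symbolic cross-check of the hand calculation above (a sketch; sympy is not otherwise used in this notebook):
import sympy as sp
xs = sp.Symbol('x')
print(sp.integrate(sp.sin(xs), (xs, 0, sp.pi)))      # 2
print(sp.integrate(sp.sin(xs), (xs, 0, 2 * sp.pi)))  # 0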
from scipy.integrate import odeint
Explanation: First Order Ordinary Differential Equations:
These are differential equations that relate a function of a single independent variable to its derivatives. Ubiquitous in physics problems, a common example is a body that has both acceleration and velocity.
End of explanation
#Perform the calculation by hand:
X = np.linspace(0,2*np.pi,1000000)
Y_hand = 1/2.*(np.sin(X) - np.cos(X) + np.exp(-X))
plt.plot(X, Y_hand, label="y by hand" )
plt.show()
Explanation: Starting with the ODE:
$ \frac{dy}{dx} + y = Sin(x) $
and
$ y(0) = 0 $
The solution is:
$ y = \frac{1}{2}(Sin(x) - Cos(x) + e^{-x}) $
To see how this solution was determined by hand, please see Appendix I.
End of explanation
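The analytic solution can also be verified symbolically; the sketch below uses sympy's dsolve with the initial condition y(0) = 0 and checks that the result matches the expression above:
import sympy as sp
xs = sp.Symbol('x')
yf = sp.Function('y')
# solve y' + y = sin(x), y(0) = 0
sol = sp.dsolve(sp.Eq(yf(xs).diff(xs) + yf(xs), sp.sin(xs)), yf(xs), ics={yf(0): 0})
print(sp.simplify(sol.rhs - (sp.sin(xs) - sp.cos(xs) + sp.exp(-xs)) / 2))  # 0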
# Define the ODE as function, use odeint to determine numerical solution
def dy_dx(y, x):
return np.sin(x) - y
Explanation: To leverage SciPy's odeint function, a python function is used to represent the ODE:
$ \frac{dy}{dx} = Sin(x) - y $
End of explanation
# Define the ODE as function, use odeint to determine numerical solution
def dy_dx(z, x):
theta = z
return np.sin(x) - theta
# Use the ODE function odeint
Y_calc = odeint(dy_dx, 0, X)
Y_calc = np.array(Y_calc).flatten()
# Plot
plt.plot(X, Y_hand, label="y by hand")
plt.plot(X, Y_calc, linestyle="dashed", color='red',linewidth=3, \
label="Python odeint")
plt.legend(loc='best')
plt.show()
Explanation: However, for reasons that will become clear for higher order ODEs, it makes sense to use a change of variables:
$ \frac{dy}{dx} = Sin(x) -\theta $
where:
$ \theta = y $
For odeint, the variable $z$ can be used to represent the variable of integration:
$z = [\theta]$
End of explanation
X = np.linspace(0, 1, 100)
def dy2_dx(z, x):
theta = z[0]
omega = z[1]
f0 = -3*omega - 2*theta # Define a relation in terms of first
# order ODEs
return (omega, f0) # Output increasing orders
y0 = (0,1) # Initial conditions at x = 0 (y=0 and y'=1)
dy_dxs = odeint(dy2_dx, y0, X) # Output the solution to dy2_dx, or [y, y']
# Plot
plt.plot(X, -1*np.exp(-2*X) + np.exp(-X), label="y by hand")
plt.plot(X, dy_dxs[:, 0], label="y", linestyle="dotted", color='green', linewidth=4)
plt.plot(X, 2*np.exp(-2*X) - np.exp(-X), label="y' by hand")
plt.plot(X, dy_dxs[:, 1], label="y'", linestyle="dotted", color='red', linewidth=4)
plt.legend()
plt.show()
Explanation: Second Order Ordinary Differential Equations:
The variable reassignment above becomes more clear for the following second order differential:
$ \frac{d^2y}{dx^2}+3 \frac{dy}{dx} + 2y = 0$
and
$y(0) = 0$
$y'(0) = 1$
For the solution, again reframe in terms of new variables:
$ y''+ 3y' + 2y = 0 $
or:
$ \omega' = -3\omega - 2\theta $
where:
$ \theta = y $
$ \omega = y' $
The $z$ variable is again employed, only this time as an array of increasing order:
$ z = [\theta, \omega] $
End of explanation
def dy2_dx(z, x):
return (z[1], -3*z[1] -2*z[0])
dy_dxs = odeint(dy2_dx, (0,1), X) # Outputs [y, y']
plt.plot(X, dy_dxs[:, 0], label="y")
plt.plot(X, dy_dxs[:, 1], label="y'")
plt.legend()
plt.show()
Explanation: Equivalently, $ z = [\theta, \omega] $ can be used as an array without referencing $ [\theta, \omega] $ explicitly, saving a couple of lines:
End of explanation
t = np.linspace(0, 10., 1000) # time grid
# Define the rates
k01 = 1; # S0 -> S1 rate
k10 = 1; # S1 -> S0 rate
kisc = 0.5; # S1 -> T1 rate
kT = 0.1; # T1 -> S0 rate
# Define model equations (see Munz et al. 2009)
def d(z, t):
S0 = z[0]
S1 = z[1]
T1 = z[2]
dS0_dt = -k01*S0 + k10*S1 + kT*T1
dS1_dt = k01*S0 -(kisc+k10)*S1 + 0*T1
dT1_dt = 0*S0 + kisc*S1 - kT*T1
return [dS0_dt, dS1_dt, dT1_dt]
# Initial conditions
S0_0 = 1. # initial S0 population
S1_0 = 0 # initial S1 population
T1_0 = 0 # initial T1 population
y0 = [S0_0, S1_0, T1_0] # initial conditions
# solve the coupled DEs
soln = odeint(d, y0, t)
S0 = soln[:, 0]
S1 = soln[:, 1]
T1 = soln[:, 2]
# Plot
plt.figure()
plt.plot(t, S0, label='S0 Population')
plt.plot(t, S1, label='S1 Population')
plt.plot(t, T1, label='T1 Population')
plt.xlabel('Time')
plt.ylabel('Normalized Population')
plt.title('Photophysical populations for three state molecules under excitation')
plt.legend(loc='best')
plt.show()
Explanation: Coupled Ordinary Differential Equations:
There are situations where a group of coupled equations dictates behavior. Here, a specific example is presented [1] that provides a numeric solution for the populations of three photophysical and electronic states of a fluorophore: the ground (S0), excited (S1), and triplet (T1) states. The three states are depicted in the figure below:
<div align="center">
<img src="files/images/09-01_three_state_diagram.png" width=40%>
</div>
Transitions between states are depicted by arrows, and the rates at which these transitions occur are denoted $k_{xx}$.
Alternatively, we can represent this system using a set of coupled ordinary differential equations:
$ \frac{d}{dt} \left( \begin{array}{c}
S_0(\bar{r},t) \\
S_1(\bar{r},t) \\
T_1(\bar{r},t) \end{array} \right)
= \left[ \begin{array}{ccc}
-k_{01}(\bar{r},t) & k_{10} & k_T \\
k_{01}(\bar{r},t) & -(k_{ISC} + k_{10}) & 0 \\
0 & k_{ISC} & -k_T \end{array} \right] \left( \begin{array}{c}
S_0(\bar{r},t) \\
S_1(\bar{r},t) \\
T_1(\bar{r},t) \end{array} \right) $
These naturally give rise to:
$[S_0(\bar{r},t) + S_1(\bar{r},t) + T_1(\bar{r},t)] = 1$, or 100% of the population.
Thus, the series of coupled ODEs can represent the population of each state as a function of time, given a starting population and a numerically defined set of rates.
[1] Monitoring Kinetics of Highly Environment Sensitive States of Fluorescent Molecules by Modulated Excitation and Time-Averaged Fluorescence Intensity Recording
Tor Sandén, Gustav Persson, Per Thyberg, Hans Blom, and Jerker Widengren
Analytical Chemistry 2007 79 (9), 3330-3341
DOI: 10.1021/ac0622680
End of explanation |
713 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
^ up
Step1: It appears that the sequence converges, and it converges exactly to the solution of the equation. We also notice that roughly 3 steps of the recursion are needed for each correct decimal digit.
Convergence
It seems that the sequence of approximations converges, but a few computed terms are not enough to be completely sure of convergence. Fortunately, there is a theorem that guarantees convergence for recursive sequences
Step2: The derivative at the solution is approximately -0.44, which is less than 1 in absolute value. This means that the sequence of approximations converges.
Speed of convergence
When we search for the solution of an equation with a recursive sequence, we are naturally interested in how many steps are needed for a given number of decimal digits. This is most easily shown with a plot of the error on a logarithmic scale.
Step3: The error decreases similarly as with bisection. Roughly 3 steps are needed for each correct decimal digit. | Python Code:
g = lambda x: 2**(-x)
xp = 1 # initial approximation
for i in range(15):
xp = g(xp)
print(xp)
print("Razlika med desno in levo stranjo enačbe je", xp-2**(-xp))
Explanation: ^ up: Introduction
Solving equations with fixed-point iteration
With recursive sequences we saw that for a sequence satisfying the recursive formula
$$x_{n+1}= g(x_n)$$
the limit of the sequence $x_n$ is always a solution of the equation
$$x=g(x).$$
Here we assumed that the limit exists at all and that $g$ is a continuous function.
We can turn the statement around. A zero of a function $f(x)$ can be found with a recursive sequence of approximations
$$x_{n+1} = g(x_n),$$
if we transform the equation
$$f(x) = 0$$
into an equivalent equation of the form
$$ x = g(x).$$
Unfortunately, not every function $g$ will do, since we have to make sure that the sequence $x_n$ actually converges.
Example
Solve the equation
$$x=2^{-x}.$$
Solution
The recursive formula practically suggests itself.
$$x_{n+1} = 2^{-x_n}.$$
Of course nothing guarantees that the sequence $x_n$ will actually converge. But, as they say, there is no harm in trying.
End of explanation
import sympy as sym
from IPython.display import display
sym.init_printing(use_latex=True)
x = sym.Symbol('x')
dg = sym.diff(g(x),x)
print("Odvod iteracijske funkcije: ")
display(dg)
print("Odvod g(x) v rešitvi", dg.subs(x,xp).evalf())
Explanation: It appears that the sequence converges, and it converges exactly to the solution of the equation. We also notice that roughly 3 steps of the recursion are needed for each correct decimal digit.
Convergence
It seems that the sequence of approximations converges, but a few computed terms are not enough to be completely sure of convergence. Fortunately, there is a theorem that guarantees convergence for recursive sequences:
Fixed-point iteration convergence theorem
Let $x_n$ be the sequence given by the initial term $x_0$ and the recursive formula $x_{n+1}=g(x_n)$. Let $x_p$ be a solution of the equation $x=g(x)$ and let $|g'(x)|<1$ for all $x\in[x_p-\varepsilon,x_p+\varepsilon]$. If $x_0\in [x_p-\varepsilon,x_p+\varepsilon]$, then the sequence $x_n$ converges and its limit equals
$$\lim_{n\to\infty}x_n=x_p.$$
The theorem tells us that the convergence of the recursive sequence depends on the size of the derivative of the iteration function at the solution of the equation. If
$$|g'(x_p)|<1$$
then, for an initial approximation close enough to the solution, the sequence given by the recursive formula $x_{n+1}=g(x_n)$ converges to the solution.
End of explanation
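To see the other side of the theorem, the same equation can be rewritten as x = -log2(x); there |g'(x_p)| = 1/(x_p ln 2), which is roughly 2.25 > 1 at the solution, so the iteration drifts away even from a very good starting value (a small sketch):
from math import log2
xq = 0.64  # very close to the solution x_p (approximately 0.6412)
for i in range(8):
    if xq <= 0:  # the iterates quickly leave the domain of log2
        break
    xq = -log2(xq)
    print(xq)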
%matplotlib inline
import matplotlib.pyplot as plt
xp = sym.solve(sym.Eq(x,g(x)),x)[0].evalf() # exact solution
n = 30;
xz = [1] # sequence of approximations
for i in range(n-1):
xz.append(g(xz[-1]))
napaka = [x - xp for x in xz] # error of each approximation relative to the exact solution
plt.semilogy(range(n),napaka,'o')
plt.title("Napaka pri računanju rešitve z navadno iteracijo")
Explanation: The derivative at the solution is approximately -0.44, which is less than 1 in absolute value. This means that the sequence of approximations converges.
Speed of convergence
When we search for the solution of an equation with a recursive sequence, we are naturally interested in how many steps are needed for a given number of decimal digits. This is most easily shown with a plot of the error on a logarithmic scale.
End of explanation
import disqus
%reload_ext disqus
%disqus matpy
Explanation: The error decreases similarly as with bisection. Roughly 3 steps are needed for each correct decimal digit.
End of explanation |
714 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Clean the data
First, import packages for data cleaning and read the data
Step2: Combine the train and test set for cleaning
Step3: Convert all ints to floats for XGBoost
Step4: Save lightly prepared data (no encoding)
Step5: Dealing with the NA values in the variables, some of them equal to 0 and some equal to median, based on the txt descriptions
Step6: Transforming Data
Use integers to encode categorical data.
Step7: Encode categorical data
Step8: Export data | Python Code:
from scipy.stats.mstats import mode
import pandas as pd
import numpy as np
import time
from sklearn.preprocessing import LabelEncoder
# Read Data
train = pd.read_csv('data/train.csv')
test = pd.read_csv('data/test.csv')
target = train['SalePrice']
train = train.drop(['SalePrice'],axis=1)
trainlen = train.shape[0]
Explanation: Clean the data
First, import packages for data cleaning and read the data
End of explanation
df1 = train.head()
df2 = test.head()
pd.concat([df1, df2], axis=0, ignore_index=True)
alldata = pd.concat([train, test], axis=0, join='outer', ignore_index=True)
alldata = alldata.drop(['Id','Utilities'], axis=1)
alldata.dtypes
Explanation: Combine the train and test set for cleaning
End of explanation
alldata.ix[:,(alldata.dtypes=='int64') & (alldata.columns != 'MSSubClass')]=alldata.ix[:,(alldata.dtypes=='int64') & (alldata.columns!='MSSubClass')].astype('float64')
alldata.head(20)
Explanation: Convert all ints to floats for XGBoost
End of explanation
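A quick sanity check of the cast above (a sketch): inspect which dtypes remain in the combined data.
alldata.dtypes.value_counts()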
train = alldata.ix[0:trainlen-1, :]
test = alldata.ix[trainlen:alldata.shape[0],:]
test.to_csv('data/test_prepared_light.csv', index=False)
train.to_csv('data/train_prepared_light.csv', index=False)
Explanation: Save lightly prepared data (no encoding)
End of explanation
fMedlist=['LotFrontage']
fArealist=['MasVnrArea','TotalBsmtSF','BsmtFinSF1','BsmtFinSF2','BsmtUnfSF','BsmtFullBath', 'BsmtHalfBath','MasVnrArea','Fireplaces','GarageArea','GarageYrBlt','GarageCars']
for i in fArealist:
alldata.ix[pd.isnull(alldata.ix[:,i]),i] = 0
for i in fMedlist:
alldata.ix[pd.isnull(alldata.ix[:,i]),i] = np.nanmedian(alldata.ix[:,i])
Explanation: Dealing with the NA values in the variables: some are set to 0 and some to the median, based on the data description text file
End of explanation
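Optional check (a sketch): the numeric columns filled above should no longer contain any missing values.
alldata[fArealist + fMedlist].isnull().sum()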
alldata.head(20)
Explanation: Transforming Data
Use integers to encode categorical data.
End of explanation
le = LabelEncoder()
nacount_category = np.array(alldata.columns[((alldata.dtypes=='int64') | (alldata.dtypes=='object')) & (pd.isnull(alldata).sum()>0)])
category = np.array(alldata.columns[((alldata.dtypes=='int64') | (alldata.dtypes=='object'))])
Bsmtset = set(['BsmtQual','BsmtCond','BsmtExposure','BsmtFinType1','BsmtFinType2'])
MasVnrset = set(['MasVnrType'])
Garageset = set(['GarageType','GarageYrBlt','GarageFinish','GarageQual','GarageCond'])
Fireplaceset = set(['FireplaceQu'])
Poolset = set(['PoolQC'])
NAset = set(['Fence','MiscFeature','Alley'])
# Put 0 and null values in the same category
for i in nacount_category:
if i in Bsmtset:
alldata.ix[pd.isnull(alldata.ix[:,i]) & (alldata['TotalBsmtSF']==0), i]='Empty'
alldata.ix[pd.isnull(alldata.ix[:,i]), i] = alldata.ix[:,i].value_counts().index[0]
elif i in MasVnrset:
alldata.ix[pd.isnull(alldata.ix[:,i]) & (alldata['MasVnrArea']==0),i]='Empty'
alldata.ix[pd.isnull(alldata.ix[:,i]),i]=alldata.ix[:,i].value_counts().index[0]
elif i in Garageset:
alldata.ix[pd.isnull(alldata.ix[:,i]) & (alldata['GarageArea']==0),i]='Empty'
alldata.ix[pd.isnull(alldata.ix[:,i]),i]=alldata.ix[:,i].value_counts().index[0]
elif i in Fireplaceset:
alldata.ix[pd.isnull(alldata.ix[:,i]) & (alldata['Fireplaces']==0),i]='Empty'
alldata.ix[pd.isnull(alldata.ix[:,i]),i]=alldata.ix[:,i].value_counts().index[0]
elif i in Poolset:
alldata.ix[pd.isnull(alldata.ix[:,i]) & (alldata['PoolArea']==0),i]='Empty'
alldata.ix[pd.isnull(alldata.ix[:,i]),i]=alldata.ix[:,i].value_counts().index[0]
elif i in NAset:
alldata.ix[pd.isnull(alldata.ix[:,i]),i]='Empty'
else:
alldata.ix[pd.isnull(alldata.ix[:,i]),i]=alldata.ix[:,i].value_counts().index[0]
for i in category:
alldata.ix[:,i]=le.fit_transform(alldata.ix[:,i])
train = alldata.ix[0:trainlen-1, :]
test = alldata.ix[trainlen:alldata.shape[0],:]
alldata.head()
Explanation: Encode categorical data
End of explanation
train.to_csv('data/train_prepared.csv')
test.to_csv('data/test_prepared.csv')
train.head()
target.to_csv('data/train_target.csv', header='SalePrice', index=False)
Explanation: Export data
End of explanation |
715 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification and Logistic Regression
Import the scientific computing and plotting packages:
Step1: The only difference between classification and regression is that in classification the target variable $y$ takes only a small number of discrete values. In this section we focus on the binary classification problem, in which $y$ takes only the two values $0, 1$. $0$ is also called the negative class and $1$ the positive class, and they are sometimes denoted by the symbols $-, +$. Given $x^{(i)}$, the corresponding $y^{(i)}$ is also called the label of that training example.
This section covers the following topics:
Logistic regression
The perceptron learning algorithm
Newton's Method: another algorithm for maximizing $\ell(\theta)$
Step2: As $z \rightarrow \infty$, $g(z) \rightarrow 1$; as $z \rightarrow -\infty$, $g(z) \rightarrow 0$; the range of $g(z)$ is $(0, 1)$. We keep the convention of letting $x_0 = 1$, so that $\theta^Tx = \theta_0 + \sum_{j=1}^n \theta_jx_j$.
Later, when discussing generalized linear models, we will explain where the sigmoid function comes from; for now we simply take it as given. The derivative of the sigmoid function has a very useful property that will be needed in the derivations below:
$$
\begin{split}
g'(z) &= \frac{d}{dz}\frac{1}{1+e^{-z}} \\
&= \frac{1}{(1+e^{-z})^2}e^{-z} \\
&= \frac{1}{(1+e^{-z})} \cdot (1 - \frac{1}{(1+e^{-z})}) \\
&= g(z) \cdot (1-g(z))
\end{split}
$$
In the probabilistic interpretation of linear regression we estimated $\theta$ by maximum likelihood under a set of assumptions. For logistic regression we take the same approach and assume:
$$ P(y=1|x;\theta) = h_\theta(x) $$
$$ P(y=0|x;\theta) = 1 - h_\theta(x) $$
These two assumptions can be written more compactly as:
$$ P(y|x;\theta) = (h_\theta(x))^y(1-h_\theta(x))^{1-y} $$
Assuming further that the $m$ training examples are independent, the likelihood function becomes:
$$
\begin{split}
L(\theta) & = p(y|X; \theta) \\
& = \prod_{i=1}^{m} p(y^{(i)}|x^{(i)}; \theta) \\
& = \prod_{i=1}^{m} (h_\theta(x^{(i)}))^{y^{(i)}}(1-h_\theta(x^{(i)}))^{1-y^{(i)}}
\end{split}
$$
The corresponding log likelihood is:
$$
\begin{split}
\ell(\theta) &= \log L(\theta) \\
&= \sum_{i=1}^m (y^{(i)}\log h(x^{(i)})+(1-y^{(i)})\log(1-h(x^{(i)})))
\end{split}
$$
To maximize the log likelihood we can use gradient ascent, $\theta = \theta + \alpha\nabla_\theta\ell(\theta)$, where the partial derivative is:
$$
\begin{split}
\frac{\partial}{\partial\theta_j}\ell(\theta) &= \sum(y\frac{1}{g(\theta^Tx)} - (1-y)\frac{1}{1-g(\theta^Tx)})\frac{\partial}{\partial\theta_j}g(\theta^Tx) \\
&= \sum(y\frac{1}{g(\theta^Tx)} - (1-y)\frac{1}{1-g(\theta^Tx)})g(\theta^Tx)(1-g(\theta^Tx))\frac{\partial}{\partial\theta_j}\theta^Tx \\
&= \sum(y(1-g(\theta^Tx))-(1-y)g(\theta^Tx))x_j \\
&= \sum(y-h_\theta(x))x_j
\end{split}
$$
For stochastic gradient ascent, which uses a single training example per iteration | Python Code:
import numpy as np
from sklearn import linear_model, datasets
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
Explanation: Classification and Logistic Regression
Import the scientific computing and plotting packages:
End of explanation
x = np.arange(-10., 10., 0.2)
y = 1 / (1 + np.e ** (-x))
plt.plot(x, y)
plt.title(' Logistic Function ')
plt.show()
Explanation: The only difference between classification and regression is that in classification the target variable $y$ takes only a small number of discrete values. In this section we focus on the binary classification problem, in which $y$ takes only the two values $0, 1$. $0$ is also called the negative class and $1$ the positive class, and they are sometimes denoted by the symbols $-, +$. Given $x^{(i)}$, the corresponding $y^{(i)}$ is also called the label of that training example.
This section covers the following topics:
Logistic regression
The perceptron learning algorithm
Newton's Method: Another algorithm for maximizing $\ell(\theta)$
1. Logistic Regression
In logistic regression our hypothesis $h_\theta(x)$ takes the form:
$$ h_\theta(x) = g(\theta^Tx) = \frac{1}{1+e^{-\theta^Tx}}, $$
where
$$ g(z) = \frac{1}{1+e^{-z}} $$
is called the logistic function or sigmoid function; $g(z)$ looks like this:
End of explanation
f = lambda x: x ** 2
f_prime = lambda x: 2 * x
improve_x = lambda x: x - f(x) / f_prime(x)
x = np.arange(0, 3, 0.2)
x0 = 2
tangent0 = lambda x: f_prime(x0) * (x - x0) + f(x0)
x1 = improve_x(x0)
tangent1 = lambda x: f_prime(x1) * (x - x1) + f(x1)
plt.plot(x, f(x), label="y=x^2")
plt.plot(x, np.zeros_like(x), label="x axis")
plt.plot(x, tangent0(x), label="y=4x-4")
plt.plot(x, tangent1(x), label="y=2x-1")
plt.legend(loc="best")
plt.show()
Explanation: As $z \rightarrow \infty$, $g(z) \rightarrow 1$; as $z \rightarrow -\infty$, $g(z) \rightarrow 0$; the range of $g(z)$ is $(0, 1)$. We keep the convention of letting $x_0 = 1$, so that $\theta^Tx = \theta_0 + \sum_{j=1}^n \theta_jx_j$.
Later, when discussing generalized linear models, we will explain where the sigmoid function comes from; for now we simply take it as given. The derivative of the sigmoid function has a very useful property that will be needed in the derivations below:
$$
\begin{split}
g'(z) &= \frac{d}{dz}\frac{1}{1+e^{-z}} \\
&= \frac{1}{(1+e^{-z})^2}e^{-z} \\
&= \frac{1}{(1+e^{-z})} \cdot (1 - \frac{1}{(1+e^{-z})}) \\
&= g(z) \cdot (1-g(z))
\end{split}
$$
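A quick numerical check of this identity (an illustrative sketch, not part of the original notes):
# Compare a finite-difference estimate of g'(z) with g(z)(1 - g(z)).
g = lambda z: 1.0 / (1.0 + np.exp(-z))
z0, eps = 0.7, 1e-6
print((g(z0 + eps) - g(z0 - eps)) / (2 * eps), g(z0) * (1 - g(z0)))  # the two values should agree closely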
In the probabilistic interpretation of linear regression we estimated $\theta$ by maximum likelihood under a set of assumptions. For logistic regression we take the same approach and assume:
$$ P(y=1|x;\theta) = h_\theta(x) $$
$$ P(y=0|x;\theta) = 1 - h_\theta(x) $$
These two assumptions can be written more compactly as:
$$ P(y|x;\theta) = (h_\theta(x))^y(1-h_\theta(x))^{1-y} $$
Assuming further that the $m$ training examples are independent, the likelihood function becomes:
$$
\begin{split}
L(\theta) & = p(y|X; \theta) \\
& = \prod_{i=1}^{m} p(y^{(i)}|x^{(i)}; \theta) \\
& = \prod_{i=1}^{m} (h_\theta(x^{(i)}))^{y^{(i)}}(1-h_\theta(x^{(i)}))^{1-y^{(i)}}
\end{split}
$$
The corresponding log likelihood is:
$$
\begin{split}
\ell(\theta) &= \log L(\theta) \\
&= \sum_{i=1}^m (y^{(i)}\log h(x^{(i)})+(1-y^{(i)})\log(1-h(x^{(i)})))
\end{split}
$$
To maximize the log likelihood we can use gradient ascent, $\theta = \theta + \alpha\nabla_\theta\ell(\theta)$, where the partial derivative is:
$$
\begin{split}
\frac{\partial}{\partial\theta_j}\ell(\theta) &= \sum(y\frac{1}{g(\theta^Tx)} - (1-y)\frac{1}{1-g(\theta^Tx)})\frac{\partial}{\partial\theta_j}g(\theta^Tx) \\
&= \sum(y\frac{1}{g(\theta^Tx)} - (1-y)\frac{1}{1-g(\theta^Tx)})g(\theta^Tx)(1-g(\theta^Tx))\frac{\partial}{\partial\theta_j}\theta^Tx \\
&= \sum(y(1-g(\theta^Tx))-(1-y)g(\theta^Tx))x_j \\
&= \sum(y-h_\theta(x))x_j
\end{split}
$$
For stochastic gradient ascent, which uses only a single training example per iteration:
$$ \theta_j = \theta_j + \alpha(y^{(i)}-h_\theta(x^{(i)}))x_j^{(i)} = \theta_j - \alpha(h_\theta(x^{(i)}) - y^{(i)})x_j^{(i)} $$
Notice that, apart from the hypothesis $h_\theta(x)$ itself being different, the gradient update for logistic regression is essentially the same as the one for linear regression. Generalized linear models will explain this "coincidence".
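As an illustrative sketch (the numbers below are made up, not taken from any dataset), a single stochastic update looks like this in numpy:
theta = np.zeros(3)                      # parameters, with theta_0 acting as the intercept
x_i = np.array([1.0, 0.5, -1.2])         # one training example, x_0 = 1
y_i, alpha = 1, 0.1                      # label and learning rate
h = 1.0 / (1.0 + np.exp(-theta @ x_i))   # h_theta(x_i)
theta = theta + alpha * (y_i - h) * x_i  # the update derived above
print(theta)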
2. The Perceptron Learning Algorithm
In logistic regression the sigmoid function squashes the output into the interval $(0, 1)$, and we interpret that value as the probability of the positive class.
Now suppose we force the output to be exactly $0$ or $1$:
$$ g(z) =\left\{
\begin{aligned}
1 & , z \geq 0 \\
0 & , z < 0
\end{aligned}
\right.
$$
As before, we let $h_\theta(x) = g(\theta^Tx)$ and update according to the rule:
$$ \theta_j = \theta_j + \alpha(y^{(i)} - h_\theta(x^{(i)}))x_j^{(i)} $$
This algorithm is called the perceptron learning algorithm.
In the 1960s the perceptron was viewed as a rough model of a single neuron in the brain. Note, however, that although the perceptron looks very similar to logistic regression, $g(z)$ here cannot be described with probabilistic assumptions, so the parameters cannot be estimated by maximum likelihood. The perceptron is in fact a completely different type of algorithm from linear models; it is the origin of neural network algorithms, and we will return to it later.
3. Newton's Method: Another algorithm for maximizing $\ell(\theta)$
Returning to logistic regression: besides gradient ascent, another way to maximize its log likelihood is Newton's method, introduced here.
Newton's method is primarily a root-finding algorithm. Suppose we have a function $f: \mathbb{R} \rightarrow \mathbb{R}$ and we want to find a value $\theta$ such that $f(\theta)=0$. Newton's method iterates as follows:
$$ \theta = \theta - \frac{f(\theta)}{f'(\theta)} $$
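For a concrete feel of the iteration, here is a small sketch on the toy root-finding problem f(x) = x**2 - 2 (chosen purely for illustration):
fn = lambda x: x**2 - 2
fn_prime = lambda x: 2 * x
x_t = 2.0
for _ in range(5):
    x_t = x_t - fn(x_t) / fn_prime(x_t)  # the Newton update
print(x_t)  # converges quickly to sqrt(2) ~ 1.41421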
End of explanation |
716 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 Google LLC.
Step1: Case Study
Step2: Check Correlation Matrix
Before developing your ML model, you need to select features. To find informative features, check the correlation matrix by running the following cell. Which features are informative?
Step4: alcohol is most highly correlated with quality. Looking for other informative features, notice that volatile acidity correlates with quality but not with alcohol, making it a good second feature. Remember that a correlation matrix is not helpful if predictive signals are encoded in combinations of features.
Validate Input Data against Data Schema
Before processing your data, you should validate the data against a data schema as described in Data and Feature Debugging.
First, define a function that validates data against a schema.
Step5: To define your schema, you need to understand the statistical properties of your dataset. Generate statistics on your dataset by running the following code cell
Step6: Using the statistics generated above, define the data schema in the following code cell. For demonstration purposes, restrict your data schema to the first three data columns. For each data column, enter the
Step7: Solution
Step8: Split and Normalize Data
Split the dataset into data and labels.
Step9: Normalize data using z-score.
Step10: Test Engineered Data
After normalizing your data, you should test your engineered data for errors as described in Data and Feature Debugging. In this section, you will test that engineering data
Step11: Your input data had 6497 examples and 11 feature columns. Test whether your engineered data has the expected number of rows and columns by running the following cell. Confirm that the test fails if you change the values below.
Step12: Test that your engineered data does not contain nulls by running the code below.
Step13: Check Splits for Statistical Equivalence
As described in the Data Debugging guidelines, before developing your model, you should check that your training and validation splits are equally representative. Assuming a training
Step14: The two splits are clearly not equally representative. To make the splits equally representative, you can shuffle the data.
Run the following code cell to shuffle the data, and then recreate the features and labels from the shuffled data.
Step15: Now, confirm that the splits are equally representative by regenerating and comparing the statistics using the previous code cells. You may wonder why the initial splits differed so greatly. It turns out that in the wine dataset, the first 4897 rows contain data on white wines and the next 1599 rows contain data on red wines. When you split your dataset 80
Step17: Linear Model
Following good ML dev practice, let's start with a linear model that uses the most informative feature from the correlation matrix
Step18: For fast prototyping, let's try using a full batch per epoch to update the gradient only once per epoch. Use the full batch by setting batch_size = wineFeatures.shape[0] as indicated by the code comment.
What do you think of the loss curve? Can you improve it? For hints and discussion, see the following text cells.
Step19: Hint
The loss decreases but very slowly. Possible fixes are
Step20: Add Feature to Linear Model
Try adding a feature to the linear model. Since you need to combine the two features into one prediction for regression, you'll also need to add a second layer. Modify the code below to implement the following changes
Step21: Solution
Run the following code to add the second feature and the second layer. The training loss is about 0.59, a small decrease from the previous loss of 0.64.
Step22: Use a Nonlinear Model
Let's try a nonlinear model. Modify the code below to make the following changes
Step23: Solution
Run the following cell to use a relu activation in your first hidden layer. Your loss stays about the same, perhaps declining negligibly to 0.58.
Step24: Optimize Your Model
We have two features with one hidden layer but didn't see an improvement. At this point, it's tempting to use all your features with a high-capacity network. However, you must resist the temptation. Instead, follow the guidance in Model Optimization to improve model performance. For a hint and for a discussion, see the following text sections.
Step25: Hint
You can try to reduce loss by adding features, adding layers, or playing with the hyperparameters. Before adding more features, check the correlation matrix. Don't expect your loss to decrease by much. Sadly, that is a common experience in machine learning!
Solution
Run the following code to
Step26: Check for Implementation Bugs using Reduced Dataset
Your loss isn't decreasing by much. Perhaps your model has an implementation bug. From the Model Debugging guidelines, a quick test for implementation bugs is to obtain a low loss on a reduced dataset of, say, 10 examples. Remember, passing this test does not validate your modeling approach but only checks for basic implementation bugs. In your ML problem, if your model passes this test, then continue debugging your model to train on your full dataset.
In the following code, experiment with the learning rate, batch size, and number of epochs. Can you reach a low loss? Choose hyperparameter values that let you iterate quickly.
Step27: Solution
Run the following code cell to train the model using these hyperparameter values
Step28: Trying a Very Complex Model
Let's go all in and use a very complex model with all the features. For science! And to satisfy ourselves that a simple model is indeed better. Let's use all 11 features with 3 fully-connected relu layers and a final linear layer. The next cell takes a while to run. Skip to the results in the cell after if you like. | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 Google LLC.
End of explanation
# Reset environment for a new run
% reset -f
# Load libraries
from os.path import join # for joining file pathnames
import pandas as pd
import seaborn as sns
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
# Set Pandas display options
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
wineDf = pd.read_csv(
"https://download.mlcc.google.com/mledu-datasets/winequality.csv",
encoding='latin-1')
wineDf.columns = ['fixed acidity','volatile acidity','citric acid',
'residual sugar','chlorides','free sulfur dioxide',
'total sulfur dioxide','density','pH',
'sulphates','alcohol','quality']
wineDf.head()
Explanation: Case Study: Debugging in Regression
In this Colab, you will learn how to debug a regression problem through a case study. You will:
Set up the problem.
Interpret the correlation matrix.
Implement linear and nonlinear models.
Compare and choose between linear and nonlinear models.
Optimize your chosen model.
Debug your chosen model.
Setup
This Colab uses the wine quality dataset<sup>[1]</sup>, which is hosted at UCI. This dataset contains data on the physicochemical properties of wine along with wine quality ratings. The problem is to predict wine quality (0-10) from physicochemical properties.
Please make a copy of this Colab before running it. Click on File, and then click on Save a copy in Drive.
<small>[1] Modeling wine preferences by data mining from physicochemical properties. P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis. Decision Support Systems, Elsevier, 47(4):547-553, 2009.</small>
Load libraries and data by running the next cell. Display the first few rows to verify that the dataset loaded correctly.
End of explanation
corr_wineDf = wineDf.corr()
plt.figure(figsize=(16,10))
sns.heatmap(corr_wineDf, annot=True)
Explanation: Check Correlation Matrix
Before developing your ML model, you need to select features. To find informative features, check the correlation matrix by running the following cell. Which features are informative?
End of explanation
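One quick way to answer this question (a small sketch reusing corr_wineDf from the cell above) is to rank the features by the absolute value of their correlation with quality:
# Rank features by |correlation with quality|; the top entries are the most informative candidates.
print(corr_wineDf['quality'].drop('quality').abs().sort_values(ascending=False).head())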
#@title Define function to validate data
def test_data_schema(input_data, schema):
Tests that the datatypes and ranges of values in the dataset
adhere to expectations.
Args:
input_function: Dataframe containing data to test
schema: Schema which describes the properties of the data.
def test_dtypes():
for column in schema.keys():
assert input_data[column].map(type).eq(
schema[column]['dtype']).all(), (
"Incorrect dtype in column '%s'." % column
)
print('Input dtypes are correct.')
def test_ranges():
for column in schema.keys():
schema_max = schema[column]['range']['max']
schema_min = schema[column]['range']['min']
# Assert that data falls between schema min and max.
assert input_data[column].max() <= schema_max, (
"Maximum value of column '%s' is too low." % column
)
assert input_data[column].min() >= schema_min, (
"Minimum value of column '%s' is too high." % column
)
print('Data falls within specified ranges.')
test_dtypes()
test_ranges()
Explanation: alcohol is most highly correlated with quality. Looking for other informative features, notice that volatile acidity correlates with quality but not with alcohol, making it a good second feature. Remember that a correlation matrix is not helpful if predictive signals are encoded in combinations of features.
Validate Input Data against Data Schema
Before processing your data, you should validate the data against a data schema as described in Data and Feature Debugging.
First, define a function that validates data against a schema.
End of explanation
wineDf.describe()
Explanation: To define your schema, you need to understand the statistical properties of your dataset. Generate statistics on your dataset by running the following code cell:
End of explanation
wine_schema = {
'fixed acidity': {
'range': {
'min': 3.8,
'max': 15.9
},
'dtype': float,
},
'volatile acidity': {
'range': {
'min': , # describe() rounds up this value, be careful
'max':
},
'dtype': ,
},
'citric acid': {
'range': {
'min': ,
'max':
},
'dtype': ,
}
}
print('Validating wine data against data schema...')
test_data_schema(wineDf, wine_schema)
Explanation: Using the statistics generated above, define the data schema in the following code cell. For demonstration purposes, restrict your data schema to the first three data columns. For each data column, enter the:
minimum value
maximum value
data type
As an example, the values for the first column are filled out. After entering the values, run the code cell to confirm that your input data matches the schema.
End of explanation
wine_schema = {
'fixed acidity': {
'range': {
'min': 3.7,
'max': 15.9
},
'dtype': float,
},
'volatile acidity': {
'range': {
'min': 0.08, # minimum value
'max': 1.6 # maximum value
},
'dtype': float, # data type
},
'citric acid': {
'range': {
'min': 0.0, # minimum value
'max': 1.7 # maximum value
},
'dtype': float, # data type
}
}
print('Validating wine data against data schema...')
test_data_schema(wineDf, wine_schema)
Explanation: Solution
End of explanation
wineFeatures = wineDf.copy(deep=True)
wineFeatures.drop(columns='quality',inplace=True)
wineLabels = wineDf['quality'].copy(deep=True)
Explanation: Split and Normalize Data
Split the dataset into data and labels.
End of explanation
def normalizeData(arr):
stdArr = np.std(arr)
meanArr = np.mean(arr)
arr = (arr-meanArr)/stdArr
return arr
for str1 in wineFeatures.columns:
wineFeatures[str1] = normalizeData(wineFeatures[str1])
Explanation: Normalize data using z-score.
End of explanation
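A small sanity check (a sketch, not part of the original exercise): after z-scoring, every column should have a mean of roughly 0 and a population standard deviation of roughly 1.
# Verify the normalization; np.std above uses ddof=0, so compare against the population std.
print(wineFeatures.mean().round(3).head())
print(wineFeatures.std(ddof=0).round(3).head())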
import unittest
def test_input_dim(df, n_rows, n_columns):
assert len(df) == n_rows, "Unexpected number of rows."
assert len(df.columns) == n_columns, "Unexpected number of columns."
print('Engineered data has the expected number of rows and columns.')
def test_nulls(df):
dataNulls = df.isnull().sum().sum()
assert dataNulls == 0, "Nulls in engineered data."
print('Engineered features do not contain nulls.')
Explanation: Test Engineered Data
After normalizing your data, you should test your engineered data for errors as described in Data and Feature Debugging. In this section, you will test that engineering data:
Has the expected number of rows and columns.
Does not have null values.
First, set up the testing functions by running the following code cell:
End of explanation
#@title Test dimensions of engineered data
wine_feature_rows = 6497 #@param
wine_feature_cols = 11 #@param
test_input_dim(wineFeatures,
wine_feature_rows,
wine_feature_cols)
Explanation: Your input data had 6497 examples and 11 feature columns. Test whether your engineered data has the expected number of rows and columns by running the following cell. Confirm that the test fails if you change the values below.
End of explanation
test_nulls(wineFeatures)
Explanation: Test that your engineered data does not contain nulls by running the code below.
End of explanation
splitIdx = wineFeatures.shape[0] * 8 // 10  # integer split point (floor division so iloc gets an int)
wineFeatures.iloc[0:splitIdx,:].describe()
wineFeatures.iloc[splitIdx:,:].describe()
Explanation: Check Splits for Statistical Equivalence
As described in the Data Debugging guidelines, before developing your model, you should check that your training and validation splits are equally representative. Assuming a training:validation split of 80:20, compare the mean and the standard deviation of the splits by running the next two code cells. Note that this comparison is not a rigorous test for statistical equivalence but simply a quick and dirty comparison of the splits.
End of explanation
# Shuffle data
wineDf = wineDf.sample(frac=1).reset_index(drop=True)
# Recreate features and labels
wineFeatures = wineDf.copy(deep=True)
wineFeatures.drop(columns='quality',inplace=True)
wineLabels = wineDf['quality'].copy(deep=True)
Explanation: The two splits are clearly not equally representative. To make the splits equally representative, you can shuffle the data.
Run the following code cell to shuffle the data, and then recreate the features and labels from the shuffled data.
End of explanation
baselineMSE = np.square(wineLabels[0:splitIdx]-np.mean(wineLabels[0:splitIdx]))
baselineMSE = np.sum(baselineMSE)/len(baselineMSE)
print(baselineMSE)
Explanation: Now, confirm that the splits are equally representative by regenerating and comparing the statistics using the previous code cells. You may wonder why the initial splits differed so greatly. It turns out that in the wine dataset, the first 4897 rows contain data on white wines and the next 1599 rows contain data on red wines. When you split your dataset 80:20, then your training dataset contains 5197 examples, which is 94% white wine. The validation dataset is purely red wine.
Ensuring your splits are statistically equivalent is a good development practice. In general, following good development practices will simplify your model debugging. To learn about testing for statistical equivalence, see Equivalence Tests Lakens, D..
Establish a Baseline
For a regression problem, the simplest baseline is to predict the average value. Run the following code to calculate the mean-squared error (MSE) loss on the training split using the average value as a baseline. Your loss is approximately 0.75. Any model should beat this loss to justify its use.
End of explanation
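Equivalently (a one-line sketch): the MSE of always predicting the training mean is just the population variance of the training labels.
print(np.var(wineLabels[0:splitIdx]))  # should match baselineMSE computed above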
def showRegressionResults(trainHistory):
Function to:
* Print final loss.
* Plot loss curves.
Args:
trainHistory: object returned by model.fit
# Print final loss
print("Final training loss: " + str(trainHistory.history['loss'][-1]))
print("Final Validation loss: " + str(trainHistory.history['val_loss'][-1]))
# Plot loss curves
plt.plot(trainHistory.history['loss'])
plt.plot(trainHistory.history['val_loss'])
plt.legend(['Training loss','Validation loss'],loc='best')
plt.title('Loss Curves')
Explanation: Linear Model
Following good ML dev practice, let's start with a linear model that uses the most informative feature from the correlation matrix: alcohol. Even if this model performs badly, we can still use it as a baseline. This model should beat our previous baseline's MSE of 0.75.
First, let's define a function to plot our loss and accuracy curves. The function will also print the final loss and accuracy. Instead of using verbose=1, you can call the function.
End of explanation
model = None
# Choose feature
wineFeaturesSimple = wineFeatures['alcohol']
# Define model
model = keras.Sequential()
model.add(keras.layers.Dense(units=1, activation='linear', input_dim=1))
# Specify the optimizer using the TF API to specify the learning rate
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01),
loss='mse')
# Train the model!
trainHistory = model.fit(wineFeaturesSimple,
wineLabels,
epochs=50,
batch_size=, # set batch size here
validation_split=0.2,
verbose=0)
# Plot
showRegressionResults(trainHistory)
Explanation: For fast prototyping, let's try using a full batch per epoch to update the gradient only once per epoch. Use the full batch by setting batch_size = wineFeatures.shape[0] as indicated by the code comment.
What do you think of the loss curve? Can you improve it? For hints and discussion, see the following text cells.
End of explanation
model = None
# Choose feature
wineFeaturesSimple = wineFeatures['alcohol']
# Define model
model = keras.Sequential()
model.add(keras.layers.Dense(units=1, activation='linear', input_dim=1))
# Specify the optimizer using the TF API to specify the learning rate
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01),
loss='mse')
# Train the model!
trainHistory = model.fit(wineFeaturesSimple,
wineLabels,
epochs=20,
batch_size=100, # set batch size here
validation_split=0.2,
verbose=0)
# Plot
showRegressionResults(trainHistory)
Explanation: Hint
The loss decreases but very slowly. Possible fixes are:
Increase number of epochs.
Increase learning rate.
Decrease batch size. A lower batch size can result in larger decrease in loss per epoch, under the assumption that the smaller batches stay representative of the overall data distribution.
Play with these three parameters in the code above to decrease the loss.
Solution
Run the following code cell to train the model using a reduced batch size of 100. Reducing the batch size leads to a greater decrease in loss per epoch. The minimum achievable loss is about 0.64. This is a significant increase over our baseline of 0.75.
End of explanation
model = None
# Select features
wineFeaturesSimple = wineFeatures[['alcohol', '...']] # add 'volatile acidity'
# Define model
model = keras.Sequential()
model.add(keras.layers.Dense(wineFeaturesSimple.shape[1],
input_dim=wineFeaturesSimple.shape[1],
activation='linear'))
model.add(...) # add second layer
# Compile
model.compile(optimizer=tf.optimizers.Adam(learning_rate=), loss='mse')
# Train
trainHistory = model.fit(wineFeaturesSimple,
wineLabels,
epochs=,
batch_size=,
validation_split=0.2,
verbose=0)
# Plot results
showRegressionResults(trainHistory)
Explanation: Add Feature to Linear Model
Try adding a feature to the linear model. Since you need to combine the two features into one prediction for regression, you'll also need to add a second layer. Modify the code below to implement the following changes:
Add 'volatile acidity' to the features in wineFeaturesSimple.
Add a second linear layer with 1 unit.
Experiment with learning rate, epochs, and batch_size to try to reduce loss.
What happens to your loss?
End of explanation
model = None
# Select features
wineFeaturesSimple = wineFeatures[['alcohol', 'volatile acidity']] # add second feature
# Define model
model = keras.Sequential()
model.add(keras.layers.Dense(wineFeaturesSimple.shape[1],
input_dim=wineFeaturesSimple.shape[1],
activation='linear'))
model.add(keras.layers.Dense(1, activation='linear')) # add second layer
# Compile
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss='mse')
# Train
trainHistory = model.fit(wineFeaturesSimple,
wineLabels,
epochs=20,
batch_size=100,
validation_split=0.2,
verbose=0)
# Plot results
showRegressionResults(trainHistory)
Explanation: Solution
Run the following code to add the second feature and the second layer. The training loss is about 0.59, a small decrease from the previous loss of 0.64.
End of explanation
model = None
# Define
model = keras.Sequential()
model.add(keras.layers.Dense(wineFeaturesSimple.shape[1],
input_dim=wineFeaturesSimple.shape[1],
activation=))
model.add(keras.layers.Dense(1, activation='linear'))
# Compile
model.compile(optimizer=tf.optimizers.Adam(), loss='mse')
# Fit
model.fit(wineFeaturesSimple,
wineLabels,
epochs=,
batch_size=,
validation_split=0.2,
verbose=0)
# Plot results
showRegressionResults(trainHistory)
Explanation: Use a Nonlinear Model
Let's try a nonlinear model. Modify the code below to make the following changes:
Change the first layer to use relu. (Output layer stays linear since this is a regression problem.)
As usual, specify the learning rate, epochs, and batch_size.
Run the cell. Does the loss increase, decrease, or stay the same?
End of explanation
model = None
# Define
model = keras.Sequential()
model.add(keras.layers.Dense(wineFeaturesSimple.shape[1],
input_dim=wineFeaturesSimple.shape[1],
activation='relu'))
model.add(keras.layers.Dense(1, activation='linear'))
# Compile
model.compile(optimizer=tf.optimizers.Adam(), loss='mse')
# Fit
model.fit(wineFeaturesSimple,
wineLabels,
epochs=20,
batch_size=100,
validation_split=0.2,
verbose=0)
# Plot results
showRegressionResults(trainHistory)
Explanation: Solution
Run the following cell to use a relu activation in your first hidden layer. Your loss stays about the same, perhaps declining negligibly to 0.58.
End of explanation
# Choose features
wineFeaturesSimple = wineFeatures[['alcohol', 'volatile acidity']] # add features
# Define
model = None
model = keras.Sequential()
model.add(keras.layers.Dense(wineFeaturesSimple.shape[1],
activation='relu',
input_dim=wineFeaturesSimple.shape[1]))
# Add more layers here
model.add(keras.layers.Dense(1,activation='linear'))
# Compile
model.compile(optimizer=tf.optimizers.Adam(), loss='mse')
# Train
trainHistory = model.fit(wineFeaturesSimple,
wineLabels,
epochs=,
batch_size=,
validation_split=0.2,
verbose=0)
# Plot results
showRegressionResults(trainHistory)
Explanation: Optimize Your Model
We have two features with one hidden layer but didn't see an improvement. At this point, it's tempting to use all your features with a high-capacity network. However, you must resist the temptation. Instead, follow the guidance in Model Optimization to improve model performance. For a hint and for a discussion, see the following text sections.
End of explanation
# Choose features
wineFeaturesSimple = wineFeatures[['alcohol','volatile acidity','chlorides','density']]
# Define
model = None
model = keras.Sequential()
model.add(keras.layers.Dense(wineFeaturesSimple.shape[1],
activation='relu',
input_dim=wineFeaturesSimple.shape[1]))
# Add more layers here
model.add(keras.layers.Dense(1,activation='linear'))
# Compile
model.compile(optimizer=tf.optimizers.Adam(), loss='mse')
# Train
trainHistory = model.fit(wineFeaturesSimple,
wineLabels,
epochs=200,
batch_size=100,
validation_split=0.2,
verbose=0)
# Plot results
showRegressionResults(trainHistory)
Explanation: Hint
You can try to reduce loss by adding features, adding layers, or playing with the hyperparameters. Before adding more features, check the correlation matrix. Don't expect your loss to decrease by much. Sadly, that is a common experience in machine learning!
Solution
Run the following code to:
Add the features chlorides and density.
Set training epochs to 100.
Set batch size to 100.
Your loss reduces to about 0.56. That's a minor improvement over the previous loss of 0.58. It seems that adding more features or capacity isn't improving your model by much. Perhaps your model has a bug? In the next section, you will run a sanity check on your model.
End of explanation
# Choose 10 examples
wineFeaturesSmall = wineFeatures[0:10]
wineLabelsSmall = wineLabels[0:10]
# Define model
model = None
model = keras.Sequential()
model.add(keras.layers.Dense(wineFeaturesSmall.shape[1],
activation='relu',
input_dim=wineFeaturesSmall.shape[1]))
model.add(keras.layers.Dense(wineFeaturesSmall.shape[1], activation='relu'))
model.add(keras.layers.Dense(1, activation='linear'))
# Compile
model.compile(optimizer=tf.optimizers.Adam(), loss='mse') # set LR
# Train
trainHistory = model.fit(wineFeaturesSmall,
wineLabelsSmall,
epochs=,
batch_size=,
verbose=0)
# Plot results
print("Final training loss: " + str(trainHistory.history['loss'][-1]))
plt.plot(trainHistory.history['loss'])
Explanation: Check for Implementation Bugs using Reduced Dataset
Your loss isn't decreasing by much. Perhaps your model has an implementation bug. From the Model Debugging guidelines, a quick test for implementation bugs is to obtain a low loss on a reduced dataset of, say, 10 examples. Remember, passing this test does not validate your modeling approach but only checks for basic implementation bugs. In your ML problem, if your model passes this test, then continue debugging your model to train on your full dataset.
In the following code, experiment with the learning rate, batch size, and number of epochs. Can you reach a low loss? Choose hyperparameter values that let you iterate quickly.
End of explanation
# Choose 10 examples
wineFeaturesSmall = wineFeatures[0:10]
wineLabelsSmall = wineLabels[0:10]
# Define model
model = None
model = keras.Sequential()
model.add(keras.layers.Dense(wineFeaturesSmall.shape[1], activation='relu',
input_dim=wineFeaturesSmall.shape[1]))
model.add(keras.layers.Dense(wineFeaturesSmall.shape[1], activation='relu'))
model.add(keras.layers.Dense(1, activation='linear'))
# Compile
model.compile(optimizer=tf.optimizers.Adam(0.01), loss='mse') # set LR
# Train
trainHistory = model.fit(wineFeaturesSmall,
wineLabelsSmall,
epochs=200,
batch_size=10,
verbose=0)
# Plot results
print("Final training loss: " + str(trainHistory.history['loss'][-1]))
plt.plot(trainHistory.history['loss'])
Explanation: Solution
Run the following code cell to train the model using these hyperparameter values:
learning rate = 0.01
epochs = 200
batch size = 10
You get a low loss on your reduced dataset. This result means your model is probably solid and your previous results are as good as they'll get.
End of explanation
model = None
# Define
model = keras.Sequential()
model.add(keras.layers.Dense(wineFeatures.shape[1], activation='relu',
input_dim=wineFeatures.shape[1]))
model.add(keras.layers.Dense(wineFeatures.shape[1], activation='relu'))
model.add(keras.layers.Dense(wineFeatures.shape[1], activation='relu'))
model.add(keras.layers.Dense(1,activation='linear'))
# Compile
model.compile(optimizer=tf.optimizers.Adam(), loss='mse')
# Train the model!
trainHistory = model.fit(wineFeatures, wineLabels, epochs=100, batch_size=100,
verbose=1, validation_split = 0.2)
# Plot results
showRegressionResults(trainHistory)
plt.ylim(0.4,1)
Explanation: Trying a Very Complex Model
Let's go all in and use a very complex model with all the features. For science! And to satisfy ourselves that a simple model is indeed better. Let's use all 11 features with 3 fully-connected relu layers and a final linear layer. The next cell takes a while to run. Skip to the results in the cell after if you like.
End of explanation |
717 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Previous
1.17 Extracting a Subset of a Dictionary
Problem
You want to make a dictionary that is a subset of another dictionary.
Solution
The easiest way to do this is with a dictionary comprehension. For example:
Step1: Discussion
Most of what a dictionary comprehension can do can also be accomplished by creating a sequence of tuples and passing it to the dict() function. For example:
Step2: However, the dictionary comprehension is a bit clearer and actually runs quite a bit faster (in this example, roughly twice as fast as the dict() version in a quick test).
Sometimes there are multiple ways of accomplishing the same thing. For instance, the second example program could also be rewritten like this: | Python Code:
prices = {
'ACME': 45.23,
'AAPL': 612.78,
'IBM': 205.55,
'HPQ': 37.20,
'FB': 10.75
}
# Make a dictionary of all prices over 200
p1 = {key: value for key, value in prices.items() if value > 200}
p1
# Make a dictionary of tech stocks
tech_names = {'AAPL', 'IBM', 'HPQ', 'MSFT'}
p2 = {key: value for key, value in prices.items() if key in tech_names}
p2
Explanation: Previous
1.17 Extracting a Subset of a Dictionary
Problem
You want to make a dictionary that is a subset of another dictionary.
Solution
The easiest way to do this is with a dictionary comprehension. For example:
End of explanation
p1 = dict((key, value) for key, value in prices.items() if value > 200)
p1
Explanation: Discussion
Most of what a dictionary comprehension can do can also be accomplished by creating a sequence of tuples and passing it to the dict() function. For example:
End of explanation
# Make a dictionary of tech stocks
tech_names = {'AAPL', 'IBM', 'HPQ', 'MSFT'}
p2 = {key:prices[key] for key in prices.keys() & tech_names}
p2
Explanation: However, the dictionary comprehension is a bit clearer and actually runs quite a bit faster (in this example, roughly twice as fast as the dict() version in a quick test).
Sometimes there are multiple ways of accomplishing the same thing. For instance, the second example program could also be rewritten like this:
End of explanation |
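If you want to verify the speed claim on your own machine, a rough timing sketch (results will vary by machine and Python version) looks like this:
import timeit
t_comp = timeit.timeit(lambda: {key: value for key, value in prices.items() if value > 200}, number=100000)
t_dict = timeit.timeit(lambda: dict((key, value) for key, value in prices.items() if value > 200), number=100000)
print(t_comp, t_dict)  # the comprehension is usually noticeably faster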
718 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step3: Fast Style Transfer for Arbitrary Styles
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step4: Let's get some images to work with.
Step5: Import the TF Hub module
Step6: The signature of the Hub module used for image stylization is:
outputs = hub_module(content_image, style_image)
stylized_image = outputs[0]
The content_image, style_image, and stylized_image above are 4-D tensors of shape [batch_size, image_height, image_width, 3].
In the current example we provide only a single image, so the batch dimension is 1, but the same module can also process several images at the same time.
The input and output image values should be in the range [0, 1].
The shapes of the content and style images do not need to match; the output image has the same shape as the content image.
Demonstrating image stylization
Step7: Try it on more images | Python Code:
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import functools
import os
from matplotlib import gridspec
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("TF Version: ", tf.__version__)
print("TF Hub version: ", hub.__version__)
print("Eager mode enabled: ", tf.executing_eagerly())
print("GPU available: ", tf.config.list_physical_devices('GPU'))
# @title Define image loading and visualization functions { display-mode: "form" }
def crop_center(image):
Returns a cropped square image.
shape = image.shape
new_shape = min(shape[1], shape[2])
offset_y = max(shape[1] - shape[2], 0) // 2
offset_x = max(shape[2] - shape[1], 0) // 2
image = tf.image.crop_to_bounding_box(
image, offset_y, offset_x, new_shape, new_shape)
return image
@functools.lru_cache(maxsize=None)
def load_image(image_url, image_size=(256, 256), preserve_aspect_ratio=True):
Loads and preprocesses images.
# Cache image file locally.
image_path = tf.keras.utils.get_file(os.path.basename(image_url)[-128:], image_url)
# Load and convert to float32 numpy array, add batch dimension, and normalize to range [0, 1].
img = tf.io.decode_image(
tf.io.read_file(image_path),
channels=3, dtype=tf.float32)[tf.newaxis, ...]
img = crop_center(img)
img = tf.image.resize(img, image_size, preserve_aspect_ratio=True)
return img
def show_n(images, titles=('',)):
n = len(images)
image_sizes = [image.shape[1] for image in images]
w = (image_sizes[0] * 6) // 320
plt.figure(figsize=(w * n, w))
gs = gridspec.GridSpec(1, n, width_ratios=image_sizes)
for i in range(n):
plt.subplot(gs[i])
plt.imshow(images[i][0], aspect='equal')
plt.axis('off')
plt.title(titles[i] if len(titles) > i else '')
plt.show()
Explanation: Fast Style Transfer for Arbitrary Styles
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf2_arbitrary_image_stylization"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/tf2_arbitrary_image_stylization.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a> </td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/tf2_arbitrary_image_stylization.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a></td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/hub/tutorials/tf2_arbitrary_image_stylization.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a> </td>
<td><a href="https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">TF Hub モデルを見る</a></td>
</table>
Based on the model code in magenta and the following publication:
Exploring the structure of a real-time, arbitrary neural artistic stylization network. Golnaz Ghiasi, Honglak Lee, Manjunath Kudlur, Vincent Dumoulin, Jonathon Shlens, Proceedings of the British Machine Vision Conference (BMVC), 2017.
Setup
Let's start by importing TF2 and all relevant dependencies.
End of explanation
# @title Load example images { display-mode: "form" }
content_image_url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/f/fd/Golden_Gate_Bridge_from_Battery_Spencer.jpg/640px-Golden_Gate_Bridge_from_Battery_Spencer.jpg' # @param {type:"string"}
style_image_url = 'https://upload.wikimedia.org/wikipedia/commons/0/0a/The_Great_Wave_off_Kanagawa.jpg' # @param {type:"string"}
output_image_size = 384 # @param {type:"integer"}
# The content image size can be arbitrary.
content_img_size = (output_image_size, output_image_size)
# The style prediction model was trained with image size 256 and it's the
# recommended image size for the style image (though, other sizes work as
# well but will lead to different results).
style_img_size = (256, 256) # Recommended to keep it at 256.
content_image = load_image(content_image_url, content_img_size)
style_image = load_image(style_image_url, style_img_size)
style_image = tf.nn.avg_pool(style_image, ksize=[3,3], strides=[1,1], padding='SAME')
show_n([content_image, style_image], ['Content image', 'Style image'])
Explanation: Let's get some images to work with.
End of explanation
# Load TF Hub module.
hub_handle = 'https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2'
hub_module = hub.load(hub_handle)
Explanation: Import the TF Hub module
End of explanation
# Stylize content image with given style image.
# This is pretty fast within a few milliseconds on a GPU.
outputs = hub_module(tf.constant(content_image), tf.constant(style_image))
stylized_image = outputs[0]
# Visualize input images and the generated stylized image.
show_n([content_image, style_image, stylized_image], titles=['Original content image', 'Style image', 'Stylized image'])
Explanation: The signature of the Hub module used for image stylization is:
outputs = hub_module(content_image, style_image)
stylized_image = outputs[0]
The content_image, style_image, and stylized_image above are 4-D tensors of shape [batch_size, image_height, image_width, 3].
In the current example we provide only a single image, so the batch dimension is 1, but the same module can also process several images at the same time.
The input and output image values should be in the range [0, 1].
The shapes of the content and style images do not need to match; the output image has the same shape as the content image.
Demonstrating image stylization
End of explanation
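As a small sketch of the batch processing mentioned above (assuming the module accepts matching batch sizes; it reuses content_image, style_image and hub_module from the earlier cells):
# Stack two content/style images along the batch dimension and stylize both in a single call.
content_batch = tf.concat([content_image, content_image], axis=0)  # shape [2, H, W, 3]
style_batch = tf.concat([style_image, style_image], axis=0)
stylized_batch = hub_module(tf.constant(content_batch), tf.constant(style_batch))[0]
print(stylized_batch.shape)  # expected: [2, H, W, 3], matching the content images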
# @title To Run: Load more images { display-mode: "form" }
content_urls = dict(
sea_turtle='https://upload.wikimedia.org/wikipedia/commons/d/d7/Green_Sea_Turtle_grazing_seagrass.jpg',
tuebingen='https://upload.wikimedia.org/wikipedia/commons/0/00/Tuebingen_Neckarfront.jpg',
grace_hopper='https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg',
)
style_urls = dict(
kanagawa_great_wave='https://upload.wikimedia.org/wikipedia/commons/0/0a/The_Great_Wave_off_Kanagawa.jpg',
kandinsky_composition_7='https://upload.wikimedia.org/wikipedia/commons/b/b4/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg',
hubble_pillars_of_creation='https://upload.wikimedia.org/wikipedia/commons/6/68/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg',
van_gogh_starry_night='https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg',
turner_nantes='https://upload.wikimedia.org/wikipedia/commons/b/b7/JMW_Turner_-_Nantes_from_the_Ile_Feydeau.jpg',
munch_scream='https://upload.wikimedia.org/wikipedia/commons/c/c5/Edvard_Munch%2C_1893%2C_The_Scream%2C_oil%2C_tempera_and_pastel_on_cardboard%2C_91_x_73_cm%2C_National_Gallery_of_Norway.jpg',
picasso_demoiselles_avignon='https://upload.wikimedia.org/wikipedia/en/4/4c/Les_Demoiselles_d%27Avignon.jpg',
picasso_violin='https://upload.wikimedia.org/wikipedia/en/3/3c/Pablo_Picasso%2C_1911-12%2C_Violon_%28Violin%29%2C_oil_on_canvas%2C_Kr%C3%B6ller-M%C3%BCller_Museum%2C_Otterlo%2C_Netherlands.jpg',
picasso_bottle_of_rum='https://upload.wikimedia.org/wikipedia/en/7/7f/Pablo_Picasso%2C_1911%2C_Still_Life_with_a_Bottle_of_Rum%2C_oil_on_canvas%2C_61.3_x_50.5_cm%2C_Metropolitan_Museum_of_Art%2C_New_York.jpg',
fire='https://upload.wikimedia.org/wikipedia/commons/3/36/Large_bonfire.jpg',
derkovits_woman_head='https://upload.wikimedia.org/wikipedia/commons/0/0d/Derkovits_Gyula_Woman_head_1922.jpg',
amadeo_style_life='https://upload.wikimedia.org/wikipedia/commons/8/8e/Untitled_%28Still_life%29_%281913%29_-_Amadeo_Souza-Cardoso_%281887-1918%29_%2817385824283%29.jpg',
derkovtis_talig='https://upload.wikimedia.org/wikipedia/commons/3/37/Derkovits_Gyula_Talig%C3%A1s_1920.jpg',
amadeo_cardoso='https://upload.wikimedia.org/wikipedia/commons/7/7d/Amadeo_de_Souza-Cardoso%2C_1915_-_Landscape_with_black_figure.jpg'
)
content_image_size = 384
style_image_size = 256
content_images = {k: load_image(v, (content_image_size, content_image_size)) for k, v in content_urls.items()}
style_images = {k: load_image(v, (style_image_size, style_image_size)) for k, v in style_urls.items()}
style_images = {k: tf.nn.avg_pool(style_image, ksize=[3,3], strides=[1,1], padding='SAME') for k, style_image in style_images.items()}
#@title Specify the main content image and the style you want to use. { display-mode: "form" }
content_name = 'sea_turtle' # @param ['sea_turtle', 'tuebingen', 'grace_hopper']
style_name = 'munch_scream' # @param ['kanagawa_great_wave', 'kandinsky_composition_7', 'hubble_pillars_of_creation', 'van_gogh_starry_night', 'turner_nantes', 'munch_scream', 'picasso_demoiselles_avignon', 'picasso_violin', 'picasso_bottle_of_rum', 'fire', 'derkovits_woman_head', 'amadeo_style_life', 'derkovtis_talig', 'amadeo_cardoso']
stylized_image = hub_module(tf.constant(content_images[content_name]),
tf.constant(style_images[style_name]))[0]
show_n([content_images[content_name], style_images[style_name], stylized_image],
titles=['Original content image', 'Style image', 'Stylized image'])
Explanation: Try it on more images
End of explanation |
719 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sparse GP Regression
14th January 2014 James Hensman
29th September 2014 Neil Lawrence (added sub-titles, notes and some references).
This example shows the variational compression effect of so-called 'sparse' Gaussian processes. In particular we show how using the variational free energy framework of Titsias, 2009 we can compress a Gaussian process fit. First we set up the notebook with a fixed random seed, and import GPy.
Step1: Sample Function
Now we'll sample a Gaussian process regression problem directly from a Gaussian process prior. We'll use an exponentiated quadratic covariance function with a lengthscale and variance of 1 and sample 50 equally spaced points.
Step2: Full Gaussian Process Fit
Now we use GPy to optimize the parameters of a Gaussian process given the sampled data. Here, there are no approximations, we simply fit the full Gaussian process.
Step3: A Poor `Sparse' GP Fit
Now we construct a sparse Gaussian process. This model uses the inducing variable approximation and initialises the inducing variables in two 'clumps'. Our initial fit uses the correct covariance function parameters, but a badly placed set of inducing points.
Step4: Notice how the fit is reasonable where there are inducing points, but bad elsewhere.
Optimizing Covariance Parameters
Next, we will try and find the optimal covariance function parameters, given that the inducing inputs are held in their current location.
Step5: The poor location of the inducing inputs causes the model to 'underfit' the data. The lengthscale is much longer than the full GP, and the noise variance is larger. This is because in this case the Kullback Leibler term in the objective free energy is dominating, and requires a larger lengthscale to improve the quality of the approximation. This is due to the poor location of the inducing inputs.
Optimizing Inducing Inputs
Firstly we try optimizing the location of the inducing inputs to fix the problem; however, we still get a larger lengthscale than the Gaussian process we sampled from (or the full GP fit we did at the beginning).
Step6: The inducing points spread out to cover the data space, but the fit isn't quite there. We can try increasing the number of the inducing points.
Train with More Inducing Points
Now we try 12 inducing points, rather than the original six. We then compare with the full Gaussian process likelihood. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import GPy
import numpy as np
np.random.seed(101)
Explanation: Sparse GP Regression
14th January 2014 James Hensman
29th September 2014 Neil Lawrence (added sub-titles, notes and some references).
This example shows the variational compression effect of so-called 'sparse' Gaussian processes. In particular we show how using the variational free energy framework of Titsias, 2009 we can compress a Gaussian process fit. First we set up the notebook with a fixed random seed, and import GPy.
End of explanation
N = 50
noise_var = 0.05
X = np.linspace(0,10,50)[:,None]
k = GPy.kern.RBF(1)
y = np.random.multivariate_normal(np.zeros(N),k.K(X)+np.eye(N)*np.sqrt(noise_var)).reshape(-1,1)
Explanation: Sample Function
Now we'll sample a Gaussian process regression problem directly from a Gaussian process prior. We'll use an exponentiated quadratic covariance function with a lengthscale and variance of 1 and sample 50 equally spaced points.
End of explanation
m_full = GPy.models.GPRegression(X,y)
m_full.optimize('bfgs')
m_full.plot()
print(m_full)
Explanation: Full Gaussian Process Fit
Now we use GPy to optimize the parameters of a Gaussian process given the sampled data. Here, there are no approximations, we simply fit the full Gaussian process.
End of explanation
Z = np.hstack((np.linspace(2.5,4.,3),np.linspace(7,8.5,3)))[:,None]
m = GPy.models.SparseGPRegression(X,y,Z=Z)
m.likelihood.variance = noise_var
m.plot()
print(m)
Explanation: A Poor `Sparse' GP Fit
Now we construct a sparse Gaussian process. This model uses the inducing variable approximation and initialises the inducing variables in two 'clumps'. Our initial fit uses the correct covariance function parameters, but a badly placed set of inducing points.
End of explanation
m.inducing_inputs.fix()
m.optimize('bfgs')
m.plot()
print(m)
Explanation: Notice how the fit is reasonable where there are inducing points, but bad elsewhere.
Optimizing Covariance Parameters
Next, we will try and find the optimal covariance function parameters, given that the inducing inputs are held in their current location.
End of explanation
m.randomize()
m.Z.unconstrain()
m.optimize('bfgs')
m.plot()
Explanation: The poor location of the inducing inputs causes the model to 'underfit' the data. The lengthscale is much longer than the full GP, and the noise variance is larger. This is because in this case the Kullback Leibler term in the objective free energy is dominating, and requires a larger lengthscale to improve the quality of the approximation. This is due to the poor location of the inducing inputs.
Optimizing Inducing Inputs
Firstly we try optimizing the location of the inducing inputs to fix the problem; however, we still get a larger lengthscale than the Gaussian process we sampled from (or the full GP fit we did at the beginning).
End of explanation
Z = np.random.rand(12,1)*12
m = GPy.models.SparseGPRegression(X,y,Z=Z)
m.optimize('bfgs')
m.plot()
m_full.plot()
print(m.log_likelihood(), m_full.log_likelihood())
Explanation: The inducing points spread out to cover the data space, but the fit isn't quite there. We can try increasing the number of the inducing points.
Train with More Inducing Points
Now we try 12 inducing points, rather than the original six. We then compare with the full Gaussian process likelihood.
End of explanation |
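Because the sparse model optimises a variational bound, a rough way to gauge the cost of the compression (not a rigorous comparison, since the two models have different fitted hyperparameters) is the gap between the two numbers printed in the previous cell:
print(m_full.log_likelihood() - m.log_likelihood())  # smaller gap = tighter approximation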
720 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Coding in Python3
So now that PmagPy has made the conversion to Python3, for at least a short time the command line programs will be supported in both Python2 and Python3 using the library future, which can be installed with your favorite package manager (Canopy, Anaconda) or from the command line with pip install future. This is not true for the GUIs, however, which due to their dependency on the wxpython library must be in one language or the other. For the sake of future proofing the library, all GUI related code needs to be in Python3, as many of the scientific libraries (ipython, matplotlib, pandas) required by PmagPy will soon be dropping support for Python2. A full list of libraries dropping support for Python2 by 2020 can be found here.
Python3 vs. Python2
There are a number of differences between the two programming languages, which while not completely unrelated are disparate. This guide will go over those changes most relevant to PmagPy development as follows
Step1: Relative Imports
<a id='import_explanation'></a>
http
Step2: Exception Raising/Catching
<a id='exception_explanation'></a>
This is simply a syntax change in the raising and catching syntax.
Step3: Strings
<a id='string_explanation'></a>
One of the most unnoticed and important changes between Python2 and Python3 is the difference in Strings and how they are encoded. Def
Step4: Generators vs. Lists vs. Tuples
<a id='generator_list_explanation'></a>
This is a pain in the rear of a change. As the Python vision is to be as explicit as possible and make objects for everything, Python3 is extremely explicit about which objects are which, and there are a lot more object types. For instance, in Python2 range(4) returns a list [0,1,2,3], whereas in Python3 range(4) returns a range object, which is iterable and descended from the generator class, but does not have the same methods as a list (so you cannot append to it) and has more methods than the generator class.
Step5: Explanation of the difference between Generators, Lists, and Tuples
Note
Step6: input vs. raw_input
<a id='input_explanation'></a>
Step8: Division
<a id='division_explanation'></a>
This is a rather subtle change to Python and can often go unnoticed as it doesn't raise an error. The main change is that in Python2 int/int = int every time, however in python3 int/int = int_or_float. Here are some various examples of both division (/) and whole number division (//). | Python Code:
#python2 syntax, now throws an error
print "hello world"
#python3 syntax, this also works in python2 (2.5+) though in python3 this is the only option
print("hello world")
#documentation on the python3 print function
help(print)
Explanation: Coding in Python3
So now that PmagPy has made the conversion to Python3, for at least a short time the command line programs will be supported in both Python2 and Python3 using the library future, which can be installed with your favorite package manager (Canopy, Anaconda) or from the command line with pip install future. This is not true for the GUIs, however, which due to their dependency on the wxpython library must be in one language or the other. For the sake of future proofing the library, all GUI related code needs to be in Python3, as many of the scientific libraries (ipython, matplotlib, pandas) required by PmagPy will soon be dropping support for Python2. A full list of libraries dropping support for Python2 by 2020 can be found here.
Python3 vs. Python2
There are a number of differences between the two programming languages, which while not completely unrelated are disparate. This guide will go over those changes most relevant to PmagPy development as follows:
Print function vs print statement
Relative imports
Exception raising/catching
Strings
Generator vs. List vs. Iterable Objects(range, map, filter, {}.keys(), {}.values, {}.items())
input vs. raw_input
Division
This is by no means a full list of the differences between the languages and a more comprehensive list can be found here. The goal of this list is to be concise and simple for those developing PmagPy rather than thorough.
Note: if you already have Python2 code you would like to see incorporated into the GUI see Converting Python2 Code
Print Function vs. Print Statement
<a id='print_explanation'></a>
This is the most simple of the differences between Python2 and Python3 and the most commonly encountered. Simply the print statement in Python3 is only a function not a statement like in Python2. This means that there is no special syntax for print and it must be called as a function would. This also means that there are key word arguments for print to allow better manipulation of text.
End of explanation
# relative import in Python 2 only:
import submodule2
# relative import in Python 2 and 3:
from . import submodule2
# However, absolute imports are (in my opinion) easier and simpler.
# Absolute imports mean you specify the entire package name, i.e.:
from my_package import submodule2
# or
import my_package.submodule2 as mod2
# so feel free to stick to using them, instead
Explanation: Relative Imports
<a id='import_explanation'></a>
http://python-future.org/compatible_idioms.html#imports-relative-to-a-package
End of explanation
#Python2 raising an error
raise TypeError, "Expected a diblock was given type(%s)"%str(type("")) #gives SyntaxError no the TypeError we wanted
#Python3 raising an error
raise TypeError("Expected a diblock was given type(%s)"%str(type("")))#gives the appropriate TypeError
#Python2 catching an error
try:
raise RuntimeError; print("obviously not caught")
except RuntimeError, err:
print(err, "caught the error: note there's no message for the error")
#Python3 catching an error
try:
raise RuntimeError; print("obviously not caught")
except RuntimeError as err:
print(err, "caught the error: note there's no message for the error")
Explanation: Exception Raising/Catching
<a id='exception_explanation'></a>
This is simply a syntax change in the raising and catching syntax.
End of explanation
#Some Python3 examples of Unicode vs. Bytes
print('strings are now utf-8 \u03BCnico\u0394é!', type(''))
print(b'bytes are a thing now too and when turned into a string keep this b in front', type(b' bytes for storing data'))
Explanation: Strings
<a id='string_explanation'></a>
One of the most unnoticed and important changes between Python2 and Python3 is the difference in Strings and how they are encoded. Def: encoding - the manner in which the bits are organized to represent a given piece of information to the computer, in this case string characters. Python2 used ASCII strings by default and had a class Unicode, where as Python3 uses Unicode as the main string class (str) and has two other string classes byte and bytearray which are used to represent binary data. This mostly causes a problem in PmagPy when reading binary data using open(file,'b') as it will now be read in as a byte object which cannot be manipulated like a string. This can be seen in the 2G binary conversion script in the programs directory. This can also be a problem when using libraries like json as the library may read in using a different encoding like ASCII and need to be decoded to turn into the correct string. An example of this can be seen in data_model3 in the pmagpy directory. This is rather case by case and something only occasionally run into, hopefully the work arounds in data_model3 and the 2G binary conversion script can help you overcome most of these issues.
End of explanation
print("python3 range output")
p3r=range(4)
print(p3r,type(p3r),'\n')
print("casting range object to list to simulate python2 output")
p2r=list(range(4))
print(p2r,type(p2r))
#Other examples
print("python3 map output")
p3m = map(lambda x: x+1, range(4))
print(p3m, type(p3m))
print(list(p3m),type(list(p3m)),'\n')
print("python3 filter output")
p3f = filter(lambda x: (x%2)==0, range(4))
print(p3f, type(p3f))
print(list(p3f),type(list(p3f)),'\n')
print("python3 dictionary methods with new classes for output")
p3d = {"thing1": 1, "thing2": "hello world", "thing3": 3.75, 5.47: "another value"}
print(p3d, type(p3d),'\n')
p3dk = p3d.keys()
p3dv = p3d.values()
p3di = p3d.items()
print(p3dk, type(p3dk))
print(list(p3dk),type(list(p3dk)),'\n')
print(p3dv, type(p3dv))
print(list(p3dv),type(list(p3dv)),'\n')
print(p3di, type(p3di))
print(list(p3di),type(list(p3di)),'\n')
Explanation: Generators vs. Lists vs. Tuples
<a id='generator_list_explanation'></a>
This is a pain in the rear of a change. As the Python vision is to be as explicit as possible and make objects for everything, Python3 is extremely explicit about what things are what objects, and there are a lot more objects. For instance, in Python2 range(4) returns a list [0,1,2,3] whereas in Python3 range(4) returns a range object which is iterable and descended from the generator class, but does not have the same methods as a list (so you cannot append to it) and has more methods than the generator class.
End of explanation
def make_gen():
x=0
while True:
yield x
x+=1
gen=make_gen() #makes a generator that returns all non-negative integers
print(gen, type(gen))
print(next(gen)) #note in Python2 this was done gen.next() though in Python3 next is an external function not a method
print([next(gen) for _ in range(20)])
print(next(gen))
print(hasattr(gen,'__next__'),'\n')
ran20 = range(20) #returns a range object which is related to a generator, but different as it has no next, remembers data, and can be indexed
print(ran20, type(ran20))
print(ran20[0])
print(list(ran20))
print(ran20[0])
print(hasattr(ran20,'__next__'),'\n') #this means you can't call next(ran20)
#manipulating this object and remembering it's difference is often quite a pain so it is sometimes better to just turn it into a list
#This is meant to demonstrate how lists mutate and show the difference betweeen tuples and lists
print("A = list(range(5)) : creates a list of the first 5 non-negative integers")
A = list(range(5))
print("B = A : makes B point to A")
B = A
print("C = list(A) : makes a copy of A in C")
C = list(A)
print("A = ", A)
print("B = ", B)
print("C = ", C, '\n')
print('B[0] = "haha" : Notice how a change to B also changes A')
B[0] = "haha"
print("A = ", A)
print("B = ", B)
print("C = ", C, '\n')
print("A[2] = 5.938 : and vice versa, this is one aspect of mutation")
A[2] = 5.938
print("A = ", A)
print("B = ", B)
print("C = ", C, '\n')
print("C[4] = True : Though C which is a copy not a pointer is not changed")
C[4] = True
print("A = ", A)
print("B = ", B)
print("C = ", C, '\n')
#you can check these kind of things using the "is" statement without needing to go through all the above changes
print("reset A, B, C as in first step")
A = list(range(5)) #creates a list of the first 5 non-negative integers
B = A #makes B point to A
C = list(A) #makes a copy of A in C
print("A is B : ", A is B)
print("A is C : ", A is C)
print("A==B==C : ", A==B==C)
#Trying the above exercise again with Tuples to demonstrate immutability
print("A = tuple(range(5)) : creates a list of the first 5 non-negative integers")
A = tuple(range(5))
print("B = A : makes B a copy of A")
B = A
print("C = tuple(A) : makes C a copy of A")
C = tuple(A)
print("A = ", A)
print("B = ", B)
print("C = ", C, '\n')
#again we can use is to determine what is a copy and what is identical
print("A is B : ", A is B)
print("A is C : ", A is C)
print("A==B==C : ", A==B==C)
print("in this case all are identical as there is no need to make a copy of a tuple as it can't change\n")
print('B[0] = "haha"')
print("And ERROR, because you can't do this to a tuple which prevents the headache above from developing")
B[0] = "haha"
Explanation: Explanation of the difference between Generators, Lists, and Tuples
Note: If you plan to just turn all of the new data types mentioned above into lists then you can probably safely skip this, but the material below illustrates important distinctions between the different iterable types in both Python2 and Python3 and should help you write clearer code with the right objects used in each case.
This difference brings up a conversation on the 3 different types of objects which contain sets of data, as they can be found in both Python2 and Python3: generators, lists, and tuples. Generators are objects which have a current state and a defined next operation (e.g. x=0, x+1); this allows you to define infinite or very large sets without storing all the data in memory. Lists are built-in arrays which keep each piece of data in RAM and can access it as requested by the user (e.g. [0,2,3,4,5]); most importantly, lists are mutable, so their values can be changed even in a different namespace than the one they were created in. Tuples are nearly identical to lists, however they are immutable and must be recreated to change even a single value. This distinction is more important in Python3 than Python2 as many of the data types returned from the built-in functions above are descended from the generator class or the tuple class instead of just returning a list as Python2 does. Here are some examples of a basic generator, list, and tuple.
End of explanation
# Python 2
# does not exist in Python 3
raw_input('give me a variable')
# Python 3
# this is equivalent to Python 2's raw_input and is safe (i.e., will not be evaluated by Python)
input('give me a number')
Explanation: input vs. raw_input
<a id='input_explanation'></a>
End of explanation
#Python2 example output
example = """
3 / 2 = 1
3 // 2 = 1
3 / 2.0 = 1.5
3 // 2.0 = 1.0
"""
print(example)
#Python3 test of Python2 example code
print("3 / 2 =", 3 / 2)
print("3 // 2 =", 3 // 2)
print("3 / 2.0 =", 3 / 2.0)
print("3 // 2.0 =", 3 // 2.0)
Explanation: Division
<a id='division_explanation'></a>
This is a rather subtle change to Python and can often go unnoticed as it doesn't raise an error. The main change is that in Python2 int/int = int every time, however in python3 int/int = int_or_float. Here are some various examples of both division (/) and whole number division (//).
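For two-language code, the usual fix is the future import shown in this small standalone sketch (it must be the first statement of the file or cell):
from __future__ import division
print(3 / 2)    # 1.5 under both Python2 (with the import) and Python3
print(3 // 2)   # 1, floor division is unchanged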
End of explanation |
721 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A generator function, which acts as an iterator, provides a memory-efficient way to generate a large sequence of data, such as an array of size 100,000. We have 3 different methods to invoke generators in Python.
use yield
Step1: use built-in generator functions, such as xrange()
we can also do generator comprehension.
Step2: To implement an iterator, we have to define a class with two specific functions
def firstn(n):
num = 0
while num < n:
yield num
num += 1
print(sum(firstn(1000000)))
Explanation: A generator function, which acts as an iterator, provides a memory-efficient way to generate a large sequence of data, such as an array of size 100,000. We have 3 different methods to invoke generators in Python.
use yield
End of explanation
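For contrast, an illustrative list-building version of the same function gives the same sum but holds all 1,000,000 integers in memory at once:
def firstn_list(n):
    nums = []
    num = 0
    while num < n:
        nums.append(num)
        num += 1
    return nums

print(sum(firstn_list(1000000)))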
y = (x*2 for x in [1,2,3,4,5])
print y.next()
Explanation: use built-in generator functions, such as xrange()
we can also do generator comprehension.
End of explanation
import random
class randomwalker_iter:
def __init__(self):
self.last = 1
self.rand = random.random()
def __iter__(self):
return self
def next(self):
if self.rand < 0.1:
raise StopIteration
else:
while abs(self.rand - self.last) < 0.4:
self.rand = random.random()
self.last = self.rand
return self.rand
rw = randomwalker_iter()
for rw_instance in rw:
print rw_instance
Explanation: To implement an iterator, we have to define a class with two specific functions: 1) an __iter__ method that returns an iterator (self); 2) a next method to generate the next instance when the iterator is called in the for loop.
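For reference, a minimal Python 3 sketch of the same idea replaces the next method with __next__ (the class name here is illustrative):
import random

class RandomWalkerIter3:
    def __init__(self):
        self.last = 1
    def __iter__(self):
        return self
    def __next__(self):
        rand = random.random()
        if rand < 0.1:
            raise StopIteration
        while abs(rand - self.last) < 0.4:
            rand = random.random()
        self.last = rand
        return rand

for step in RandomWalkerIter3():
    print(step)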
End of explanation |
722 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example
Step1: A simple bottleneck
In order to change population size, one simply has to change the values in the "nlist". For example, here is a population bottleneck
Step2: Please note the last command, which changes the concatenated array from an array of 64 bit signed integers to 32 bit unsigned integers.
Exponential growth
Now, let's do population growth, where we evolve for 10N generations, and then grow the population five fold in the next 500 generations. | Python Code:
%matplotlib inline
%pylab inline
from __future__ import print_function
import numpy as np
import array
import matplotlib.pyplot as plt
#population size
N=1000
#nlist corresponds to a constant population size for 10N generations
#note the "dtype" argument. Without it, we'd be defaulting to int64,
#which is a 64-bit signed integer.
nlist=np.array([N]*(10*N),dtype=np.uint32)
#This is a 'view' of the array starting from the beginning:
nlist[0:]
Explanation: Example: modeling changes in population size
Simple example
Let's look at an example:
End of explanation
#Evolve for 10N generations,
#bottleneck to 0.25N for 100 generations,
#recover to N for 50 generations
nlist = np.concatenate(([N]*(10*N),[int(0.25*N)]*100,[N]*50)).astype(np.uint32)
plt.plot(nlist[0:])
plt.ylim(0,1.5*N)
Explanation: A simple bottleneck
In order to change population size, one simply has to change the values in the "nlist". For example, here is a population bottleneck:
End of explanation
import math
N2=5*N
tgrowth=500
#G is the growth rate
G = math.exp( (math.log(N2)-math.log(N))/float(tgrowth) )
nlist = np.array([N]*(10*N+tgrowth),dtype=np.uint32)
#Now, modify the list according to exponential growth rate
for i in range(tgrowth):
nlist[10*N+i] = round( N*math.pow(G,i+1) )
##Now, we see that the population does grow from
##N=1,000 to N=5,000 during the last 500 generations
## We need the + 1 below to transform
## from the generation's index to the generation itself
plt.plot(range(10*N+1,10*N+501,1),nlist[10*N:])
Explanation: Please note the last command, which changes the concatenated array from an array of 64 bit signed integers to 32 bit unsigned integers.
Exponential growth
Now, let's do population growth, where we evolve for 10N generations, and then grow the population five fold in the next 500 generations.
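For reference, a quick sketch of where the growth rate G computed above comes from: requiring $N \cdot G^{t_{growth}} = N_2$ gives
$$G = \left(\frac{N_2}{N}\right)^{1/t_{growth}} = \exp\!\left(\frac{\ln N_2 - \ln N}{t_{growth}}\right),$$
which is exactly the expression assigned to G in the code above.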
End of explanation |
723 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Questions
Step1: For the purpose of cleaning up the data I determined that the Name and Ticket columns were not necessary for my future analysis.
Step2: Using .info and .describe, I am able to get a quick overview of what the data set has to offer and if anything stands out. In this instance we can see the embarked number is less then the number of passengers, but this will not be an issue for my analysis.
Step3: For my next few blocks of code and graphs I will be looking at two groups of individuals on the boat, Male and Female. To make the analysis a litle easier I created two variables that would define all males and all females on the boat.
Step4: Using those two data sets that I created in the previous block, I printed the counts to understand how many of each sex were on the boat.
Step5: For this section I utilized Seaborn's factorplot function to graph the count of male's and females in each class.
Step6: To begin answering my first question of who has a higer probability of surviving I created two variables, men_prob and women_prob. From there I grouped by sex and survival then taking the mean and thn printing out each statement.
Step7: To visually answer the questions of what sex had a higher probability of surviving I utitlized the factorplot function with seaborn to map the sex, and survived in the form of a bar graph. I also incudled a y-axis label for presentaiton.
Step8: To answer my section question of what the age range range was of survivors vs. non-survivors I first wanted to see the distribution of age acorss the board. To do this I used the histogram function as well as printed the median age.
To validate the finding that Females do have a higher probability of surviving over Males, I will be applying stitastical analysis, chi-squared test, to gain the necessary understanding. My findings and code are bellow.
Step9: Chi-square value
Step10: To answer my second questoins I showed survived data with age in a box plot to show average age as well as its distruvtion for both deseased and survived.
Step11: To tackle the third question of what is the probability as well as who has a higher probability of survivng, being Alone or in a Family. I first went ahead and greated my function that would return True if the number of people reported was above 0 (Family) and Fale if it was not (Alone). Bellow you can see the function as well as the new column crated with the True and False statements.
Step12: To now show the probability visually as well as its output I have gone ahead and created a factorplot as well as printed out the probabilities of the two (Alone = False and Family = True). To get the probabilities I had to divide the sum of survivors by family type and divide by the count of family type.
Step13: Finally, to answer my last question of if being in a higher class affected the probability of you surviving, I used the same seaborn factorplot but to create this graph i had to take the sum of survivors and divide them by the count of survivors in each class. | Python Code:
##import everything
import numpy as np
import pandas as pd
import scipy as sp
import matplotlib.pyplot as plt
import seaborn as sea
%matplotlib inline
sea.set(style="whitegrid")
titanic_ds = pd.read_csv('titanic-data.csv')
Explanation: Questions:
1: What sex has a higher probability of surviving?
2: What was the age range of people who survived and didn't?
3: Was the probability of surviving higher with a family or alone?
4: Did the individuals Pclass affect their probability of surviving?
I imported all necessary libraries and directories, including the .csv file. I also made sure to include %matplotlib inline and to set the seaborn style to white. This is to make sure all my graphs will not only show up in the notebook but also have a white background to make them more readable.
End of explanation
##don't need Name and Ticket for what I am going to be tackling
titanic_ds = titanic_ds.drop(["Name", "Ticket"], axis=1)
Explanation: For the purpose of cleaning up the data I determined that the Name and Ticket columns were not necessary for my future analysis.
End of explanation
##over view of data
titanic_ds.info()
titanic_ds.describe()
Explanation: Using .info and .describe, I am able to get a quick overview of what the data set has to offer and whether anything stands out. In this instance we can see the embarked count is less than the number of passengers, but this will not be an issue for my analysis.
End of explanation
##defining men and women from data
men_ds = titanic_ds[titanic_ds.Sex == 'male']
women_ds = titanic_ds[titanic_ds.Sex == 'female']
Explanation: For my next few blocks of code and graphs I will be looking at two groups of individuals on the boat, Male and Female. To make the analysis a little easier I created two variables that would define all males and all females on the boat.
End of explanation
# idea of the spread between men and women
print("Males: ")
print(men_ds.count()['Sex'])
print("Females: ")
print(women_ds.count()['Sex'])
Explanation: Using those two data sets that I created in the previous block, I printed the counts to understand how many of each sex were on the boat.
End of explanation
##Gender distribution by class
gender_class= sea.factorplot('Pclass',order=[1,2,3], data=titanic_ds, hue='Sex', kind='count')
gender_class.set_ylabels("count of passengers")
Explanation: For this section I utilized Seaborn's factorplot function to graph the count of males and females in each class.
End of explanation
##Probability of Survival by Gender
men_prob = men_ds.groupby('Sex').Survived.mean()
women_prob = women_ds.groupby('Sex').Survived.mean()
print("Male ability to survive: ")
print(men_prob[0])
print("Women ability to survive: ")
print(women_prob[0])
Explanation: To begin answering my first question of who has a higher probability of surviving, I created two variables, men_prob and women_prob. From there I grouped by sex and survival, then took the mean and then printed out each statement.
End of explanation
sbg = sea.factorplot("Sex", "Survived", data=titanic_ds, kind="bar", ci=None, size=5)
sbg.set_ylabels("survival probability")
Explanation: To visually answer the question of which sex had a higher probability of surviving, I utilized the factorplot function with seaborn to map sex and survival in the form of a bar graph. I also included a y-axis label for presentation.
End of explanation
print("Total Count of Males and Females on ship: ")
print(titanic_ds.count()['Sex'])
print("Total Males:")
print(men_ds.count()['Sex'])
print("Males (Survived, Deseased): ")
print(men_ds[men_ds.Survived == 1].count()['Sex'], men_ds[men_ds.Survived == 0].count()['Sex'])
print("Total Women:")
print(women_ds.count()['Sex'])
print("Females (Survived, Deseased): ")
print(women_ds[women_ds.Survived == 1].count()['Sex'], women_ds[women_ds.Survived == 0].count()['Sex'])
men_women_survival = np.array([[men_ds[men_ds.Survived == 1].count()['Sex'], men_ds[men_ds.Survived == 0].count()['Sex']],[women_ds[women_ds.Survived == 1].count()['Sex'], women_ds[women_ds.Survived == 0].count()['Sex']]])
print(men_women_survival)
# Chi-square calculations
sp.stats.chi2_contingency(men_women_survival)
Explanation: To answer my second question of what the age range was of survivors vs. non-survivors, I first wanted to see the distribution of age across the board. To do this I used the histogram function as well as printed the median age.
To validate the finding that females do have a higher probability of surviving than males, I will be applying statistical analysis, a chi-squared test, to gain the necessary understanding. My findings and code are below.
End of explanation
##Distribution of age; Median age 28.0
titanic_ds['Age'].hist(bins=100)
print ("Median Age: ")
print(titanic_ds['Age'].median())
Explanation: Chi-square value: 260.71702016732104
p-value: 1.1973570627755645e-58
degrees of freedom: 1
expected frequencies table:
221.47474747, 355.52525253
120.52525253, 193.47474747
Given that the p-value of 1.1973570627755645e-58 is less than the significance level of .05, there is an indication that there is a relationship between gender and survivability. That means we accept the alternative hypothesis that gender and survivability are dependent on each other.
End of explanation
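The same call can be restated with named outputs, which makes the comparison against the 0.05 threshold explicit (an optional, equivalent form of the test above):
chi2_stat, p_value, dof, expected = sp.stats.chi2_contingency(men_women_survival)
print("chi-square =", chi2_stat, " p-value =", p_value, " dof =", dof)
print("Reject independence at alpha = 0.05:", p_value < 0.05)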
##Age box plot, survived and did not survive
##fewer people survived as compared to deceased
age_box=sea.boxplot(x="Survived", y="Age", data=titanic_ds)
age_box.set(xlabel = 'Survived', ylabel = 'Age', xticklabels = ['Deceased', 'Survived'])
Explanation: To answer my second question I showed the survival data with age in a box plot to show the average age as well as its distribution for both the deceased and survivors.
End of explanation
titanic_ds['Family']=(titanic_ds.SibSp + titanic_ds.Parch > 0)
print(titanic_ds.head())
Explanation: To tackle the third question of what the probability is, as well as who has a higher probability of surviving, being alone or in a family, I first created a boolean column that is True if the number of relatives reported was above 0 (Family) and False if it was not (Alone). Below you can see the expression as well as the new column created with the True and False values.
End of explanation
fanda = sea.factorplot('Family', "Survived", data=titanic_ds, kind='bar', ci=None, size=5)
fanda.set(xticklabels = ['Alone', 'Family'])
print ((titanic_ds.groupby('Family')['Survived'].sum()/titanic_ds.groupby('Family')['Survived'].count()))
Explanation: To now show the probability visually as well as its output, I created a factorplot and printed out the probabilities of the two groups (Alone = False and Family = True). To get the probabilities I divided the sum of survivors for each family type by the count of passengers for that family type.
End of explanation
sea.factorplot('Pclass', "Survived", data=titanic_ds, kind='bar', ci=None, size=5)
PS=(titanic_ds.groupby('Pclass')['Survived'].sum())
PC=(titanic_ds.groupby('Pclass')['Survived'].count())
print ("Class Survivability: ")
print (PS/PC)
Explanation: Finally, to answer my last question of whether being in a higher class affected the probability of surviving, I used the same seaborn factorplot, but to create this graph I had to take the sum of survivors and divide it by the count of passengers in each class.
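Since Survived is a 0/1 column, an equivalent and slightly more direct way to get the same per-class probabilities is sketched below:
print(titanic_ds.groupby('Pclass')['Survived'].mean())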
End of explanation |
724 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Translation of products
This is the code for translation of products. In Prestashop, we have products in Thai and English description. We need to move the translation database to Woocommerce. Woocommerce uses "WPML Multilingual CMS" plugin. After activate, this plugin will create a database called "wp_icl_translation". The "trid" column is used to match the Thai and English version of the same product.
Our process is to create a English description product in "wp_posts" and "wp_postmeta" then match the "trid" of Thai and English product in "wp_icl_translations".
There are 3 parts.
Creating the "wp_posts" for English description products.
Creating the "wp_postmeta" for English description products.
Matching the "trid" of Thai and English product in the "wp_icl_translations"
Creating the "wp_posts"
Our process here is to create wp_posts for English description product. We create new "wp_posts" with contain English product by copy the Thai products and fill them with English descriptions. Change ID and index and concatenate it with the old "wp_posts". Upload new "wp_posts" to Woocommerce. Woocommerce will show you more products (because we copy from Thai product, backend will show 2 x number of products).
Filling the English "Description" and "Short description".
Step1: We load the data. There are
1) "ps_product_lang.csv" the description and short description in Thai and English from Prestashop.
2) "wp_posts.csv" Woocommerce database consists of everything posting on the site (products, images and more).
Step2: We select only the English description from Prestashop database.
Step3: Next step, we fill the "Description" & "Short description" in "wp_posts".
Sorting the ID first.
Step4: "wp_posts" has many post types. We extract only "product" type.
Step5: Change column name for merge. Then we merge "eng_des" and "df_description" by name of a products. This will ensure that we match the right description to the right product.
Step6: Fill wp_post new "Description and "Short description". Then drop the unuse
Step7: There are some duplicate entry because some entry share the same name.
Step8: Changing "name" column back to "post_title". Protect the error when concatenating (after concatenate, it will have 24 column).
Step9: Change some error strings.
Some of the strings in English description are not in English. We need to change them.
First, we create empty value series and use it to collect new description data.
Step10: Then we run a for loop to replace a string.
Step11: Change ID and Index number.
Step12: Create "wp_posts" entry for images.
We have already made "wp_posts" entry for products. If we concatenate this to the old "wp_posts" and upload to Woocommerce. The new English description product can't find the images. We solve this problem by create the images entry for "wp_posts".
The process here is to copy the old images "wp_posts", change their "post_parent" to match image to the right product. Then concatenate to the old "wp_posts".
First, we extract only "attachment" post_type.
Step13: Some images entry don't have a parent product. We select only the entries that have "post_parent" value.
Step14: Let's check by counting "post_mime_type" and "post_parent". "post_mime_type" is a type of file such as ".jpg", ".png". Use value_counts() will show the type of each file in the data and how number of them in each category. Counting the "post_parent" show nymber of images in each ID.
Step15: Now we have to change "post_parent" to match the image to the right product. We use Python's dictionary to translate from old "post_parent" to a new one.
Extract old and new ID into variable.
Step16: Some post_parent doesn't appear in dictionary. Pick only the one we have in Thai ID.
Step17: Check for nan value and drop rows that have nan English ID.
Step18: Create a dictionary.
Step19: Translate old to new "ID"
Step20: Set new ID and index number.
Step21: Concatenate to "wp_posts"
Concatenate "wp_posts" and "eng_des" to the old wp_posts.
Step22: Arrange index.
Step23: Export new raw_product and upload to Woocommerce.
Step24: Creating the "wp_postmeta"
Our process here is to create "wp_postmeta". "wp_post_meta" is a table collecting the detail of each post. In example "wp_postmeta" of product stores a SKU, price, weight and other information about product. Woocommerce isn't working properly without "wp_postmeta", so after we have already create "wp_posts", we have to generate "wp_postmeta".
We load "wp_postmeta.csv" that is the detail of each wp_post ID. We load wp_postmeta into 2 variable.
1) "wp_postmeta" is use in the end to concatenate with a new English product's "wp_postmeta" or "meta" variable.
2) "meta" is a new English product's "wp_postmeta"
Step25: "meta" still have other posts. We need to extract only product, so we bring "meta" with the same ID as Thai products.
Step26: Translate old to new "ID".
Step27: Set new ID and index number.
Step28: Concatenate with the old wp_posts.
Step29: Arrange index.
Step30: For a comfortable sorting the product in backend, We change the "SKU" into new format.
Step31: Export to .csv
Step32: Creating the "wp_icl_translations"
Loading the "wp_icl_translations". Use wp_icl_translations to link Thai & Eng product description.
Step33: Merging Thai & English Description to wp_icl_translations to make sure we match the correct ID.
Step34: First, we merge Thai product to "wp_icl_translations".
Step35: Then we merge it with English product description by "post_title". The reason we use "post_title" as a key because Thai and English product don't share the same ID.
Step36: Drop rows that English ID has a nan value.
Step37: Using the value_counts() to find duplicate values. If there are duplicate values, we drop it.
Step38: Create a series with the size of English products.
Step39: Creating a new dataframe in the format of wp_icl_translations.
Step40: Set new index for dataframe.
Step41: Concatenating dataframe to "translations" and name it as wp_icl_translations
Step42: "source_language_code" must fill with "NULL" in order to make Woocommerce working properly.
Step43: Export to .csv | Python Code:
import pandas as pd
import numpy as np
Explanation: Translation of products
This is the code for translation of products. In Prestashop, we have products in Thai and English description. We need to move the translation database to Woocommerce. Woocommerce uses "WPML Multilingual CMS" plugin. After activate, this plugin will create a database called "wp_icl_translation". The "trid" column is used to match the Thai and English version of the same product.
Our process is to create a English description product in "wp_posts" and "wp_postmeta" then match the "trid" of Thai and English product in "wp_icl_translations".
There are 3 parts.
Creating the "wp_posts" for English description products.
Creating the "wp_postmeta" for English description products.
Matching the "trid" of Thai and English product in the "wp_icl_translations"
Creating the "wp_posts"
Our process here is to create wp_posts for English description product. We create new "wp_posts" with contain English product by copy the Thai products and fill them with English descriptions. Change ID and index and concatenate it with the old "wp_posts". Upload new "wp_posts" to Woocommerce. Woocommerce will show you more products (because we copy from Thai product, backend will show 2 x number of products).
Filling the English "Description" and "Short description".
End of explanation
#Description from Prestashop
df_description = pd.read_csv('sql_prestashop/ps_product_lang.csv', index_col=False)
#wp_posts from Woocommerce
wp_posts = pd.read_csv('sql_prestashop/wp_posts.csv', index_col=False)
Explanation: We load the data. There are
1) "ps_product_lang.csv" the description and short description in Thai and English from Prestashop.
2) "wp_posts.csv" Woocommerce database consists of everything posting on the site (products, images and more).
End of explanation
#Use only English "Description" & "Short description" from Prestashop.
df_description = df_description[df_description['id_lang'] == 1]
Explanation: We select only the English description from Prestashop database.
End of explanation
eng_des = wp_posts.sort_values('ID')
Explanation: Next step, we fill the "Description" & "Short description" in "wp_posts".
Sorting the ID first.
End of explanation
eng_des = eng_des[eng_des['post_type'] == 'product']
Explanation: "wp_posts" has many post types. We extract only "product" type.
End of explanation
#Change column name for merging dataframe.
eng_des = eng_des.rename(columns = {'post_title':'name'})
#merge.
eng_des = pd.merge(eng_des, df_description[['name', 'description', 'description_short']], how='left', on='name')
Explanation: Change column name for merge. Then we merge "eng_des" and "df_description" by name of a products. This will ensure that we match the right description to the right product.
End of explanation
eng_des['post_excerpt'] = eng_des['description_short']
eng_des['post_content'] = eng_des['description']
#Drop unused column.
eng_des = eng_des.drop(['description', 'description_short'], axis=1)
Explanation: Fill wp_post new "Description and "Short description". Then drop the unuse
End of explanation
#Check for duplicate entry.
count = eng_des['name'].value_counts()
#Drop duplicate name of product.
eng_des = eng_des.drop_duplicates(subset='name')
#Create series for Thai ID.
th_id = eng_des['ID']
Explanation: There are some duplicate entry because some entry share the same name.
End of explanation
#Changing "name" column back to "post_title". Protect the error when concatenating
#(After concatenate, it will have 24 column).
eng_des = eng_des.rename(columns = {'name':'post_title'})
Explanation: Changing "name" column back to "post_title". Protect the error when concatenating (after concatenate, it will have 24 column).
End of explanation
empty = np.empty(eng_des.shape[0])
description = pd.Series(empty)
description[0] = ''
Explanation: Change some error strings.
Some of the strings in English description are not in English. We need to change them.
First, we create empty value series and use it to collect new description data.
End of explanation
for i in range(0, eng_des.shape[0]):
string = eng_des['post_content'].iloc[i]
if pd.isnull(string) == False:
string = string.replace("เรื่องย่อ", "Sypnosis")
string = string.replace("ส่วนที่อยากบอก", "Artist said")
string = string.replace("ส่วนที่ผู้จัดทำอยากบอก", "Artist said")
string = string.replace("ช่องทางการติดต่อ", "Contact")
string = string.replace("ผู้จัดทำ", "Artist")
string = string.replace("จากเรื่อง", "Parody")
string = string.replace("อ้างอิงจากเรื่อง", "Parody")
string = string.replace("ลักษณะของสินค้า", "Details of the product")
string = string.replace("ออกขายครั้งแรก", "Publication date")
string = string.replace("ชื่อสินค้า", "Product")
string = string.replace("ชื่อผลงาน", "Product")
description[i] = string
eng_des['post_content'] = description
Explanation: Then we run a for loop to replace a string.
End of explanation
#Find max "ID" of wp_posts. Use max ID + 1 to be the starting ID of English product.
max_id = wp_posts['ID'].max()
#Create Series of number as a new "ID".
eng_start = max_id + 1
eng_end = eng_start + eng_des.shape[0]
number = pd.Series(range(eng_start, eng_end))
#Reset eng_des index
eng_des = eng_des.reset_index()
#Drop old index column.
eng_des = eng_des.drop(['index'], axis=1)
eng_des['ID'] = number
#Create Series od number as a new index.
max_index = wp_posts.index.values.max()
eng_index_start = max_index + 1
eng_index_end = eng_index_start + eng_des.shape[0]
number = pd.Series(range(eng_index_start, eng_index_end))
eng_des['number'] = number
eng_des = eng_des.set_index(number)
#Drop unused column.
eng_des = eng_des.drop(['number'], axis=1)
Explanation: Change ID and Index number.
End of explanation
#Generate wp_posts for images
image = wp_posts[wp_posts['post_type'] == 'attachment']
Explanation: Create "wp_posts" entry for images.
We have already made "wp_posts" entry for products. If we concatenate this to the old "wp_posts" and upload to Woocommerce. The new English description product can't find the images. We solve this problem by create the images entry for "wp_posts".
The process here is to copy the old images "wp_posts", change their "post_parent" to match image to the right product. Then concatenate to the old "wp_posts".
First, we extract only "attachment" post_type.
End of explanation
image = image[image['post_parent'] != 0]
Explanation: Some images entry don't have a parent product. We select only the entries that have "post_parent" value.
End of explanation
#Check type of "post_mime_type"
count = image['post_mime_type'].value_counts()
#Check how many products.
count = image['post_parent'].value_counts()
Explanation: Let's check by counting "post_mime_type" and "post_parent". "post_mime_type" is a type of file such as ".jpg", ".png". Use value_counts() will show the type of each file in the data and how number of them in each category. Counting the "post_parent" show nymber of images in each ID.
End of explanation
new = eng_des['ID'].reset_index()
new = new.drop(['index'], axis=1)
new = new['ID']
old = th_id.reset_index()
old = old.drop(['index'], axis=1)
old = old['ID']
Explanation: Now we have to change "post_parent" to match the image to the right product. We use Python's dictionary to translate from old "post_parent" to a new one.
Extract the old and new IDs into variables.
End of explanation
#reset index
image = image.reset_index()
image = image.drop(['index'], axis=1)
image['number'] = image['post_parent']
image = image.set_index('number')
image = image.loc[th_id]
Explanation: Some post_parent values don't appear in the dictionary, so we pick only the rows whose parents are among the Thai product IDs.
End of explanation
#Check for nan value.
image['post_parent'].isnull().sum()
#Drop rows that have nan English ID.
image = image[np.isfinite(image['post_parent'])]
Explanation: Check for nan value and drop rows that have nan English ID.
End of explanation
dic = {}
for i in range(len(old)):
dic[old[i]] = new[i]
Explanation: Create a dictionary.
End of explanation
image['post_parent'] = [dic[x] for x in image['post_parent']]
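# Aside (equivalent alternatives to the dictionary translation above; id_map is an
# illustrative name): the mapping can be built with zip and applied with Series.map.
# id_map = dict(zip(old, new))
# image['post_parent'] = image['post_parent'].map(id_map)
# Quick sanity check that every translated parent now exists among the English products:
print(image['post_parent'].isin(eng_des['ID']).all())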
Explanation: Translate old to new "ID"
End of explanation
max_id = eng_des['ID'].max()
#Create Series of number as a new "ID".
start = max_id + 1
end = start + image.shape[0]
number = pd.Series(range(start, end))
#Reset eng_des index
image = image.reset_index()
#Drop old index column.
image = image.drop(['number'], axis=1)
image['ID'] = number
#Create Series od number as a new index.
max_index = eng_des.index.values.max()
index_start = max_index + 1
index_end = index_start + image.shape[0]
number = pd.Series(range(index_start, index_end))
image['number'] = number
image = image.set_index(number)
#Drop unused column.
image = image.drop(['number'], axis=1)
Explanation: Set new ID and index number.
End of explanation
wp_posts_with_eng = pd.concat([wp_posts, eng_des, image], axis=0)
Explanation: Concatenate to "wp_posts"
Concatenate "wp_posts" and "eng_des" to the old wp_posts.
End of explanation
wp_posts_with_eng = wp_posts_with_eng.sort_values('ID')
Explanation: Arrange index.
End of explanation
wp_posts_with_eng.to_csv('product_import_to_woo/wp_posts_with_eng.csv', encoding='utf-8', index=False)
Explanation: Export new raw_product and upload to Woocommerce.
End of explanation
wp_postmeta = pd.read_csv('sql_prestashop/wp_postmeta.csv', index_col=False)
meta = pd.read_csv('sql_prestashop/wp_postmeta.csv', index_col=False)
Explanation: Creating the "wp_postmeta"
Our process here is to create "wp_postmeta". "wp_post_meta" is a table collecting the detail of each post. In example "wp_postmeta" of product stores a SKU, price, weight and other information about product. Woocommerce isn't working properly without "wp_postmeta", so after we have already create "wp_posts", we have to generate "wp_postmeta".
We load "wp_postmeta.csv" that is the detail of each wp_post ID. We load wp_postmeta into 2 variable.
1) "wp_postmeta" is use in the end to concatenate with a new English product's "wp_postmeta" or "meta" variable.
2) "meta" is a new English product's "wp_postmeta"
End of explanation
meta['number'] = meta['post_id']
meta = meta.set_index('number')
meta = meta.loc[th_id]
Explanation: "meta" still have other posts. We need to extract only product, so we bring "meta" with the same ID as Thai products.
End of explanation
meta['post_id'] = [dic[x] for x in meta['post_id']]
Explanation: Translate old to new "ID".
End of explanation
max_id = wp_postmeta['meta_id'].max()
#Create Series of number as a new "ID".
start = max_id + 1
end = start + meta.shape[0]
number = pd.Series(range(start, end))
#Reset eng_des index
meta = meta.reset_index()
#Drop old index column.
meta = meta.drop(['number'], axis=1)
meta['meta_id'] = number
#Set new index
#Create Series od number as a new index.
max_index = wp_postmeta.index.values.max()
index_start = max_index + 1
index_end = index_start + meta.shape[0]
number = pd.Series(range(index_start, index_end))
meta['number'] = number
meta = meta.set_index(number)
#Drop unused column.
meta = meta.drop(['number'], axis=1)
Explanation: Set new ID and index number.
End of explanation
wp_postmeta_with_eng = pd.concat([wp_postmeta, meta], axis=0)
Explanation: Concatenate with the old wp_posts.
End of explanation
wp_postmeta_with_eng = wp_postmeta_with_eng.sort_values('post_id')
Explanation: Arrange index.
End of explanation
#Change SKU format.
for i in range(wp_postmeta_with_eng.shape[0]):
if wp_postmeta_with_eng['meta_key'].iloc[i] == '_sku':
wp_postmeta_with_eng['meta_value'].iloc[i] = 'A' + str(wp_postmeta_with_eng['meta_value'].iloc[i]).zfill(5)
x = wp_postmeta_with_eng['meta_value'].iloc[i]
Explanation: For a comfortable sorting the product in backend, We change the "SKU" into new format.
End of explanation
wp_postmeta_with_eng.to_csv('product_import_to_woo/wp_postmeta_with_eng.csv', encoding='utf-8', index=False)
Explanation: Export to .csv
End of explanation
translations = pd.read_csv('sql_prestashop/wp_icl_translations.csv', index_col=False)
Explanation: Creating the "wp_icl_translations"
Loading the "wp_icl_translations". Use wp_icl_translations to link Thai & Eng product description.
End of explanation
thai_des = wp_posts[wp_posts['post_type'] == 'product']
thai_des = thai_des[['ID', 'post_title']]
eng_des = eng_des[['ID', 'post_title']]
tr = translations[translations['element_type'] == 'post_product']
Explanation: Merging Thai & English Description to wp_icl_translations to make sure we match the correct ID.
End of explanation
#Change column name for merging dataframe.
tr = tr.rename(columns = {'element_id':'ID'})
#Merge thai_des first. The result dataframe will have new "post_title" column.
tr = pd.merge(tr, thai_des, how='left', on='ID')
Explanation: First, we merge Thai product to "wp_icl_translations".
End of explanation
#Merge English description. Now we use "post_title" as a key.
tr = pd.merge(tr, eng_des, how='left', on='post_title')
#Rename ID_x and ID_y for comfortable use.
tr = tr.rename(columns = {'ID_x':'ID_th', 'ID_y':'ID_en'})
Explanation: Then we merge it with English product description by "post_title". The reason we use "post_title" as a key because Thai and English product don't share the same ID.
End of explanation
tr = tr[np.isfinite(tr['ID_en'])]
Explanation: Drop rows that English ID has a nan value.
End of explanation
#Check for duplicate entry.
count = tr['ID_en'].value_counts()
#Drop duplicate name of product.
tr = tr.drop_duplicates(subset='ID_en')
Explanation: Using the value_counts() to find duplicate values. If there are duplicate values, we drop it.
End of explanation
#Find max "ID" of wp_icl_translations. Use max ID + 1 to be the starting ID of English description.
max_id = translations['translation_id'].max()
#Create Series of number as a new "ID".
start = max_id + 1
end = start + tr.shape[0]
number = pd.Series(range(start, end))
Explanation: Create a series with the size of English products.
End of explanation
#Create new dataframe in wp_icl_translations form. Then collect the processed
#English id's wp_icl_translations data.
dataframe = pd.DataFrame({
'translation_id': number,
'element_type' : 'post_product',
'element_id': tr['ID_en'].values,
'trid': tr['trid'].values,
'language_code': 'en',
'source_language_code': 'th'},
columns=['translation_id', 'element_type', 'element_id', 'trid', 'language_code', 'source_language_code'])
Explanation: Creating a new dataframe in the format of wp_icl_translations.
End of explanation
#Set new index, so it will continue from the last index in wp_icl_translations.
start = translations.shape[0]
end = start + dataframe.shape[0]
number = pd.Series(range(start,end))
dataframe = dataframe.set_index(number)
Explanation: Set new index for dataframe.
End of explanation
wp_icl_translations = pd.concat([translations, dataframe])
Explanation: Concatenating dataframe to "translations" and name it as wp_icl_translations
End of explanation
wp_icl_translations["source_language_code"] = wp_icl_translations["source_language_code"].fillna('NULL')
Explanation: "source_language_code" must fill with "NULL" in order to make Woocommerce working properly.
End of explanation
wp_icl_translations.to_csv('product_import_to_woo/wp_icl_translations_to_import.csv', encoding='utf-8', index=False)
Explanation: Export to .csv
End of explanation |
725 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
오류 및 예외 처리
개요
코딩할 때 발생할 수 있는 다양한 오류 살펴 보기
오류 메시지 정보 확인 방법
예외 처리, 즉 오류가 발생할 수 있는 예외적인 상황을 미리 고려하는 방법 소개
오늘의 주요 예제
아래 코드는 input() 함수를 이용하여 사용자로부터 숫자를 입력받아
그 숫자의 제곱을 리턴하는 내용을 담고 있다.
코드를 실행하면 숫자를 입력하라는 창이 나오며,
여기에 숫자 3을 입력하면 정상적으로 작동한다.
하지만, 예를 들어, 3.2를 입력하면 값 오류(value error)가 발생한다.
Step1: 위 코드는 정수들의 제곱을 계산하는 프로그램이다.
하지만 사용자가 경우에 따라 정수 이외의 값을 입력하면 시스템이 다운된다.
이에 대한 해결책을 다루고자 한다.
오류 예제
먼저 오류의 다양한 예제를 살펴보자.
다음 코드들은 모두 오류를 발생시킨다.
예제
Step2: 오류를 확인하는 메시지가 처음 볼 때는 매우 생소하다.
위 오류 메시지를 간단하게 살펴보면 다음과 같다.
File "<ipython-input-3-a6097ed4dc2e>", line 1
1번 줄에서 오류 발생
sentence = 'I am a sentence
^
오류 발생 위치 명시
SyntaxError
Step3: 오류의 종류
앞서 예제들을 통해 살펴 보았듯이 다양한 종류의 오류가 발생하며,
코드가 길어지거나 복잡해지면 오류가 발생할 가능성은 점차 커진다.
오류의 종류를 파악하면 어디서 왜 오류가 발생하였는지를 보다 쉽게 파악하여
코드를 수정할 수 있게 된다.
따라서 코드의 발생원인을 바로 알아낼 수 있어야 하며 이를 위해서는 오류 메시지를
제대로 확인할 수 있어야 한다.
하지만 여기서는 언급된 예제 정도의 수준만 다루고 넘어간다.
코딩을 하다 보면 어차피 다양한 오류와 마주치게 될 텐데 그때마다
스스로 오류의 내용과 원인을 확인해 나가는 과정을 통해
보다 많은 경험을 쌓는 길 외에는 달리 방법이 없다.
예외 처리
코드에 문법 오류가 포함되어 있는 경우 아예 실행되지 않는다.
그렇지 않은 경우에는 일단 실행이 되고 중간에 오류가 발생하면 바로 멈춰버린다.
이렇게 중간에 오류가 발생할 수 있는 경우를 미리 생각하여 대비하는 과정을
예외 처리(exception handling)라고 부른다.
예를 들어, 오류가 발생하더라도 오류발생 이전까지 생성된 정보들을 저장하거나, 오류발생 이유를 좀 더 자세히 다루거나, 아니면 오류발생에 대한 보다 자세한 정보를 사용자에게 알려주기 위해 예외 처리를 사용한다.
사용방식은 다음과 같다.
python
try
Step4: 3.2를 입력했을 때 오류가 발생하는 이유는 int() 함수가 정수 모양의 문자열만
처리할 수 있기 때문이다.
사실 정수들의 제곱을 계산하는 프로그램을 작성하였지만 경우에 따라
정수 이외의 값을 입력하는 경우가 발생하게 되며, 이런 경우를 대비해야 한다.
즉, 오류가 발생할 것을 미리 예상해야 하며, 어떻게 대처해야 할지 준비해야 하는데,
try ... except ...문을 이용하여 예외를 처리하는 방식을 활용할 수 있다.
Step5: 올바른 값이 들어올 때까지 입력을 요구할 수 있다.
Step6: 오류 종류에 맞추어 다양한 대처를 하기 위해서는 오류의 종류를 명시하여 예외처리를 하면 된다.
아래 코드는 입력 갑에 따라 다른 오류가 발생하고 그에 상응하는 방식으로 예외처리를 실행한다.
값 오류(ValueError)의 경우
Step7: 0으로 나누기 오류(ZeroDivisionError)의 경우
Step8: 주의
Step10: raise 함수
강제로 오류를 발생시키고자 하는 경우에 사용한다.
예제
어떤 함수를 정확히 정의하지 않은 상태에서 다른 중요한 일을 먼저 처리하고자 할 때
아래와 같이 함수를 선언하고 넘어갈 수 있다.
그런데 아래 함수를 제대로 선언하지 않은 채로 다른 곳에서 호출하면
"아직 정의되어 있지 않음"
이란 메시지로 정보를 알려주게 된다.
Step12: 주의
Step14: 코드의 안전성 문제
문법 오류 또는 실행 중에 오류가 발생하지 않는다 하더라도 코드의 안전성이 보장되지는 않는다.
코드의 안정성이라 함은 코드를 실행할 때 기대하는 결과가 산출된다는 것을 보장한다는 의미이다.
예제
아래 코드는 숫자의 제곱을 리턴하는 square() 함수를 제대로 구현하지 못한 경우를 다룬다.
Step15: 위 함수를 아래와 같이 호출하면 오류가 전혀 발생하지 않지만,
엉뚱한 값을 리턴한다.
Step16: 주의
Step17: 오류에 대한 보다 자세한 정보
파이썬에서 다루는 오류에 대한 보다 자세한 정보는 아래 사이트들에 상세하게 안내되어 있다.
파이썬 기본 내장 오류 정보 문서
Step18: 아래 내용이 충족되도록 위 코드를 수정하라.
나눗셈이 부동소수점으로 계산되도록 한다.
0이 아닌 숫자가 입력될 경우 100을 그 숫자로 나눈다.
0이 입력될 경우 0이 아닌 숫자를 입력하라고 전달한다.
숫자가 아닌 값이 입력될 경우 숫자를 입력하라고 전달한다.
견본답안
Step19: 연습
두 개의 정수 a와 b를 입력 받아 a/b를 계산하여 출력하는 코드를 작성하라.
견본답안 1
Step20: 견본답안 2
Step21: 연습
키와 몸무게를 인자로 받아 체질량지수(BMI)를 구하는 코드를 작성하라.
아래 사항들을 참고한다.
$$BMI = \frac{weight}{height^2}$$
단위 | Python Code:
input_number = input("A number please: ")
number = int(input_number)
print("제곱의 결과는", number**2, "입니다.")
input_number = input("A number please: ")
number = int(input_number)
print("제곱의 결과는", number**2, "입니다.")
Explanation: 오류 및 예외 처리
개요
코딩할 때 발생할 수 있는 다양한 오류 살펴 보기
오류 메시지 정보 확인 방법
예외 처리, 즉 오류가 발생할 수 있는 예외적인 상황을 미리 고려하는 방법 소개
오늘의 주요 예제
아래 코드는 input() 함수를 이용하여 사용자로부터 숫자를 입력받아
그 숫자의 제곱을 리턴하는 내용을 담고 있다.
코드를 실행하면 숫자를 입력하라는 창이 나오며,
여기에 숫자 3을 입력하면 정상적으로 작동한다.
하지만, 예를 들어, 3.2를 입력하면 값 오류(value error)가 발생한다.
End of explanation
sentence = 'I am a sentence
Explanation: 위 코드는 정수들의 제곱을 계산하는 프로그램이다.
하지만 사용자가 경우에 따라 정수 이외의 값을 입력하면 시스템이 다운된다.
이에 대한 해결책을 다루고자 한다.
오류 예제
먼저 오류의 다양한 예제를 살펴보자.
다음 코드들은 모두 오류를 발생시킨다.
예제: 0으로 나누기 오류
python
4.6/0
오류 설명: 0으로 나눌 수 없다.
예제: 문법 오류
python
sentence = 'I am a sentence
오류 설명: 문자열 양 끝의 따옴표가 짝이 맞아야 한다.
* 작은 따옴표끼리 또는 큰 따옴표끼리
예제: 들여쓰기 문법 오류
python
for i in range(3):
j = i * 2
print(i, j)
오류 설명: 2번 줄과 3번 줄의 들여쓰기 정도가 동일해야 한다.
예제: 자료형 오류
아래 연산은 모두 오류를 발생시킨다.
```python
new_string = 'cat' - 'dog'
new_string = 'cat' * 'dog'
new_string = 'cat' / 'dog'
new_string = 'cat' + 3
new_string = 'cat' - 3
new_string = 'cat' / 3
```
이유: 문자열 끼리의 합, 문자열과 정수의 곱셈만 정의되어 있다.
예제: 이름 오류
python
print(party)
오류 설명: 미리 선언된 변수만 사용할 수 있다.
예제: 인덱스 오류
python
a_string = 'abcdefg'
a_string[12]
오류 설명: 인덱스는 문자열의 길이보다 작은 수만 사용할 수 있다.
예제: 값 오류
python
int(a_string)
오류 설명: int() 함수는 정수로만 구성된 문자열만 처리할 수 있다.
예제: 속성 오류
python
print(a_string.len())
오류 설명: 문자열 자료형에는 len() 메소드가 존재하지 않는다.
주의: len() 이라는 함수는 문자열의 길이를 확인하지만 문자열 메소드는 아니다.
이후에 다룰 리스트, 튜플 등에 대해서도 사용할 수 있는 함수이다.
오류 확인
앞서 언급한 코드들을 실행하면 오류가 발생하고 어디서 어떤 오류가 발생하였는가에 대한 정보를
파이썬 해석기가 바로 알려 준다.
예제
End of explanation
a = 0
4/a
Explanation: 오류를 확인하는 메시지가 처음 볼 때는 매우 생소하다.
위 오류 메시지를 간단하게 살펴보면 다음과 같다.
File "<ipython-input-3-a6097ed4dc2e>", line 1
1번 줄에서 오류 발생
sentence = 'I am a sentence
^
오류 발생 위치 명시
SyntaxError: EOL while scanning string literal
오류 종류 표시: 문법 오류(SyntaxError)
예제
아래 예제는 0으로 나눌 때 발생하는 오류를 나타낸다.
오류에 대한 정보를 잘 살펴보면서 어떤 내용을 담고 있는지 확인해 보아야 한다.
End of explanation
number_to_square = input("정수를 입력하세요: ")
# number_to_square 변수의 자료형이 문자열(str)임에 주의하라.
# 따라서 연산을 하고 싶으면 정수형(int)으로 형변환을 먼저 해야 한다.
number = int(number_to_square)
print("제곱의 결과는", number**2, "입니다.")
number_to_square = input("정수를 입력하세요: ")
# number_to_square 변수의 자료형이 문자열(str)임에 주의하라.
# 따라서 연산을 하고 싶으면 정수형(int)으로 형변환을 먼저 해야 한다.
number = int(number_to_square)
print("제곱의 결과는", number**2, "입니다.")
Explanation: 오류의 종류
앞서 예제들을 통해 살펴 보았듯이 다양한 종류의 오류가 발생하며,
코드가 길어지거나 복잡해지면 오류가 발생할 가능성은 점차 커진다.
오류의 종류를 파악하면 어디서 왜 오류가 발생하였는지를 보다 쉽게 파악하여
코드를 수정할 수 있게 된다.
따라서 코드의 발생원인을 바로 알아낼 수 있어야 하며 이를 위해서는 오류 메시지를
제대로 확인할 수 있어야 한다.
하지만 여기서는 언급된 예제 정도의 수준만 다루고 넘어간다.
코딩을 하다 보면 어차피 다양한 오류와 마주치게 될 텐데 그때마다
스스로 오류의 내용과 원인을 확인해 나가는 과정을 통해
보다 많은 경험을 쌓는 길 외에는 달리 방법이 없다.
예외 처리
코드에 문법 오류가 포함되어 있는 경우 아예 실행되지 않는다.
그렇지 않은 경우에는 일단 실행이 되고 중간에 오류가 발생하면 바로 멈춰버린다.
이렇게 중간에 오류가 발생할 수 있는 경우를 미리 생각하여 대비하는 과정을
예외 처리(exception handling)라고 부른다.
예를 들어, 오류가 발생하더라도 오류발생 이전까지 생성된 정보들을 저장하거나, 오류발생 이유를 좀 더 자세히 다루거나, 아니면 오류발생에 대한 보다 자세한 정보를 사용자에게 알려주기 위해 예외 처리를 사용한다.
사용방식은 다음과 같다.
python
try:
코드1
except:
코드2
* 먼저 코드1 부분을 실행한다.
* 코드1 부분이 실행되면서 오류가 발생하지 않으면 코드2 부분은 무시하고 다음으로 넘어간다.
* 코드1 부분이 실행되면서 오류가 발생하면 더이상 진행하지 않고 바로 코드2 부분을 실행한다.
예제
아래 코드는 input() 함수를 이용하여 사용자로부터 숫자를 입력받아 그 숫자의 제곱을 리턴하고자 하는 내용을 담고 있으며, 코드에는 문법적 오류가 없다.
그리고 코드를 실행하면 숫자를 입력하라는 창이 나온다.
여기에 숫자 3을 입력하면 정상적으로 작동하지만
예를 들어, 3.2를 입력하면 값 오류(value error)가 발생한다.
End of explanation
number_to_square = input("정수를 입력하세요: ")
try:
number = int(number_to_square)
print("제곱의 결과는", number ** 2, "입니다.")
except:
print("정수를 입력해야 합니다.")
Explanation: 3.2를 입력했을 때 오류가 발생하는 이유는 int() 함수가 정수 모양의 문자열만
처리할 수 있기 때문이다.
사실 정수들의 제곱을 계산하는 프로그램을 작성하였지만 경우에 따라
정수 이외의 값을 입력하는 경우가 발생하게 되며, 이런 경우를 대비해야 한다.
즉, 오류가 발생할 것을 미리 예상해야 하며, 어떻게 대처해야 할지 준비해야 하는데,
try ... except ...문을 이용하여 예외를 처리하는 방식을 활용할 수 있다.
End of explanation
while True:
try:
number = int(input("정수를 입력하세요: "))
print("제곱의 결과는", number**2, "입니다.")
break
except:
print("정수를 입력해야 합니다.")
Explanation: 올바른 값이 들어올 때까지 입력을 요구할 수 있다.
End of explanation
number_to_square = input("정수를 입력하세요: ")
try:
number = int(number_to_square)
a = 5/(number - 4)
print("결과는", a, "입니다.")
except ValueError:
print("정수를 입력해야 합니다.")
except ZeroDivisionError:
print("4는 빼고 하세요.")
Explanation: 오류 종류에 맞추어 다양한 대처를 하기 위해서는 오류의 종류를 명시하여 예외처리를 하면 된다.
아래 코드는 입력 갑에 따라 다른 오류가 발생하고 그에 상응하는 방식으로 예외처리를 실행한다.
값 오류(ValueError)의 경우
End of explanation
number_to_square = input("A number please: ")
try:
number = int(number_to_square)
a = 5/(number - 4)
print("결과는", a, "입니다.")
except ValueError:
print("정수를 입력해야 합니다.")
except ZeroDivisionError:
print("4는 빼고 하세요.")
Explanation: 0으로 나누기 오류(ZeroDivisionError)의 경우
End of explanation
try:
a = 1/0
except ValueError:
print("This program stops here.")
Explanation: 주의: 이와 같이 발생할 수 예외를 가능한 한 모두 염두하는 프로그램을 구현해야 하는 일은
매우 어려운 일이다.
앞서 보았듯이 오류의 종류를 정확히 알 필요가 발생한다.
다음 예제에서 보듯이 오류의 종류를 틀리게 명시하면 예외 처리가 제대로 작동하지 않는다.
End of explanation
def to_define():
아주 복잡하지만 지금 당장 불필요
raise NotImplementedError("아직 정의되어 있지 않음")
print(to_define())
Explanation: raise 함수
강제로 오류를 발생시키고자 하는 경우에 사용한다.
예제
어떤 함수를 정확히 정의하지 않은 상태에서 다른 중요한 일을 먼저 처리하고자 할 때
아래와 같이 함수를 선언하고 넘어갈 수 있다.
그런데 아래 함수를 제대로 선언하지 않은 채로 다른 곳에서 호출하면
"아직 정의되어 있지 않음"
이란 메시지로 정보를 알려주게 된다.
End of explanation
def to_define1():
아주 복잡하지만 지금 당장 불필요
print(to_define1())
Explanation: 주의: 오류 처리를 사용하지 않으면 오류 메시지가 보이지 않을 수도 있음에 주의해야 한다.
End of explanation
def square(number):
정수를 인자로 입력 받아 제곱을 리턴한다.
square_of_number = number * 2
return square_of_number
Explanation: 코드의 안전성 문제
문법 오류 또는 실행 중에 오류가 발생하지 않는다 하더라도 코드의 안전성이 보장되지는 않는다.
코드의 안정성이라 함은 코드를 실행할 때 기대하는 결과가 산출된다는 것을 보장한다는 의미이다.
예제
아래 코드는 숫자의 제곱을 리턴하는 square() 함수를 제대로 구현하지 못한 경우를 다룬다.
End of explanation
square(3)
Explanation: 위 함수를 아래와 같이 호출하면 오류가 전혀 발생하지 않지만,
엉뚱한 값을 리턴한다.
End of explanation
help(square)
Explanation: 주의: help() 를 이용하여 어떤 함수가 무슨 일을 하는지 내용을 확인할 수 있다.
단, 함수를 정의할 때 함께 적힌 문서화 문자열(docstring) 내용이 확인된다.
따라서, 함수를 정의할 때 문서화 문자열에 가능한 유효한 정보를 입력해 두어야 한다.
End of explanation
number_to_square = input("100을 나눌 숫자를 입력하세요: ")
number = int(number_to_square)
print("100을 입력한 값으로 나눈 결과는", 100/number, "입니다.")
Explanation: 오류에 대한 보다 자세한 정보
파이썬에서 다루는 오류에 대한 보다 자세한 정보는 아래 사이트들에 상세하게 안내되어 있다.
파이썬 기본 내장 오류 정보 문서:
https://docs.python.org/3.4/library/exceptions.html
파이썬 예외처리 정보 문서:
https://docs.python.org/3.4/tutorial/errors.html
연습문제
연습
아래 코드는 100을 입력한 값으로 나누는 함수이다.
다만 0을 입력할 경우 0으로 나누기 오류(ZeroDivisionError)가 발생한다.
End of explanation
number_to_square = input("A number to divide 100: ")
try:
number = float(number_to_square)
print("100을 입력한 값으로 나눈 결과는", 100/number, "입니다.")
except ZeroDivisionError:
raise ZeroDivisionError('0이 아닌 숫자를 입력하세요.')
except ValueError:
raise ValueError('숫자를 입력하세요.')
number_to_square = input("A number to divide 100: ")
try:
number = float(number_to_square)
print("100을 입력한 값으로 나눈 결과는", 100/number, "입니다.")
except ZeroDivisionError:
raise ZeroDivisionError('0이 아닌 숫자를 입력하세요.')
except ValueError:
raise ValueError('숫자를 입력하세요.')
Explanation: 아래 내용이 충족되도록 위 코드를 수정하라.
나눗셈이 부동소수점으로 계산되도록 한다.
0이 아닌 숫자가 입력될 경우 100을 그 숫자로 나눈다.
0이 입력될 경우 0이 아닌 숫자를 입력하라고 전달한다.
숫자가 아닌 값이 입력될 경우 숫자를 입력하라고 전달한다.
견본답안:
End of explanation
while True:
try:
a, b = input("정수 두 개를 입력하세요. 쉼표를 사용해야 합니다.\n").split(',')
a, b = int(a), int(b)
print("계산의 결과는", a/b, "입니다.")
break
except ValueError:
print("정수 두 개를 쉼표로 구분해서 입력해야 합니다.\n")
except ZeroDivisionError:
print("둘째 수는 0이 아니어야 합니다.\n")
Explanation: 연습
두 개의 정수 a와 b를 입력 받아 a/b를 계산하여 출력하는 코드를 작성하라.
견본답안 1:
End of explanation
while True:
try:
a, b = map(int, input("정수 두 개를 입력하세요. 쉼표를 사용해야 합니다.\n").split(','))
print("계산의 결과는", a/b, "입니다.")
break
except ValueError:
print("정수 두 개를 쉼표로 구분해서 입력해야 합니다.\n")
except ZeroDivisionError:
print("둘째 수는 0이 아니어야 합니다.\n")
Explanation: 견본답안 2: map 함수를 활용하여 a, b 각각에 int 함수를 자동으로 적용할 수 있다.
map 함수에 대한 설명은 여기를 참조하면 된다.
End of explanation
while True:
    try:
        print("Enter your height and weight: ")
        a, b = map(float, input().split(", "))
        BMI = b/(a**2)
        if BMI <= 18.5:
            print("Your BMI is", BMI, "- underweight.")
        elif 18.5 < BMI <= 23:
            print("Your BMI is", BMI, "- normal weight.")
        elif 23 < BMI <= 25:
            print("Your BMI is", BMI, "- overweight.")
        elif 25 < BMI <= 30:
            print("Your BMI is", BMI, "- obese.")
        else:
            print("Your BMI is", BMI, "- severely obese.")
        break
    except ValueError:
        print("Please enter numbers.")
    except ZeroDivisionError:
        print("Please enter a number other than 0.")
Explanation: Exercise
Write code that takes height and weight as input and computes the body mass index (BMI).
Refer to the following.
$$BMI = \frac{weight}{height^2}$$
Units:
weight: kg
height: m
Weight classification by BMI value
BMI <= 18.5: underweight
18.5 < BMI <= 23: normal
23 < BMI <= 25: overweight
25 < BMI <= 30: obese
BMI > 30: severely obese
Sample answer:
End of explanation |
726 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing TFNoiseAwareModel
We'll start by testing the textRNN model on a categorical problem from tutorials/crowdsourcing. In particular we'll test for (a) basic performance and (b) proper construction / re-construction of the TF computation graph both after (i) repeated notebook calls, and (ii) with GridSearch in particular.
Step1: Load candidates and training marginals
Step2: Train LogisticRegression
Step3: Train SparseLogisticRegression
Note
Step4: Train basic LSTM
With dev set scoring during execution (note we use test set here to be simple)
Step5: Run GridSearch
Step6: Reload saved model outside of GridSearch
Step7: Reload a model with different structure
Step8: Testing GenerativeModel
Testing GridSearch on crowdsourcing data | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
os.environ['SNORKELDB'] = 'sqlite:///{0}{1}crowdsourcing.db'.format(os.getcwd(), os.sep)
from snorkel import SnorkelSession
session = SnorkelSession()
Explanation: Testing TFNoiseAwareModel
We'll start by testing the textRNN model on a categorical problem from tutorials/crowdsourcing. In particular we'll test for (a) basic performance and (b) proper construction / re-construction of the TF computation graph both after (i) repeated notebook calls, and (ii) with GridSearch in particular.
End of explanation
from snorkel.models import candidate_subclass
from snorkel.contrib.models.text import RawText
Tweet = candidate_subclass('Tweet', ['tweet'], cardinality=5)
train_tweets = session.query(Tweet).filter(Tweet.split == 0).order_by(Tweet.id).all()
len(train_tweets)
from snorkel.annotations import load_marginals
train_marginals = load_marginals(session, train_tweets, split=0)
train_marginals.shape
Explanation: Load candidates and training marginals
End of explanation
# Simple unigram featurizer
def get_unigram_tweet_features(c):
for w in c.tweet.text.split():
yield w, 1
# Construct feature matrix
from snorkel.annotations import FeatureAnnotator
featurizer = FeatureAnnotator(f=get_unigram_tweet_features)
%time F_train = featurizer.apply(split=0)
F_train
%time F_test = featurizer.apply_existing(split=1)
F_test
from snorkel.learning.tensorflow import LogisticRegression
model = LogisticRegression(cardinality=Tweet.cardinality)
model.train(F_train.todense(), train_marginals)
Explanation: Train LogisticRegression
End of explanation
from snorkel.learning.tensorflow import SparseLogisticRegression
model = SparseLogisticRegression(cardinality=Tweet.cardinality)
model.train(F_train, train_marginals, n_epochs=50, print_freq=10)
import numpy as np
test_labels = np.load('crowdsourcing_test_labels.npy')
acc = model.score(F_test, test_labels)
print(acc)
assert acc > 0.6
# Test with batch size s.t. N % batch_size == 1...
model.score(F_test, test_labels, batch_size=9)
Explanation: Train SparseLogisticRegression
Note: Testing doesn't currently work with LogisticRegression above, but no real reason to use that over this...
End of explanation
from snorkel.learning.tensorflow import TextRNN
test_tweets = session.query(Tweet).filter(Tweet.split == 1).order_by(Tweet.id).all()
train_kwargs = {
'dim': 100,
'lr': 0.001,
'n_epochs': 25,
'dropout': 0.2,
'print_freq': 5
}
lstm = TextRNN(seed=123, cardinality=Tweet.cardinality)
lstm.train(train_tweets, train_marginals, X_dev=test_tweets, Y_dev=test_labels, **train_kwargs)
acc = lstm.score(test_tweets, test_labels)
print(acc)
assert acc > 0.60
# Test with batch size s.t. N % batch_size == 1...
lstm.score(test_tweets, test_labels, batch_size=9)
Explanation: Train basic LSTM
With dev set scoring during execution (note we use test set here to be simple)
End of explanation
from snorkel.learning.utils import GridSearch
# Searching over learning rate
param_ranges = {'lr': [1e-3, 1e-4], 'dim': [50, 100]}
model_class_params = {'seed' : 123, 'cardinality': Tweet.cardinality}
model_hyperparams = {
'dim': 100,
'n_epochs': 20,
'dropout': 0.1,
'print_freq': 10
}
searcher = GridSearch(TextRNN, param_ranges, train_tweets, train_marginals,
model_class_params=model_class_params,
model_hyperparams=model_hyperparams)
# Use test set here (just for testing)
lstm, run_stats = searcher.fit(test_tweets, test_labels)
acc = lstm.score(test_tweets, test_labels)
print(acc)
assert acc > 0.60
Explanation: Run GridSearch
End of explanation
lstm = TextRNN(seed=123, cardinality=Tweet.cardinality)
lstm.load('TextRNN_best', save_dir='checkpoints/grid_search')
acc = lstm.score(test_tweets, test_labels)
print(acc)
assert acc > 0.60
Explanation: Reload saved model outside of GridSearch
End of explanation
lstm.load('TextRNN_0', save_dir='checkpoints/grid_search')
acc = lstm.score(test_tweets, test_labels)
print(acc)
assert acc < 0.60
Explanation: Reload a model with different structure
End of explanation
from snorkel.annotations import load_label_matrix
import numpy as np
L_train = load_label_matrix(session, split=0)
train_labels = np.load('crowdsourcing_train_labels.npy')
from snorkel.learning import GenerativeModel
# Searching over learning rate
searcher = GridSearch(GenerativeModel, {'epochs': [0, 10, 30]}, L_train)
# Use training set labels here (just for testing)
gen_model, run_stats = searcher.fit(L_train, train_labels)
acc = gen_model.score(L_train, train_labels)
print(acc)
assert acc > 0.97
Explanation: Testing GenerativeModel
Testing GridSearch on crowdsourcing data
End of explanation |
727 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear control flow between a series of coroutines is easy to manage with the built-in language keyword await. More complicated structures allowing one coroutine to wait for several others to complete in parallel are also possible using tools in asyncio.
Waiting for Multiple Coroutines
It is often useful to divide one operation into many parts and execute them separately. For example, downloading several remote resources or querying remote APIs. In situations where the order of execution doesn’t matter, and where there may be an arbitrary number of operations, wait() can be used to pause one coroutine until the other background operations complete.
Step1: Internally, wait() uses a set to hold the Task instances it creates. This results in them starting, and finishing, in an unpredictable order. The return value from wait() is a tuple containing two sets holding the finished and pending tasks.
There will only be pending operations left if wait() is used with a timeout value.
Step2: Those remaining background operations should either be cancelled or finished by waiting for them. Leaving them pending while the event loop continues will let them execute further, which may not be desirable if the overall operation is considered aborted. Leaving them pending at the end of the process will result in warnings being reported.
Step3: Gathering Results from Coroutines
If the background phases are well-defined, and only the results of those phases matter, then gather() may be more useful for waiting for multiple operations.
Step4: The tasks created by gather are not exposed, so they cannot be cancelled. The return value is a list of results in the same order as the arguments passed to gather(), regardless of the order the background operations actually completed.
Step5: Handling Background Operations as They Finish
as_completed() is a generator that manages the execution of a list of coroutines given to it and produces their results one at a time as they finish running. As with wait(), order is not guaranteed by as_completed(), but it is not necessary to wait for all of the background operations to complete before taking other action.
Step6: This example starts several background phases that finish in the reverse order from which they start. As the generator is consumed, the loop waits for the result of the coroutine using await. | Python Code:
# %load asyncio_wait.py
import asyncio
async def phase(i):
print('in phase {}'.format(i))
await asyncio.sleep(0.1 * i)
print('done with phase {}'.format(i))
return 'phase {} result'.format(i)
async def main(num_phases):
print('starting main')
phases = [
phase(i)
for i in range(num_phases)
]
print('waiting for phases to complete')
completed, pending = await asyncio.wait(phases)
results = [t.result() for t in completed]
print('results: {!r}'.format(results))
event_loop = asyncio.get_event_loop()
try:
event_loop.run_until_complete(main(3))
finally:
event_loop.close()
!python asyncio_wait.py
Explanation: Linear control flow between a series of coroutines is easy to manage with the built-in language keyword await. More complicated structures allowing one coroutine to wait for several others to complete in parallel are also possible using tools in asyncio.
Waiting for Multiple Coroutines
It is often useful to divide one operation into many parts and execute them separately. For example, downloading several remote resources or querying remote APIs. In situations where the order of execution doesn’t matter, and where there may be an arbitrary number of operations, wait() can be used to pause one coroutine until the other background operations complete.
End of explanation
# %load asyncio_wait_timeout.py
import asyncio
async def phase(i):
print('in phase {}'.format(i))
try:
await asyncio.sleep(0.1 * i)
except asyncio.CancelledError:
print('phase {} canceled'.format(i))
raise
else:
print('done with phase {}'.format(i))
return 'phase {} result'.format(i)
async def main(num_phases):
print('starting main')
phases = [
phase(i)
for i in range(num_phases)
]
print('waiting 0.1 for phases to complete')
completed, pending = await asyncio.wait(phases, timeout=0.1)
print('{} completed and {} pending'.format(
len(completed), len(pending),
))
# Cancel remaining tasks so they do not generate errors
# as we exit without finishing them.
if pending:
print('canceling tasks')
for t in pending:
t.cancel()
print('exiting main')
event_loop = asyncio.get_event_loop()
try:
event_loop.run_until_complete(main(3))
finally:
event_loop.close()
Explanation: Internally, wait() uses a set to hold the Task instances it creates. This results in them starting, and finishing, in an unpredictable order. The return value from wait() is a tuple containing two sets holding the finished and pending tasks.
There will only be pending operations left if wait() is used with a timeout value.
End of explanation
!python asyncio_wait_timeout.py
Explanation: Those remaining background operations should either be cancelled or finished by waiting for them. Leaving them pending while the event loop continues will let them execute further, which may not be desirable if the overall operation is considered aborted. Leaving them pending at the end of the process will result in warnings being reported.
End of explanation
# %load asyncio_gather.py
import asyncio
async def phase1():
print('in phase1')
await asyncio.sleep(2)
print('done with phase1')
return 'phase1 result'
async def phase2():
print('in phase2')
await asyncio.sleep(1)
print('done with phase2')
return 'phase2 result'
async def main():
print('starting main')
print('waiting for phases to complete')
results = await asyncio.gather(
phase1(),
phase2(),
)
print('results: {!r}'.format(results))
event_loop = asyncio.get_event_loop()
try:
event_loop.run_until_complete(main())
finally:
event_loop.close()
Explanation: Gathering Results from Coroutines
If the background phases are well-defined, and only the results of those phases matter, then gather() may be more useful for waiting for multiple operations.
End of explanation
!python asyncio_gather.py
Explanation: The tasks created by gather are not exposed, so they cannot be cancelled. The return value is a list of results in the same order as the arguments passed to gather(), regardless of the order the background operations actually completed.
End of explanation
# %load asyncio_as_completed.py
import asyncio
async def phase(i):
print('in phase {}'.format(i))
await asyncio.sleep(0.5 - (0.1 * i))
print('done with phase {}'.format(i))
return 'phase {} result'.format(i)
async def main(num_phases):
print('starting main')
phases = [
phase(i)
for i in range(num_phases)
]
print('waiting for phases to complete')
results = []
for next_to_complete in asyncio.as_completed(phases):
answer = await next_to_complete
print('received answer {!r}'.format(answer))
results.append(answer)
print('results: {!r}'.format(results))
return results
event_loop = asyncio.get_event_loop()
try:
event_loop.run_until_complete(main(3))
finally:
event_loop.close()
Explanation: Handling Background Operations as They Finish
as_completed() is a generator that manages the execution of a list of coroutines given to it and produces their results one at a time as they finish running. As with wait(), order is not guaranteed by as_completed(), but it is not necessary to wait for all of the background operations to complete before taking other action.
End of explanation
!python asyncio_as_completed.py
Explanation: This example starts several background phases that finish in the reverse order from which they start. As the generator is consumed, the loop waits for the result of the coroutine using await.
End of explanation |
728 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multi-Dimensional Integration with MCMC
By Megan Bedell (Flatiron Institute)
10 September 2019
Problem 1
Step1: Problem 1a
Plot the data. Let's take a look at what we're working with!
Step2: Problem 1b
Write the sinusoid function that we want to fit and get ready to run MCMC with helper functions.
First let's write a "get_model_predictions" function - this will resemble yesterday's same-named function, but instead of returning a line it should return a sinusoid. I suggest using the following free parameters, although there are a few alternative options that you may use instead
Step3: Write a lnprior function with flat priors on all parameters - again, this will be similar to yesterday's function, but with different values.
Hint
Step4: The following functions can be reused as-is from the previous day's Metropolis-Hastings exercise, so just copy-and-paste or import them
Step5: Problem 1c
Run the MCMC.
Let's start with initialization values.
To save some time, I will assert that if we made a Lomb-Scargle periodogram of the RVs, there would be a peak near period = 3.53 days, so start with that guess and let's figure out what the best values might be for the other parameters.
(If you finish early and are up for a bonus problem, you can double-check my assertion using astropy timeseries!)
Step6: Now run the MCMC for 5000 steps. I'll give you (the diagonal of a) covariance matrix for the multi-dimensional Gaussian proposal function to start with. As you saw yesterday afternoon, this cov parameter sets the step sizes that the M-H algorithm will use when it proposes new values.
Step7: Do a pairs plot for the first two parameters. Does the behavior of this chain seem efficient?
Step8: Problem 1d
There were a couple of issues with the previous MCMC run. Let's start with this one
Step9: Plot the data points and your best-fit model. Does the fit look reasonable? (You may need to zoom into a small time range to tell.)
Step10: Another way to see if we're on the right track is to plot the data phased to the orbital period that we found. Do that and optionally overplot the phased model as well.
Step11: Now re-run the MCMC using these parameters as the initial values and make another pairs plot. Again, I'm going to give you some step size parameters to start with. Because we're now initializing the chain close to the likelihood maximum, we don't want it to move too far away, so I've lowered the values of cov.
Step12: Problem 1e
Now let's tackle another issues
Step13: Writing an autocorrelation function for this purpose actually gets a bit tricky, so we'll use the built-in functionality of emcee.
For the documentation on these functions, check the emcee user guide.
For a more in-depth look at how this is calculated and why it's tricky, check out this tutorial.
Step14: Problem 1f
Change the step size of the MCMC. What does this do to the auto-correlation length? Does this seem better or worse, and why?
Step15: Problem 1g
Using the step sizes and starting conditions that you deem best, run your MCMC for at least 500x the auto-correlation length to get a large number of independent samples. Plot the posterior distribution of radial velocity semi-amplitude K. This parameter is arguably the most important output of an RV fit, because it is a measurement of the mass of the planet.
Step16: From these results, what can we say about the true value of K? What is the probability that K > 84 m/s? 85 m/s? 90 m/s? Are these numbers a reliable estimator of the true probability, in your opinion?
Step17: Challenge Problem 1h
Try some different values of cov[0] (the step size for the orbital period). Make a plot of the acceptance fraction as a function of step size. Does this make sense?
Challenge Problem 1i
For different values of cov[0], plot the correlation length. Does this make sense?
Problem 2
Step18: Problem 2a
Again, let's start by plotting the data. Make plots of the time series and the time series phased to a period of 111.4 days.
Step19: This planet's orbit should look pretty different from a sine wave!
Problem 2b
Remake the get_model_predictions and lnprior functions to fit a Keplerian.
Since this is a bit in the weeds of astronomy for the purposes of this workshop, I've gone ahead and written a solver for Kepler's equation and a get_model_predictions function that will deliver RVs for you. Read over the docstring and use the information given there to write a lnprior function for theta.
Step20: Problem 2c
Play around with the starting parameters until you're convinced that you have a reasonable fit.
Step21: Problem 2d
Run the MCMC for 1000 steps and plot a trace of the eccentricity parameter. How efficiently is it running?
Optional challenge
Step22: Problem 2e
Make a corner plot of the results. Which parameters seem most correlated? Which are most and least well-constrained by the data?
Step23: Problem 2f
Ford et al. (2006) suggest mitigating this issue by reparameterizing the orbital parameters $e$ and $\omega$ as $e cos\omega$ and $e sin\omega$. Modify the get_model_predictions and lnprior functions accordingly and rerun the MCMC. Does performance improve?
Note | Python Code:
datafile = 'https://exoplanetarchive.ipac.caltech.edu/data/ExoData/0108/0108859/data/UID_0108859_RVC_001.tbl'
data = pd.read_fwf(datafile, header=0, names=['t', 'rv', 'rv_err'], skiprows=22)
data['t'] -= data['t'][0]
Explanation: Multi-Dimensional Integration with MCMC
By Megan Bedell (Flatiron Institute)
10 September 2019
Problem 1: Fitting a Sinusoid to Data
In this example, we will download a time series of radial velocities for the star HD209458. This star hosts a Hot Jupiter exoplanet. In fact, this planet was the first to be seen in transit and was discovered 20 years ago yesterday!
Because the eccentricity is low for this planet, we can fit its orbit in the radial velocities with a relatively simple model: a sinusoid.
Below is a snippet of code that will download the time-series data from NASA Exoplanet Archive:
End of explanation
fig, ax = plt.subplots()
ax.errorbar( # complete
ax.set_xlabel('Time (days)')
ax.set_ylabel(r'RV (m s$^{-1}$)');
Explanation: Problem 1a
Plot the data. Let's take a look at what we're working with!
End of explanation
def get_model_predictions(theta, t):
'''
Calculate RV predictions for parameters theta and timestamps t.
'''
period, amplitude, t0, rv0 = theta
model_preds = # complete
return model_preds
Explanation: Problem 1b
Write the sinusoid function that we want to fit and get ready to run MCMC with helper functions.
First let's write a "get_model_predictions" function - this will resemble yesterday's same-named function, but instead of returning a line it should return a sinusoid. I suggest using the following free parameters, although there are a few alternative options that you may use instead:
theta = [p, # period of the sinusoid
a, # semi-amplitude of the sinusoid
t0, # reference x at which sine phase = 0
rv0] # constant offset in y
The RV prediction is then:
$$RV(t) = a \sin\Big(\frac{2\pi}{p} (t - t_0)\Big) + rv_0$$
End of explanation
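# Added sketch, not the official solution: one possible way to fill in the sinusoid
# model above, assuming numpy has been imported as np as in the rest of this notebook.
def get_model_predictions_sketch(theta, t):
    period, amplitude, t0, rv0 = theta
    return amplitude * np.sin(2. * np.pi / period * (t - t0)) + rv0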
def lnprior(theta):
period, amplitude, t0, rv0 = theta
if 0 < period <= 1e4 and # complete
lnp = np.log(1e-4) + # complete
else:
return -np.inf
return lnp
Explanation: Write a lnprior function with flat priors on all parameters - again, this will be similar to yesterday's function, but with different values.
Hint: some of the bounds on these parameters will be physically motivated (i.e. orbital period cannot be negative). For others, you'll need to guess something reasonable but generous - i.e., a Hot Jupiter planet probably does not have an orbital period above a year or so.
End of explanation
def lnlikelihood(theta, y, x, y_unc):
model_preds = get_model_predictions(theta, x)
lnl = -np.sum((y-model_preds)**2/(2*y_unc**2))
return lnl
def lnposterior(theta, y, x, y_unc):
lnp = lnprior(theta)
if not np.isfinite(lnp):
return -np.inf
lnl = lnlikelihood(theta, y, x, y_unc)
lnpost = lnl + lnp
return lnpost
def hastings_ratio(theta_1, theta_0, y, x, y_unc):
lnpost1 = lnposterior(theta_1, y, x, y_unc)
lnpost0 = lnposterior(theta_0, y, x, y_unc)
h_ratio = np.exp(lnpost1 - lnpost0)
return h_ratio
def propose_jump(theta, cov):
if np.shape(theta) == np.shape(cov):
cov = np.diag(np.array(cov)**2)
proposed_position = np.random.multivariate_normal(theta, cov)
return proposed_position
def mh_mcmc(theta_0, cov, nsteps, y, x, y_unc):
positions = np.zeros((nsteps+1, len(theta_0)))
lnpost_at_pos = -np.inf*np.ones(nsteps+1)
acceptance_ratio = np.zeros_like(lnpost_at_pos)
accepted = 0
positions[0] = theta_0
lnpost_at_pos[0] = lnposterior(theta_0, y, x, y_unc)
for step_num in np.arange(1, nsteps+1):
proposal = propose_jump(positions[step_num-1], cov)
H = hastings_ratio(proposal, positions[step_num-1], y, x, y_unc)
R = np.random.uniform()
if H > R:
accepted += 1
positions[step_num] = proposal
lnpost_at_pos[step_num] = lnposterior(proposal, y, x, y_unc)
acceptance_ratio[step_num] = float(accepted)/step_num
else:
positions[step_num] = positions[step_num-1]
lnpost_at_pos[step_num] = lnpost_at_pos[step_num-1]
acceptance_ratio[step_num] = float(accepted)/step_num
return (positions, lnpost_at_pos, acceptance_ratio)
Explanation: The following functions can be reused as-is from the previous day's Metropolis-Hastings exercise, so just copy-and-paste or import them:
lnlikelihood, lnposterior, hastings_ratio, propose_jump, mh_mcmc
End of explanation
theta_0 = [3.53, # complete
Explanation: Problem 1c
Run the MCMC.
Let's start with initialization values.
To save some time, I will assert that if we made a Lomb-Scargle periodogram of the RVs, there would be a peak near period = 3.53 days, so start with that guess and let's figure out what the best values might be for the other parameters.
(If you finish early and are up for a bonus problem, you can double-check my assertion using astropy timeseries!)
End of explanation
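# Added sketch for the bonus check above (assuming astropy is installed and numpy is
# imported as np): the strongest Lomb-Scargle peak should sit near a 3.53-day period.
from astropy.timeseries import LombScargle
frequency, power = LombScargle(data['t'], data['rv'], data['rv_err']).autopower()
print('Strongest periodogram peak at {0:.3f} days'.format(1. / frequency[np.argmax(power)]))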
cov = [0.01, 1, 0.05, 0.01]
pos, lnpost, acc = mh_mcmc( # complete
Explanation: Now run the MCMC for 5000 steps. I'll give you (the diagonal of a) covariance matrix for the multi-dimensional Gaussian proposal function to start with. As you saw yesterday afternoon, this cov parameter sets the step sizes that the M-H algorithm will use when it proposes new values.
End of explanation
fig, ax = plt.subplots()
ax.plot( # complete
ax.plot(theta_0[0], theta_0[1], '*', ms=30,
mfc='Crimson', mec='0.8', mew=2,
alpha=0.7)
ax.set_xlabel('Period', fontsize=14)
ax.set_ylabel(r'K (m s$^{-1}$)', fontsize=14)
fig.tight_layout()
Explanation: Do a pairs plot for the first two parameters. Does the behavior of this chain seem efficient?
End of explanation
def nll(*par):
#complete
res = minimize(nll, theta_0,
args=(data['rv'], data['t'], data['rv_err']),
method='Powell')
print('Optimizer finished with message "{0}" and \n\
best-fit parameters {1}'.format(res['message'], res['x']))
Explanation: Problem 1d
There were a couple of issues with the previous MCMC run. Let's start with this one: we started the chains running at a place that was not very close to the best-fit solution.
Find a better set of initialization values by optimizing before we run the MCMC.
We'll use scipy.optimize.minimize to get best-fit parameters. Remember that the lnlikelihood function needs to be maximized not minimized, so we'll need a new function that works the same way, but negative.
End of explanation
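# Added sketch (one possible way to fill in nll above; the name is hypothetical):
# minimize() minimizes, so hand it the *negative* log-likelihood.
def nll_sketch(theta, y, x, y_unc):
    return -1. * lnlikelihood(theta, y, x, y_unc)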
# complete
plt.xlabel('Time (days)')
plt.ylabel(r'RV (m s$^{-1}$)');
Explanation: Plot the data points and your best-fit model. Does the fit look reasonable? (You may need to zoom into a small time range to tell.)
End of explanation
period, amplitude, t0, rv0 = res['x']
fig, ax = plt.subplots()
phased_t = (data['t'] - t0) % period
# complete
Explanation: Another way to see if we're on the right track is to plot the data phased to the orbital period that we found. Do that and optionally overplot the phased model as well.
End of explanation
theta_bestfit = res['x']
cov = [0.001, 0.1, 0.01, 0.1]
pos, lnpost, acc = mh_mcmc( # complete
fig, ax = plt.subplots()
ax.plot( # complete
ax.plot(theta_bestfit[0], theta_bestfit[1], '*', ms=30,
mfc='Crimson', mec='0.8', mew=2,
alpha=0.7)
ax.set_xlabel('Period', fontsize=14)
ax.set_ylabel(r'K (m s$^{-1}$)', fontsize=14)
fig.tight_layout()
Explanation: Now re-run the MCMC using these parameters as the initial values and make another pairs plot. Again, I'm going to give you some step size parameters to start with. Because we're now initializing the chain close to the likelihood maximum, we don't want it to move too far away, so I've lowered the values of cov.
End of explanation
plt.plot( # complete
Explanation: Problem 1e
Now let's tackle another issue: chain efficiency. Calculate the auto-correlation length of your chain.
First, let's just plot the sequence of orbital period values in the chain in a trace plot. From eyeballing this sequence, about how many steps do you think are needed to reach a sample that is independent from the previous one(s)?
End of explanation
acf = emcee.autocorr.function_1d(pos[:,0])
plt.plot(acf)
plt.xlabel('Lag')
plt.ylabel('Normalized ACF');
act = emcee.autocorr.integrated_time(pos[:,0], quiet=True)
print('The integrated autocorrelation time is estimated as: {0}'.format(act))
Explanation: Writing an autocorrelation function for this purpose actually gets a bit tricky, so we'll use the built-in functionality of emcee.
For the documentation on these functions, check the emcee user guide.
For a more in-depth look at how this is calculated and why it's tricky, check out this tutorial.
End of explanation
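# Added sketch for intuition: a naive numpy-only normalized autocorrelation function;
# emcee's implementation is more careful, but the idea is the same.
def naive_acf(chain):
    x = chain - np.mean(chain)
    acf_full = np.correlate(x, x, mode='full')[len(x) - 1:]
    return acf_full / acf_full[0]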
cov = [0.0001, 0.1, 0.01, 0.1]
pos, lnpost, acc = mh_mcmc( # complete
plt.plot( # complete
acf = # complete
act = # complete
Explanation: Problem 1f
Change the step size of the MCMC. What does this do to the auto-correlation length? Does this seem better or worse, and why?
End of explanation
pos, lnpost, acc = mh_mcmc(# complete
plt.hist( # complete
plt.xlabel(r'K (m s$^{-1}$)');
Explanation: Problem 1g
Using the step sizes and starting conditions that you deem best, run your MCMC for at least 500x the auto-correlation length to get a large number of independent samples. Plot the posterior distribution of radial velocity semi-amplitude K. This parameter is arguably the most important output of an RV fit, because it is a measurement of the mass of the planet.
End of explanation
print('The probability that K > 84 m/s is: {0:.2f}'.format( # complete
print('The probability that K > 85 m/s is: {0:.2f}'.format( # complete
print('The probability that K > 90 m/s is: {0:.2f}'.format( # complete
Explanation: From these results, what can we say about the true value of K? What is the probability that K > 84 m/s? 85 m/s? 90 m/s? Are these numbers a reliable estimator of the true probability, in your opinion?
End of explanation
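# Added sketch: a posterior tail probability is just the fraction of samples above
# the threshold (assuming K is stored in the second column of pos, as above).
print('P(K > 84 m/s) ~ {0:.2f}'.format(np.mean(pos[:, 1] > 84.)))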
datafile = 'https://exoplanetarchive.ipac.caltech.edu/data/ExoData/0045/0045982/data/UID_0045982_RVC_006.tbl'
data = pd.read_fwf(datafile, header=0, names=['t', 'rv', 'rv_err'], skiprows=21)
data['t'] -= data['t'][0]
Explanation: Challenge Problem 1h
Try some different values of cov[0] (the step size for the orbital period). Make a plot of the acceptance fraction as a function of step size. Does this make sense?
Challenge Problem 1i
For different values of cov[0], plot the correlation length. Does this make sense?
Problem 2: Fitting a Keplerian to Data
In the previous example, the orbit we were fitting had negligible eccentricity, so we were able to fit it with a sinusoid. In this example, we'll look at the high-eccentricity planet HD 80606b and fit a full Keplerian model to its RV data. This requires introducing some new free parameters to the model, which as we will see are not always straightforward to sample!
End of explanation
fig, ax = plt.subplots()
ax.errorbar( # complete
ax.set_xlabel('Time (days)')
ax.set_ylabel(r'RV (m s$^{-1}$)');
# phased plot goes here
Explanation: Problem 2a
Again, let's start by plotting the data. Make plots of the time series and the time series phased to a period of 111.4 days.
End of explanation
def calc_ea(ma, ecc):
# Kepler solver - calculates eccentric anomaly
tolerance = 1e-3
ea = np.copy(ma)
while True:
diff = ea - ecc * np.sin(ea) - ma
ea -= diff / (1. - ecc * np.cos(ea))
        if np.all(np.abs(diff) <= tolerance):
break
return ea
def get_model_predictions(theta, t):
'''
Calculate Keplerian orbital RVs
Input
-----
theta : list
A list of values for the following parameters:
Orbital period,
RV semi-amplitude,
eccentricity (between 0-1),
omega (argument of periastron; an angle in radians
denoting the orbital phase where the planet
passes closest to the host star)
Tp (time of periastron; reference timestamp for the above)
RV0 (constant RV offset)
t : list or array
Timestamps at which to calculate the RV
Returns
-------
rvs : list or array
Predicted RVs at the input times.
'''
P, K, ecc, omega, tp, rv0 = theta
ma = 2. * np.pi / P * (t - tp) # mean anomaly
ea = calc_ea(ma, ecc) # eccentric anomaly
f = 2.0 * np.arctan2(np.sqrt(1+ecc)*np.sin(ea/2.0),
np.sqrt(1-ecc)*np.cos(ea/2.0)) # true anomaly
rvs = - K * (np.cos(omega + f) + ecc*np.cos(omega))
return rvs + rv0
def lnprior(theta):
# complete
Explanation: This planet's orbit should look pretty different from a sine wave!
Problem 2b
Remake the get_model_predictions and lnprior functions to fit a Keplerian.
Since this is a bit in the weeds of astronomy for the purposes of this workshop, I've gone ahead and written a solver for Kepler's equation and a get_model_predictions function that will deliver RVs for you. Read over the docstring and use the information given there to write a lnprior function for theta.
End of explanation
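# Added sketch, not the official solution: one reasonable set of flat priors for the
# Keplerian parameters described in the docstring above (the exact bounds, and leaving
# tp unconstrained, are judgment calls).
def lnprior_sketch(theta):
    P, K, ecc, omega, tp, rv0 = theta
    if 0. < P <= 1e4 and 0. < K <= 1e4 and 0. <= ecc < 1. \
            and 0. <= omega < 2. * np.pi and -1e4 <= rv0 <= 1e4:
        return 0.0   # constant log-prior inside the allowed box
    return -np.inf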
theta_0 = # complete
plt.errorbar(data['t'], data['rv'], data['rv_err'],
fmt='o', ms=4)
xs = np.linspace(900, 1050, 1000)
plt.plot(xs, get_model_predictions(theta_0, xs), c='DarkOrange')
plt.xlim([900,1050])
plt.xlabel('Time (days)')
plt.ylabel(r'RV (m s$^{-1}$)');
Explanation: Problem 2c
Play around with the starting parameters until you're convinced that you have a reasonable fit.
End of explanation
cov = [0.1, 100, 0.01, 0.1, 0.1, 100]
pos, lnpost, acc = mh_mcmc( # complete
#complete
plt.ylabel('Eccentricity')
plt.xlabel('Step');
Explanation: Problem 2d
Run the MCMC for 1000 steps and plot a trace of the eccentricity parameter. How efficiently is it running?
Optional challenge: if you wrote a Gibbs sampler yesterday, use that instead of Metropolis-Hastings here!
End of explanation
corner.corner( # complete
Explanation: Problem 2e
Make a corner plot of the results. Which parameters seem most correlated? Which are most and least well-constrained by the data?
End of explanation
def get_model_predictions(theta, t):
# complete
def lnprior(theta):
# complete
theta_0 = # complete
cov = # complete
pos, lnpost, acc = mh_mcmc( # complete
# complete
plt.ylabel('ecosw')
plt.xlabel('Step');
corner.corner( # complete
Explanation: Problem 2f
Ford et al. (2006) suggest mitigating this issue by reparameterizing the orbital parameters $e$ and $\omega$ as $e cos\omega$ and $e sin\omega$. Modify the get_model_predictions and lnprior functions accordingly and rerun the MCMC. Does performance improve?
Note: the efficiency of a basic MCMC in this situation is never going to be excellent. We'll talk more about challenging cases like this and how to deal with them in later lectures!
End of explanation |
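# Added sketch (hypothetical helper, assuming numpy as np): the reparameterization
# samples (e*cos(omega), e*sin(omega)) and converts back before evaluating the model.
def ecc_omega_from_ecosw_esinw(ecosw, esinw):
    ecc = np.sqrt(ecosw**2 + esinw**2)
    omega = np.arctan2(esinw, ecosw)
    return ecc, omega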
729 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced tour of the Bayesian Optimization package
Step1: 1. Suggest-Evaluate-Register Paradigm
Internally the maximize method is simply a wrapper around the methods suggest, probe, and register. If you need more control over your optimization loops the Suggest-Evaluate-Register paradigm should give you that extra flexibility.
For an example of running the BayesianOptimization in a distributed fashion (where the function being optimized is evaluated concurrently in different cores/machines/servers), checkout the async_optimization.py script in the examples folder.
Step2: Notice that the evaluation of the blackbox function will NOT be carried out by the optimizer object. We are simulating a situation where this function could be being executed in a different machine, maybe it is written in another language, or it could even be the result of a chemistry experiment. Whatever the case may be, you can take charge of it and as long as you don't invoke the probe or maximize methods directly, the optimizer object will ignore the blackbox function.
Step3: One extra ingredient we will need is an UtilityFunction instance. In case it is not clear why, take a look at the literature to understand better how this method works.
Step4: The suggest method of our optimizer can be called at any time. What you get back is a suggestion for the next parameter combination the optimizer wants to probe.
Notice that while the optimizer hasn't observed any points, the suggestions will be random. However, they will stop being random and improve in quality the more points are observed.
Step5: You are now free to evaluate your function at the suggested point however/whenever you like.
Step6: Last thing left to do is to tell the optimizer what target value was observed.
Step7: 1.1 The maximize loop
And that's it. By repeating the steps above you recreate the internals of the maximize method. This should give you all the flexibility you need to log progress, hault execution, perform concurrent evaluations, etc.
Step8: 2. Dealing with discrete parameters
There is no principled way of dealing with discrete parameters using this package.
Ok, now that we got that out of the way, how do you do it? You're bound to be in a situation where some of your function's parameters may only take on discrete values. Unfortunately, the nature of bayesian optimization with gaussian processes doesn't allow for an easy/intuitive way of dealing with discrete parameters - but that doesn't mean it is impossible. The example below showcases a simple, yet reasonably adequate, way to dealing with discrete parameters.
Step9: 3. Tuning the underlying Gaussian Process
The bayesian optimization algorithm works by performing a gaussian process regression of the observed combination of parameters and their associated target values. The predicted parameter$\rightarrow$target hyper-surface (and its uncertainty) is then used to guide the next best point to probe.
3.1 Passing parameter to the GP
Depending on the problem it could be beneficial to change the default parameters of the underlying GP. You can simply pass GP parameters to the maximize method directly as you can see below
Step10: Another alternative, specially useful if you're calling maximize multiple times or optimizing outside the maximize loop, is to call the set_gp_params method.
Step12: 3.2 Tuning the alpha parameter
When dealing with functions with discrete parameters,or particularly erratic target space it might be beneficial to increase the value of the alpha parameter. This parameters controls how much noise the GP can handle, so increase it whenever you think that extra flexibility is needed.
3.3 Changing kernels
By default this package uses the Mattern 2.5 kernel. Depending on your use case you may find that tunning the GP kernel could be beneficial. You're on your own here since these are very specific solutions to very specific problems.
Observers Continued
Observers are objects that subscribe and listen to particular events fired by the BayesianOptimization object.
When an event gets fired a callback function is called with the event and the BayesianOptimization instance passed as parameters. The callback can be specified at the time of subscription. If none is given it will look for an update method from the observer.
Step13: Alternatively you have the option to pass a completely different callback.
Step14: For a list of all default events you can checkout DEFAULT_EVENTS | Python Code:
from bayes_opt import BayesianOptimization
Explanation: Advanced tour of the Bayesian Optimization package
End of explanation
# Let's start by defining our function, bounds, and instantiating an optimization object.
def black_box_function(x, y):
return -x ** 2 - (y - 1) ** 2 + 1
Explanation: 1. Suggest-Evaluate-Register Paradigm
Internally the maximize method is simply a wrapper around the methods suggest, probe, and register. If you need more control over your optimization loops the Suggest-Evaluate-Register paradigm should give you that extra flexibility.
For an example of running the BayesianOptimization in a distributed fashion (where the function being optimized is evaluated concurrently in different cores/machines/servers), checkout the async_optimization.py script in the examples folder.
End of explanation
optimizer = BayesianOptimization(
f=None,
pbounds={'x': (-2, 2), 'y': (-3, 3)},
verbose=2,
random_state=1,
)
Explanation: Notice that the evaluation of the blackbox function will NOT be carried out by the optimizer object. We are simulating a situation where this function could be being executed in a different machine, maybe it is written in another language, or it could even be the result of a chemistry experiment. Whatever the case may be, you can take charge of it and as long as you don't invoke the probe or maximize methods directly, the optimizer object will ignore the blackbox function.
End of explanation
from bayes_opt import UtilityFunction
utility = UtilityFunction(kind="ucb", kappa=2.5, xi=0.0)
Explanation: One extra ingredient we will need is an UtilityFunction instance. In case it is not clear why, take a look at the literature to understand better how this method works.
End of explanation
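# For reference (added summary, not quoted from the package docs): kind="ucb" is the
# standard upper confidence bound acquisition, a_UCB(x) = mu(x) + kappa * sigma(x),
# where mu and sigma are the GP posterior mean and standard deviation; larger kappa
# favours exploration over exploitation.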
next_point_to_probe = optimizer.suggest(utility)
print("Next point to probe is:", next_point_to_probe)
Explanation: The suggest method of our optimizer can be called at any time. What you get back is a suggestion for the next parameter combination the optimizer wants to probe.
Notice that while the optimizer hasn't observed any points, the suggestions will be random. However, they will stop being random and improve in quality the more points are observed.
End of explanation
target = black_box_function(**next_point_to_probe)
print("Found the target value to be:", target)
Explanation: You are now free to evaluate your function at the suggested point however/whenever you like.
End of explanation
optimizer.register(
params=next_point_to_probe,
target=target,
)
Explanation: Last thing left to do is to tell the optimizer what target value was observed.
End of explanation
for _ in range(5):
next_point = optimizer.suggest(utility)
target = black_box_function(**next_point)
optimizer.register(params=next_point, target=target)
print(target, next_point)
print(optimizer.max)
Explanation: 1.1 The maximize loop
And that's it. By repeating the steps above you recreate the internals of the maximize method. This should give you all the flexibility you need to log progress, halt execution, perform concurrent evaluations, etc.
End of explanation
def func_with_discrete_params(x, y, d):
# Simulate necessity of having d being discrete.
assert type(d) == int
return ((x + y + d) // (1 + d)) / (1 + (x + y) ** 2)
def function_to_be_optimized(x, y, w):
d = int(w)
return func_with_discrete_params(x, y, d)
optimizer = BayesianOptimization(
f=function_to_be_optimized,
pbounds={'x': (-10, 10), 'y': (-10, 10), 'w': (0, 5)},
verbose=2,
random_state=1,
)
optimizer.maximize(alpha=1e-3)
Explanation: 2. Dealing with discrete parameters
There is no principled way of dealing with discrete parameters using this package.
Ok, now that we got that out of the way, how do you do it? You're bound to be in a situation where some of your function's parameters may only take on discrete values. Unfortunately, the nature of bayesian optimization with gaussian processes doesn't allow for an easy/intuitive way of dealing with discrete parameters - but that doesn't mean it is impossible. The example below showcases a simple, yet reasonably adequate, way of dealing with discrete parameters.
End of explanation
optimizer = BayesianOptimization(
f=black_box_function,
pbounds={'x': (-2, 2), 'y': (-3, 3)},
verbose=2,
random_state=1,
)
optimizer.maximize(
init_points=1,
n_iter=5,
# What follows are GP regressor parameters
alpha=1e-3,
n_restarts_optimizer=5
)
Explanation: 3. Tuning the underlying Gaussian Process
The bayesian optimization algorithm works by performing a gaussian process regression of the observed combination of parameters and their associated target values. The predicted parameter$\rightarrow$target hyper-surface (and its uncertainty) is then used to guide the next best point to probe.
3.1 Passing parameter to the GP
Depending on the problem it could be beneficial to change the default parameters of the underlying GP. You can simply pass GP parameters to the maximize method directly as you can see below:
End of explanation
optimizer.set_gp_params(normalize_y=True)
Explanation: Another alternative, especially useful if you're calling maximize multiple times or optimizing outside the maximize loop, is to call the set_gp_params method.
End of explanation
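# Added sketch (assuming scikit-learn's kernel classes): the same method covers the GP
# tweaks discussed in sections 3.2 and 3.3 below, i.e. raising alpha for noisier targets
# or swapping the kernel.
from sklearn.gaussian_process.kernels import Matern, RBF
optimizer.set_gp_params(alpha=1e-3)                      # tolerate a noisier target
optimizer.set_gp_params(kernel=Matern(nu=2.5))           # the package default, made explicit
# optimizer.set_gp_params(kernel=RBF(length_scale=1.0))  # or experiment with another kernel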
from bayes_opt.event import DEFAULT_EVENTS, Events
optimizer = BayesianOptimization(
f=black_box_function,
pbounds={'x': (-2, 2), 'y': (-3, 3)},
verbose=2,
random_state=1,
)
class BasicObserver:
def update(self, event, instance):
        """Does whatever you want with the event and `BayesianOptimization` instance."""
print("Event `{}` was observed".format(event))
my_observer = BasicObserver()
optimizer.subscribe(
event=Events.OPTIMIZATION_STEP,
subscriber=my_observer,
callback=None, # Will use the `update` method as callback
)
Explanation: 3.2 Tuning the alpha parameter
When dealing with functions with discrete parameters, or a particularly erratic target space, it might be beneficial to increase the value of the alpha parameter. This parameter controls how much noise the GP can handle, so increase it whenever you think that extra flexibility is needed.
3.3 Changing kernels
By default this package uses the Matérn 2.5 kernel. Depending on your use case you may find that tuning the GP kernel could be beneficial. You're on your own here since these are very specific solutions to very specific problems.
Observers Continued
Observers are objects that subscribe and listen to particular events fired by the BayesianOptimization object.
When an event gets fired a callback function is called with the event and the BayesianOptimization instance passed as parameters. The callback can be specified at the time of subscription. If none is given it will look for an update method from the observer.
End of explanation
def my_callback(event, instance):
print("Go nuts here!")
optimizer.subscribe(
event=Events.OPTIMIZATION_START,
subscriber="Any hashable object",
callback=my_callback,
)
optimizer.maximize(init_points=1, n_iter=2)
Explanation: Alternatively you have the option to pass a completely different callback.
End of explanation
DEFAULT_EVENTS
Explanation: For a list of all default events you can checkout DEFAULT_EVENTS
End of explanation |
730 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
project bonhomie ${t\bar{t}H}$ and ${t\bar{t}b\bar{b}}$ classification variables preparation
This notebook takes ROOT files of ${t\bar{t}H}$ and ${t\bar{t}b\bar{b}}$ samples, applies a selection, impudes some values and then exports the resulting data to CSV.
Step1: selection
Step2: imputation
Step3: clustered correlations ${t\bar{t}H}$
Step4: clustered correlations ${t\bar{t}b\bar{b}}$
Step5: strongest absolute correlations with classifications
Step6: rescale
Step7: save to CSV | Python Code:
import datetime
import keras
from keras import activations
from keras.datasets import mnist
from keras.layers import Dense, Flatten
from keras.layers import Conv1D, Conv2D, MaxPooling1D, MaxPooling2D, Dropout
from keras.models import Sequential
from keras.utils import plot_model
from matplotlib import gridspec
import matplotlib.pylab as plt
from matplotlib.ticker import NullFormatter, NullLocator, MultipleLocator
import pandas as pd
pd.set_option("display.max_rows", 500)
pd.set_option("display.max_columns", 500)
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import seaborn as sns
sns.set(style = 'ticks')
sns.set_palette('husl')
import sqlite3
import talos as ta
from vis.visualization import visualize_activation
from vis.visualization import visualize_saliency
from vis.utils import utils
import warnings
warnings.filterwarnings("ignore")
import root_pandas
%matplotlib inline
plt.rcParams["figure.figsize"] = [17, 14]
variables = [
"nElectrons",
"nMuons",
"nJets",
"nBTags_70",
"dRbb_avg_Sort4",
"dRbb_MaxPt_Sort4",
"dEtajj_MaxdEta",
"Mbb_MindR_Sort4",
"Mjj_MindR",
"nHiggsbb30_Sort4",
"HT_jets",
"dRlepbb_MindR_Sort4",
"Aplanarity_jets",
"H1_all",
"TTHReco_best_TTHReco",
"TTHReco_best_Higgs_mass",
"TTHReco_best_Higgsbleptop_mass",
"TTHReco_best_bbHiggs_dR",
"TTHReco_withH_best_Higgsttbar_dR",
"TTHReco_best_Higgsleptop_dR",
"TTHReco_best_b1Higgsbhadtop_dR",
"LHD_Discriminant"
]
filenames_ttH = ["ttH_group.phys-higgs.11468583._000005.out.root"]
filenames_ttbb = ["ttbb_group.phys-higgs.11468624._000005.out.root"]
ttH = root_pandas.read_root(filenames_ttH, "nominal_Loose", columns = variables)
ttbb = root_pandas.read_root(filenames_ttbb, "nominal_Loose", columns = variables)
ttH["classification"] = 1
ttbb["classification"] = 0
df = pd.concat([ttH, ttbb])
df.head()
Explanation: project bonhomie ${t\bar{t}H}$ and ${t\bar{t}b\bar{b}}$ classification variables preparation
This notebook takes ROOT files of ${t\bar{t}H}$ and ${t\bar{t}b\bar{b}}$ samples, applies a selection, imputes some values and then exports the resulting data to CSV.
End of explanation
selection_ejets = "(nElectrons == 1) & (nJets >= 4)"
selection_mujets = "(nMuons == 1) & (nJets >= 4)"
selection_ejets_5JE4BI = "(nElectrons == 1) & (nJets == 4) & (nBTags_70 >= 4)"
selection_ejets_6JI4BI = "(nElectrons == 1) & (nJets == 6) & (nBTags_70 >= 4)"
df = df.query(selection_ejets)
df.drop(["nElectrons", "nMuons", "nJets", "nBTags_70"], axis = 1, inplace = True)
df.head()
Explanation: selection
End of explanation
df["TTHReco_best_TTHReco"].replace( -9, -1, inplace = True)
df["TTHReco_best_Higgs_mass"].replace( -9, -1, inplace = True)
df["TTHReco_best_Higgsbleptop_mass"].replace( -9, -1, inplace = True)
df["TTHReco_best_bbHiggs_dR"].replace( -9, -1, inplace = True)
df["TTHReco_withH_best_Higgsttbar_dR"].replace(-9, -1, inplace = True)
df["TTHReco_best_Higgsleptop_dR"].replace( -9, -1, inplace = True)
df["TTHReco_best_b1Higgsbhadtop_dR"].replace( -9, -1, inplace = True)
df["LHD_Discriminant"].replace( -9, -1, inplace = True)
df.describe()
df.hist();
Explanation: imputation
End of explanation
_df = df.query("classification == 1").drop("classification", axis = 1)
plot = sns.clustermap(_df.corr())
plt.setp(plot.ax_heatmap.get_yticklabels(), rotation = 0);
Explanation: clustered correlations ${t\bar{t}H}$
End of explanation
_df = df.query("classification == 0").drop("classification", axis = 1)
plot = sns.clustermap(_df.corr())
plt.setp(plot.ax_heatmap.get_yticklabels(), rotation = 0);
Explanation: clustered correlations ${t\bar{t}b\bar{b}}$
End of explanation
_df = df.corr()["classification"].abs().sort_values(ascending = False).to_frame()[1:]
_df
plt.rcParams["figure.figsize"] = [8, 8]
sns.barplot(_df["classification"], _df.index);
plt.xlabel('absolute correlation with class')
plt.show();
Explanation: strongest absolute correlations with classifications
End of explanation
if False:
scaler = MinMaxScaler()
variables_rescale = [variable for variable in list(df.columns) if variable != "classification"]
df[variables_rescale] = scaler.fit_transform(df[variables_rescale])
df.head()
Explanation: rescale
End of explanation
df.to_csv("ttHbb_data.csv", index=False)
Explanation: save to CSV
End of explanation |
731 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean Variance - Image.png" style="height
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height
Step8: <img src="image/Learn Rate Tune - Image.png" style="height
Step9: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
    """
    Download file from <url>
    :param url: URL to file
    :param file: Local file path
    """
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
    """
    Uncompress features and labels from a zip file
    :param file: The zip file to extract the data from
    """
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
a = 0.1
b = 0.9
X_min = 0
X_max = 255
return a + (((image_data - X_min)*(b - a)) / (X_max - X_min))
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/Mean Variance - Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
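As a quick sanity check (a worked example, not part of the original lab): a mid-gray pixel with $X = 128$ maps to
$
X' = 0.1 + \frac{(128 - 0)(0.9 - 0.1)}{255 - 0} \approx 0.50,
$
while $X = 0$ maps to exactly 0.1 and $X = 255$ to exactly 0.9.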
End of explanation
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
# Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image letter so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
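If it helps to see the numbers, here is a small illustration (plain NumPy, not part of the lab) of what the softmax and cross entropy used in the code above compute for a single example:

```python
# Illustrative only: softmax + cross entropy for one 3-class example.
import numpy as np

logits = np.array([2.0, 1.0, 0.1])               # raw scores for 3 classes
probs = np.exp(logits) / np.sum(np.exp(logits))  # softmax -> [0.66, 0.24, 0.10]
one_hot = np.array([1.0, 0.0, 0.0])              # true label is class 0
print(-np.sum(one_hot * np.log(probs)))          # cross entropy ~ 0.42
```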
End of explanation
# Change if you have memory restrictions
batch_size = 128
# epochs = 1
# learning_rate = 0.1
epochs = 5
learning_rate = 0.2
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i * batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/Learn Rate Tune - Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
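One way to organize the comparison (a sketch, not the provided solution) is to wrap the training loop above in a helper — here called train_and_validate(epochs, learning_rate), a hypothetical name — and sweep the options:

```python
# Hypothetical sweep for Configuration 1; assumes a train_and_validate() wrapper
# around the training loop above that returns the final validation accuracy.
best = None
for lr in [0.8, 0.5, 0.1, 0.05, 0.01]:
    acc = train_and_validate(epochs=1, learning_rate=lr)
    print('learning rate {:>5}: validation accuracy {:.3f}'.format(lr, acc))
    if best is None or acc > best[1]:
        best = (lr, acc)
print('Best option:', best)
```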
End of explanation
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i * batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
732 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The ultimate objective of this portion of the hack is to create Metric Analysis Framework metrics to determine the value of a given opsim run for the intermediate mass MACHO science.
Note that to run this notebook you will need to have installed MAF. Follow the directions at
Step1: General Input
Catalog
Step2: SQL Query
Step3: Metrics
Step4: Slicer
Let's look at the MAF results in the galactic coordinate system since this correlates nicely with stellar number density. (More stars, more expected number of microlensing events.)
Step5: Plot functions and customization
Step6: Bundles
Step7: Plot a light curve
This is largely based on
Step8: Note that something doesn't seem right about the light curve above since there is >mag extinction towards the center of the Milky Way for bluer bands, yet these are the same 5sigma magnitude depths as towards the LMC (see below).
Note that we could take the 5 sigma depth and translate that into a photometric uncertainty for a given magnitude magnification event.
LMC Example
Step9: Mass metric example
We wish to build a metric appropriate for detecting high mass
microlensing events in the LSST data set. Let us consider
high mass as 10-200 M_sol. These have time scales of 1.2 to 5.5 years,
and one would like much better than Nyquist sampling of the events.
Our proposal is to evaluate the cadence in one healpy map per integral M_sol mass.
The astrophysics of microlensing would then be in other maps
Step10: minion_1016_sqlite.db is the baseline LSST cadence from 2016
Step11: astro_lsst_01_1064.sqlite.db is the "hacked" rolling cadence from the SN group and Rahul Biswas. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
# import lsst sims maf modules
import lsst.sims.maf
import lsst.sims.maf.db as db
import lsst.sims.maf.metrics as lsst_metrics
import lsst.sims.maf.slicers as slicers
import lsst.sims.maf.stackers as stackers
import lsst.sims.maf.plots as plots
import lsst.sims.maf.metricBundles as metricBundles
# import macho modules
import metrics
# make it so that autoreload of modules works
from IPython import get_ipython
ipython = get_ipython()
if '__IPYTHON__' in globals():
ipython.magic('load_ext autoreload')
ipython.magic('autoreload 2')
%matplotlib inline
Explanation: The ultimate objective of this portion of the hack it to create Metric Analysis Framework metrics to determine the value of a given opsim run for the intermediate mass MACHO science.
Note that to run this notebook you will need to have installed MAF. Follow the directions at:
https://github.com/wadawson/sims_maf_contrib/blob/master/tutorials/Index.ipynb
To run this notebook you should have,
setup sims_maf
within the terminal where you ran ipython, i.e.
ipython notebook IntroductionNotebook.ipynb
In this directory you should have downloaded the survey simulation database
wget http://ops2.lsst.org/runs/reference_run/minion_1016/minion_1016_sqlite.db.gz
gzip -d minion_1016_sqlite.db.gz
It may also be helpful to look at
https://github.com/wadawson/sims_maf_contrib/blob/master/tutorials/Introduction%20Notebook.ipynb
before getting started on this notebook, since this notebook will skip some of the pedantic expositions.
End of explanation
dir = '/data/des40.a/data/marcelle/lsst-gw/OperationsSimulatorBenchmarkSurveys/'
opsdb = db.OpsimDatabase(dir+'minion_1016_sqlite.db')
outDir = 'notebook_output'
Explanation: General Input
Catalog
End of explanation
# Initially let's just look at the number of observations in r-band over the first 10 years with default kwargs
sql = 'filter="r" and night < %i' % (365.25*10)
Explanation: SQL Query
End of explanation
# Calculate the median gap between consecutive observations within a night, in hours.
metric_intranightgap = lsst_metrics.IntraNightGapsMetric(reduceFunc=np.median)
# Calculate the median gap between consecutive observations between nights, in days.
metric_internightgap = lsst_metrics.InterNightGapsMetric(reduceFunc=np.median)
# Uniformity of time between consecutive visits on short time scales:
'''
timeCol : str, optional
The column containing the 'time' value. Default expMJD.
minNvisits : int, optional
The minimum number of visits required within the time interval (dTmin to dTmax).
Default 100.
dTmin : float, optional
The minimum dTime to consider (in days). Default 40 seconds.
dTmax : float, optional
The maximum dTime to consider (in days). Default 30 minutes.
'''
metric_rapidrevisit = lsst_metrics.RapidRevisitMetric(timeCol='expMJD', minNvisits=10,
dTmin=40.0 / 60.0 / 60.0 / 24.0, dTmax=30.0 / 60.0 / 24.0)
# Number of revisits with time spacing less than 24 hours
metric_nrevisit24hr = lsst_metrics.NRevisitsMetric(dT=24*60)
# Use the custom metric in the macho metrics file, which asks whether the light curve
# allows a detection of a mass solar_mass lens
detectable = metrics.massMetric(mass=30.)
Explanation: Metrics
End of explanation
# Let's look at the metric results in the galactic coordinate frame
slicer = slicers.HealpixSlicer(latCol='galb', lonCol='gall', nside=16)
Explanation: Slicer
Let's look at the MAF results in the galactic coordinate system since this correlates nicely with stellar number density. (More stars, more expected number of microlensing events.)
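For orientation (an aside, not from the original notebook), nside=16 corresponds to a fairly coarse map:

```python
# Standard HEALPix bookkeeping: npix = 12 * nside**2
nside = 16
npix = 12 * nside**2        # 3072 pixels over the full sky
print(npix, 41253. / npix)  # roughly 13 square degrees per pixel
```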
End of explanation
#plotFuncs = [plots.HealpixSkyMap()] # only plot the sky maps for now
# Customize the plot format
plotDict_intranightgap = {'colorMin':0, 'colorMax': 1., 'cbarFormat': '%0.2f'} # Set the max on the color bar
plotDict_internightgap = {'colorMin':0,'colorMax': 10.} # Set the max on the color bar
plotDict_rapidrevisit = {'cbarFormat': '%0.2f'}
plotDict_nrevisit24hr = {'colorMin':0,'colorMax': 300.}
plotDict_detectable = {'colorMin':0,'colorMax': 1.}
Explanation: Plot functions and customization
End of explanation
# Create the MAF bundles for each plot
bundle_intranightgap = metricBundles.MetricBundle(metric_intranightgap, slicer, sql, plotDict=plotDict_intranightgap)#, plotFuncs=plotFuncs)
bundle_internightgap = metricBundles.MetricBundle(metric_internightgap, slicer, sql, plotDict=plotDict_internightgap)#, plotFuncs=plotFuncs)
bundle_rapidrevisit = metricBundles.MetricBundle(metric_rapidrevisit, slicer, sql, plotDict=plotDict_rapidrevisit)#, plotFuncs=plotFuncs)
bundle_nrevisit24hr = metricBundles.MetricBundle(metric_nrevisit24hr, slicer, sql, plotDict=plotDict_nrevisit24hr)#, plotFuncs=plotFuncs)
bundle_detectable = metricBundles.MetricBundle(detectable, slicer, sql, plotDict=plotDict_detectable)#, plotFuncs=plotFuncs)
# Create the query bundle dictionary to run all of the queries in the same run
bdict = {'intragap':bundle_intranightgap, 'intergap':bundle_internightgap,
'rapidrevisit':bundle_rapidrevisit, 'nrevisit24hr':bundle_nrevisit24hr,
'detectable':bundle_detectable}
bg = metricBundles.MetricBundleGroup(bdict, opsdb, outDir=outDir)
# Run the queries
bg.runAll()
# Create the plots
bg.plotAll(closefigs=False)
Explanation: Bundles
End of explanation
outDir ='LightCurve'
dbFile = 'minion_1016_sqlite.db'
resultsDb = db.ResultsDb(outDir=outDir)
filters = ['u','g','r','i','z','y']
colors={'u':'cyan','g':'g','r':'y','i':'r','z':'m', 'y':'k'}
# Set RA, Dec for a single point in the sky. in radians. Galactic Center.
ra = np.radians(266.4168)
dec = np.radians(-29.00)
# SNR limit (Don't use points below this limit)
snrLimit = 5.
# Demand this many points above SNR limit before plotting LC
nPtsLimit = 6
# The pass metric just passes data straight through.
metric = metrics.PassMetric(cols=['filter','fiveSigmaDepth','expMJD'])
slicer = slicers.UserPointsSlicer(ra,dec,lonCol='ditheredRA',latCol='ditheredDec')
sql = ''
bundle = metricBundles.MetricBundle(metric,slicer,sql)
bg = metricBundles.MetricBundleGroup({0:bundle}, opsdb,
outDir=outDir, resultsDb=resultsDb)
bg.runAll()
bundle.metricValues.data[0]['filter']
dayZero = bundle.metricValues.data[0]['expMJD'].min()
for fname in filters:
good = np.where(bundle.metricValues.data[0]['filter'] == fname)
plt.scatter(bundle.metricValues.data[0]['expMJD'][good]- dayZero,
bundle.metricValues.data[0]['fiveSigmaDepth'][good],
c = colors[fname], label=fname)
plt.xlabel('Day')
plt.ylabel('5$\sigma$ depth')
plt.legend(scatterpoints=1, loc="upper left", bbox_to_anchor=(1,1))
Explanation: Plot a light curve
This is largely based on:
http://localhost:8888/notebooks/Git/sims_maf_contrib/tutorials/PullLightCurves.ipynb
End of explanation
# Set RA, Dec for a single point in the sky. in radians. LMC.
ra = np.radians(80.8942)
dec = np.radians(-69.756)
# SNR limit (Don't use points below this limit)
snrLimit = 5.
# Demand this many points above SNR limit before plotting LC
nPtsLimit = 6
# The pass metric just passes data straight through.
metric = metrics.PassMetric(cols=['filter','fiveSigmaDepth','expMJD'])
slicer = slicers.UserPointsSlicer(ra,dec,lonCol='ditheredRA',latCol='ditheredDec')
sql = ''
bundle = metricBundles.MetricBundle(metric,slicer,sql)
bg = metricBundles.MetricBundleGroup({0:bundle}, opsdb,
outDir=outDir, resultsDb=resultsDb)
bg.runAll()
bundle.metricValues.data[0]['filter']
dayZero = bundle.metricValues.data[0]['expMJD'].min()
for fname in filters:
good = np.where(bundle.metricValues.data[0]['filter'] == fname)
plt.scatter(bundle.metricValues.data[0]['expMJD'][good]- dayZero,
bundle.metricValues.data[0]['fiveSigmaDepth'][good],
c = colors[fname], label=fname)
plt.xlabel('Day')
plt.ylabel('5$\sigma$ depth')
plt.legend(scatterpoints=1, loc="upper left", bbox_to_anchor=(1,1))
Explanation: Note that something doesn't seem right about the light curve above since there is >mag extinction towards the center of the Milky Way for bluer bands, yet these are the same 5sigma magnitude depths as towards the LMC (see below).
Note that we could take the 5 sigma depth and translate that into a photometric uncertainty for a given magnitude magnification event.
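One common back-of-the-envelope conversion (an assumption on my part, not something computed in this notebook) uses SNR = 5 x 10^(-0.4 (m - m5)), so the magnitude uncertainty is roughly 1.086 / SNR:

```python
# Illustrative only: approximate photometric uncertainty from the 5-sigma depth m5.
import numpy as np

def approx_mag_error(m, m5):
    snr = 5. * 10.**(-0.4 * (m - m5))
    return 1.086 / snr       # 2.5 / ln(10) / SNR

print(approx_mag_error(22.0, 24.0))   # ~0.03 mag for a source 2 mag brighter than the depth
```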
LMC Example
End of explanation
import numpy as np
import matplotlib.pyplot as plt
# import lsst sims maf modules
import lsst.sims.maf
import lsst.sims.maf.db as db
import lsst.sims.maf.metrics as lsst_metrics
import lsst.sims.maf.slicers as slicers
import lsst.sims.maf.stackers as stackers
import lsst.sims.maf.plots as plots
import lsst.sims.maf.metricBundles as metricBundles
# import macho modules
import metrics
# make it so that autoreload of modules works
from IPython import get_ipython
ipython = get_ipython()
if '__IPYTHON__' in globals():
ipython.magic('load_ext autoreload')
ipython.magic('autoreload 2')
%matplotlib inline
Explanation: Mass metric example
We wish to build a metric appropriate for detecting high mass
microlensing events in the LSST data set. Let us consider
high mass as 10-200 M_sol. These have time scales of 1.2 to 5.5 years,
and one would like much better than Nyquist sampling of the events.
Our proposal is to evaluate the cadence in one healpy map per integral M_sol mass.
The astrophysics of microlensing would then be in other maps: the distances to the
stars and the projected dark matter mass density are the primary determinants of
the rate per pixel. Another component is the length of time needed to measure
an event versus the time available: microlensing events are independent and there
are 5 chances in a 5 year window for a single star to undergo a 1 year lensing event.
This can be thought of as a cumulative effective search time, which is then a mass
dependent quantity.
The basic plan is to construct maps that can be multiplied together to form the
macho detection efficiency.
The very first map that we need is a detection probability map from the cadence;
actually this is likely to be a heaviside step function map.
The timescale of a microlensing event is proportional to sqrt(M_sol): time = 1.2 sqrt(M_sol/10.) yrs.
Let us invent a demand for 30 visits per time scale. Furthermore, parallax can affect
the shape of the lightcurve over a year scale, so let us invent a demand of
at least 10 visits per year.
| M_sol | time_scale (yrs) | N_min | 10*round(time_scale) | N_visits_required |
|-------|------------------|-------|----------------------|-------------------|
| 10    | 1.2              | 30    | 10                   | 30                |
| 30    | 2.1              | 30    | 20                   | 30                |
| 50    | 2.8              | 30    | 30                   | 30                |
| 70    | 3.3              | 30    | 30                   | 30                |
| 100   | 3.9              | 30    | 40                   | 40                |
| 150   | 4.8              | 30    | 50                   | 50                |
| 200   | 5.5              | 30    | 60                   | 60                |
Microlensing is achromatic, so we can use any filter. Initially we'll start with i.
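The rule in the table can be written out as a small helper (an illustrative sketch using the quoted time scales, not part of the notebook's metrics module):

```python
def visits_required(time_scale_yr, n_min=30, visits_per_year=10):
    # demand at least n_min visits per event, and at least visits_per_year per year of time scale
    return max(n_min, visits_per_year * int(round(time_scale_yr)))

for mass, t in [(10, 1.2), (30, 2.1), (50, 2.8), (70, 3.3), (100, 3.9), (150, 4.8), (200, 5.5)]:
    print(mass, visits_required(t))
```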
End of explanation
dir = '/data/des40.a/data/marcelle/lsst-gw/OperationsSimulatorBenchmarkSurveys/'
opsdb = db.OpsimDatabase(dir+'minion_1016_sqlite.db')
outDir = 'notebook_output'
# Look at the number of i-band observations over the first nyears years with default kwargs
nyears = 5.
mass = 30.
sql = 'filter="i" and night < %i' % (365.25*nyears)
# Use the custom metric in the macho metrics file, which asks whether the light curve
# allows a detection of a mass solar_mass lens
detectable = metrics.massMetric(mass=mass)
# Let's look at the metric results in the galactic coordinate frame
slicer = slicers.HealpixSlicer(latCol='galb', lonCol='gall', nside=32)
plotDict_detectable = {'colorMin':0,'colorMax': 1.}
bundle_detectable = metricBundles.MetricBundle(detectable, slicer, sql, plotDict=plotDict_detectable)
# Create the query bundle dictionary to run all of the queries in the same run
bdict = {'detectable':bundle_detectable}
bg = metricBundles.MetricBundleGroup(bdict, opsdb, outDir=outDir)
# Run the queries
bg.runAll()
# Create the plots
bg.plotAll(closefigs=False)
dir = '/data/des40.a/data/marcelle/lsst-gw/OperationsSimulatorBenchmarkSurveys/'
opsdb = db.OpsimDatabase(dir+'astro_lsst_01_1064_sqlite.db')
outDir = 'notebook_output'
Explanation: minion_1016_sqlite.db is the baseline LSST cadence from 2016
End of explanation
# Look at the number of i-band observations over the first nyears years with default kwargs
nyears = 5.
mass = 30.
sql = 'filter="i" and night < %i' % (365.25*nyears)
# Use the custom metric in the macho metrics file, which asks whether the light curve
# allows a detection of a mass solar_mass lens
detectable = metrics.massMetric(mass=mass)
# Let's look at the metric results in the galactic coordinate frame
slicer = slicers.HealpixSlicer(latCol='galb', lonCol='gall', nside=32)
plotDict_detectable = {'colorMin':0,'colorMax': 1.}
bundle_detectable = metricBundles.MetricBundle(detectable, slicer, sql, plotDict=plotDict_detectable)
# Create the query bundle dictionary to run all of the queries in the same run
bdict = {'detectable':bundle_detectable}
bg = metricBundles.MetricBundleGroup(bdict, opsdb, outDir=outDir)
# Run the queries
bg.runAll()
# Create the plots
bg.plotAll(closefigs=False)
Explanation: astro_lsst_01_1064.sqlite.db is the "hacked" rolling cadence from the SN group and Rahul Biswas.
End of explanation |
733 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bio.Pairwise2 optimization
This project aims to optimize alignment algorithms from pairwise2 module of BioPython library.
In particular, the algorithms for local alignments in this library are considerably slow.
1. Generate samples
The first step of my project was to measure the actual performance of the pairwise2 module. To test the performance of alignments, I wrote a simple script which generates some random samples for alignment tasks.
Step1: 2. Measure Bio.pairwise2 performance
I've already generated some linearly distributed sample sequences of length ~100 - ~3000.
Let's use them to compare speeds of pairwise2 methods.
We will perform alignment of all test sequences and draw nice plots!
Attention
Step2: 3. Analyze Bio.pairwise2 performance
As we can see local alignment methods are ridiculously slow! Why is it so? To see what takes so long I inspected Bio.pairwise2 code. The code contains a main method "align" which performs alignment by invoking some other functions. Its code can be divided into several parts
Step3: 4. Find the bottleneck
As we can see, time spent on global algorithms seems reasonable. However, in local alignment, a vast amount of time is taken by finding and filtering starts, which should not be the case.
I looked into the original code for finding starts. It looks like this
Step10: 6. Test equivalence of results
My task is to optimize an existing library, so it is very important that its behavior doesn't change, so that it can easily replace the original.
I assured the correctness of the results by creating a unit test module. For several methods, it tests that the results from the optimized library are exactly the same as the results from the original.
I don't perform unit tests on generated samples. Instead, I manually create some test samples for the unit tests. They include special cases such as empty sequences, two identical sequences, etc.
Executing all unit tests takes a really long time (because we test the original local alignment methods, which are terribly slow); however, once done, it assures that I didn't mess up anything.
from generate_samples import generate_sample, PROTEIN_ALPHABET
# Let's generate two sample sequences of proteins. Length of first sequence will be 50.
seq1, seq2 = generate_sample(PROTEIN_ALPHABET, 50)
print "Sequence 1: " + seq1
print "Sequence 2: " + seq2
Explanation: Bio.Pairwise2 optimization
This project aims to optimize alignment algorithms from pairwise2 module of BioPython library.
In particular, the algorithms for local alignments in this library are considerably slow.
1. Generate samples
The first step of my project was to measure the actual performance of the pairwise2 module. To test the performance of alignments, I wrote a simple script which generates some random samples for alignment tasks.
End of explanation
from Bio import pairwise2
from test_optimization import run_compare_test
# Compare globalxx and localxx methods.
# These methods yield global and local alignments for constant gap penalty = 1 and match score = 1
# Search only for score (do not backtrack alignments - much faster)
run_compare_test("Test Bio pairwise2 globalxx and localxx (score only)",
[pairwise2.align.globalxx, pairwise2.align.localxx], score_only=True)
# Backtrack alignments
run_compare_test("Test Bio pairwise2 globalxx and localxx",
[pairwise2.align.globalxx, pairwise2.align.localxx])
Explanation: 2. Measure Bio.pairwise2 performance
I've already generated some linearly distributed sample sequences of length ~100 - ~3000.
Let's use them to compare speeds of pairwise2 methods.
We will perform alignment of all test sequences and draw nice plots!
Attention: scripts below may take some time (up to ~ 2 minutes) to execute!
End of explanation
from analyze_bio_speed import perform_test
from generate_samples import generate_sample, PROTEIN_ALPHABET
# Let's generate 3 samples of lengths ~ 1000
samples_count, seq_len = 3, 1000
seq_pairs = [generate_sample(PROTEIN_ALPHABET, seq_len)] * samples_count
# Analyze execution time of localxx with score_only=True
perform_test(seq_len, seq_pairs, "localxx", "Analysis of Bio pairwise2 localxx execution time (score only)", score_only=True)
# Analyze execution time of globalxx with score_only=True
perform_test(seq_len, seq_pairs, "globalxx", "Analysis of Bio pairwise2 globalxx execution time (score only)", score_only=True)
# Analyze execution time of localxx with backtracing alignments
perform_test(seq_len, seq_pairs, "localxx", "Analysis of Bio pairwise2 localxx execution time")
# Analyze execution time of globalxx with backtracing alignments
perform_test(seq_len, seq_pairs, "globalxx", "Analysis of Bio pairwise2 globalxx execution time")
Explanation: 3. Analyze Bio.pairwise2 performance
As we can see local alignment methods are ridiculously slow! Why is it so? To see what takes so long I inspected Bio.pairwise2 code. The code contains a main method "align" which performs alignment by invoking some other functions. Its code can be divided into several parts:
Preparation (initialization of some structures, checking arguments correctness etc.)
Score matrix preparation (actually implemented in C)
Finding potential starts of alignments in score matrix (especially important in local alignment)
Filtering these starts somehow
Recover alignments (transform them to readable form)
I created my own class deriving from the base alignment class of pairwise2. I overrode its "align" method by copying the original code and adding measurement of the time of its parts. Then I wrote a script which executes this code several times for a few sample sequence pairs and plots the execution times of each part.
Again, code below may take some time to execute (~ 3 minutes).
End of explanation
# Let's see how new code affects performance
# Again - these methods can take some time, ~ 2 minutes each
from Bio import pairwise2
from lib import optimized_pairwise2
from test_optimization import run_compare_test
# Compare original and optimized localxx methods.
description = "Compare localxx methods (score only)"
print(description)
run_compare_test(description, [pairwise2.align.localxx, optimized_pairwise2.align.localxx], score_only=True)
description = "Compare localxx methods (including alignments)"
print(description)
run_compare_test(description, [pairwise2.align.localxx, optimized_pairwise2.align.localxx])
Explanation: 4. Find the bottleneck
As we can see, time spent on global algorithms seems reasonable. However, in local alignment, a vast amount of time is taken by finding and filtering starts, which should not be the case.
I looked into the original code for finding starts. It looks like this:
:::python
Fragment of code in "align" method:
finding starts:
starts = _find_start(score_matrix, align_globally)
best_score = max([x[0] for x in starts])
if score_only:
return best_score
tolerance = 0 # This seems to be placeholder for a future feature of giving some tolerance on best score
filtering starts:
starts = [(score, pos) for score, pos in starts
if rint(abs(score - best_score)) <= rint(tolerance)]
_find_start method used in "align" method:
def _find_start(score_matrix, align_globally):
if align_globally:
starts = [(score_matrix[-1][-1], (nrows - 1, ncols - 1))]
else:
starts = []
for row in range(nrows):
for col in range(ncols):
score = score_matrix[row][col]
starts.append((score, (row, col)))
return starts
Let's consider local alignments only (performance of global alignments is OK). I see the following problems with the code above:
The _find_start method is always invoked, even if score_only=True and we don't want to backtrace alignments
In the _find_start method, for local alignment, we create a list of starts. Then we append nrows * ncols elements to this list. Given two sequences of lengths ~ 1000, this means 1,000,000 append operations. As it turned out, this is the bottleneck - appending single elements to a list in Python is not so fast (see the rough benchmark below).
We filter starts by comparing the scores in starts with best_score. Tolerance is 0 anyway, so instead of computing rint(abs(score - best_score)) <= rint(tolerance) we can just take all starts with score == best_score.
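A rough way to see the cost of building that full starts list (an illustrative micro-benchmark; timings are machine dependent and this is not part of the project code):

```python
import itertools, random, timeit

nrows = ncols = 1000
score_matrix = [[random.random() for _ in range(ncols)] for _ in range(nrows)]

def build_full_list():                      # what the original _find_start does
    starts = []
    for row in range(nrows):
        for col in range(ncols):
            starts.append((score_matrix[row][col], (row, col)))
    best = max(x[0] for x in starts)
    return [(s, p) for s, p in starts if s == best]

def keep_only_best():                       # the idea behind the fix below
    best = max(score_matrix[r][c] for r, c in itertools.product(range(nrows), range(ncols)))
    return [(best, (r, c)) for r, c in itertools.product(range(nrows), range(ncols))
            if score_matrix[r][c] == best]

print(timeit.timeit(build_full_list, number=5))
print(timeit.timeit(keep_only_best, number=5))
```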
5. Fix the bottleneck
I fixed the code above so that it does not build a list of size nrows * ncols, but takes only the starts with score == best_score in the first place.
It also doesn't find starts if we look only for score.
:::python
Fragment of code in "align" method:
nrows, ncols = len(score_matrix), len(score_matrix[0])
if align_globally:
starts = [(score_matrix[nrows-1][ncols-1], (nrows-1,ncols-1))]
if score_only:
return starts[0][0]
else:
best_score = max(score_matrix[row][col] for row, col in itertools.product(xrange(nrows), xrange(ncols)))
if score_only:
return best_score
starts = _find_start(score_matrix, best_score)
_find_start method used in "align" method:
def _find_start(score_matrix, best_score):
nrows, ncols = len(score_matrix), len(score_matrix[0])
return [(best_score, (row, col)) for row, col in itertools.product(xrange(nrows), xrange(ncols))
if score_matrix[row][col] == best_score]
End of explanation
import unittest
from testdata.test_sequences import get_test_sequences_pairs, all_equal
from run_unittests import AlignmentEquivalenceTestCase
# get all sequence pairs for unit testing
sequence_pairs = [(seq1, seq2) for _, seq1, seq2 in get_test_sequences_pairs(['unit'])]
# creates test suite which tests all methods on all sequence pairs
test_suite = AlignmentEquivalenceTestCase.get_test_suite(sequence_pairs)
# this takes very very long
unittest.TextTestRunner().run(test_suite)
Explanation: 6. Test equivalence of results
My task is to optimize an existing library, so it is very important that its behavior doesn't change, so that it can easily replace the original.
I assured the correctness of the results by creating a unit test module. For several methods, it tests that the results from the optimized library are exactly the same as the results from the original.
I don't perform unit tests on generated samples. Instead, I manually create some test samples for the unit tests. They include special cases such as empty sequences, two identical sequences, etc.
Executing all unit tests takes a really long time (because we test the original local alignment methods, which are terribly slow); however, once done, it assures that I didn't mess up anything.
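To sketch the idea (illustrative names only; the real tests live in the run_unittests module used above):

```python
import unittest
from Bio import pairwise2
from lib import optimized_pairwise2

class LocalxxEquivalence(unittest.TestCase):
    def test_special_cases(self):
        # empty sequences, identical sequences, one empty sequence
        for a, b in [("", ""), ("ACGT", "ACGT"), ("ACGT", "")]:
            self.assertEqual(pairwise2.align.localxx(a, b),
                             optimized_pairwise2.align.localxx(a, b))
```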
End of explanation |
734 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
<img src='notebook_ims/autoencoder_1.png' />
Compressed Representation
A compressed representation can be great for saving and sharing any kind of data in a way that is more efficient than storing raw data. In practice, the compressed representation often holds key information about an input image and we can use it for denoising images or other kinds of reconstruction and transformation!
<img src='notebook_ims/denoising.png' width=60%/>
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
Step1: Visualize the Data
Step2: Linear Autoencoder
We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building a simple autoencoder. The encoder and decoder should be made of one linear layer. The units that connect the encoder and decoder will be the compressed representation.
Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values that match this input value range.
<img src='notebook_ims/simple_autoencoder.png' width=50% />
TODO
Step3: Training
Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards.
We are not concerned with labels in this case, just images, which we can get from the train_loader. Because we're comparing pixel values in input and output images, it will be best to use a loss that is meant for a regression task. Regression is all about comparing quantities rather than probabilistic values. So, in this case, I'll use MSELoss. And compare output images and input images as follows
Step4: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. | Python Code:
import torch
import numpy as np
from torchvision import datasets
import torchvision.transforms as transforms
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# load the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False,
download=True, transform=transform)
# Create training and test dataloaders
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers)
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
<img src='notebook_ims/autoencoder_1.png' />
Compressed Representation
A compressed representation can be great for saving and sharing any kind of data in a way that is more efficient than storing raw data. In practice, the compressed representation often holds key information about an input image and we can use it for denoising images or other kinds of reconstruction and transformation!
<img src='notebook_ims/denoising.png' width=60%/>
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()
# get one image from the batch
img = np.squeeze(images[0])
fig = plt.figure(figsize = (5,5))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
Explanation: Visualize the Data
End of explanation
import torch.nn as nn
import torch.nn.functional as F
# define the NN architecture
class Autoencoder(nn.Module):
def __init__(self, encoding_dim):
super(Autoencoder, self).__init__()
## encoder ##
# linear layer (784 -> encoding_dim)
self.fc1 = nn.Linear(28 * 28, encoding_dim)
## decoder ##
# linear layer (encoding_dim -> input size)
self.fc2 = nn.Linear(encoding_dim, 28*28)
def forward(self, x):
# add layer, with relu activation function
x = F.relu(self.fc1(x))
# output layer (sigmoid for scaling from 0 to 1)
x = F.sigmoid(self.fc2(x))
return x
# initialize the NN
encoding_dim = 32
model = Autoencoder(encoding_dim)
print(model)
Explanation: Linear Autoencoder
We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building a simple autoencoder. The encoder and decoder should be made of one linear layer. The units that connect the encoder and decoder will be the compressed representation.
Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values that match this input value range.
<img src='notebook_ims/simple_autoencoder.png' width=50% />
TODO: Build the graph for the autoencoder in the cell below.
The input images will be flattened into 784 length vectors. The targets are the same as the inputs.
The encoder and decoder will be made of two linear layers, each.
The depth dimensions should change as follows: 784 inputs > encoding_dim > 784 outputs.
All layers will have ReLu activations applied except for the final output layer, which has a sigmoid activation.
The compressed representation should be a vector with dimension encoding_dim=32.
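As a quick aside (not required by the exercise), the 784 -> 32 -> 784 layout compresses each image by roughly a factor of 24 while keeping the parameter count small:

```python
encoder_params = 784 * 32 + 32          # fc1 weights + biases = 25,120
decoder_params = 32 * 784 + 784         # fc2 weights + biases = 25,872
print(encoder_params + decoder_params)  # about 51k trainable parameters
```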
End of explanation
# specify loss function
criterion = nn.MSELoss()
# specify optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# number of epochs to train the model
n_epochs = 20
for epoch in range(1, n_epochs+1):
# monitor training loss
train_loss = 0.0
###################
# train the model #
###################
for data in train_loader:
# _ stands in for labels, here
images, _ = data
# flatten images
images = images.view(images.size(0), -1)
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
outputs = model(images)
# calculate the loss
loss = criterion(outputs, images)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update running training loss
train_loss += loss.item()*images.size(0)
# print avg training statistics
train_loss = train_loss/len(train_loader)
print('Epoch: {} \tTraining Loss: {:.6f}'.format(
epoch,
train_loss
))
Explanation: Training
Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards.
We are not concerned with labels in this case, just images, which we can get from the train_loader. Because we're comparing pixel values in input and output images, it will be best to use a loss that is meant for a regression task. Regression is all about comparing quantities rather than probabilistic values. So, in this case, I'll use MSELoss. And compare output images and input images as follows:
loss = criterion(outputs, images)
Otherwise, this is pretty straightforward training with PyTorch. We flatten our images, pass them into the autoencoder, and record the training loss as we go.
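For the test loss mentioned above, a minimal sketch (reusing model, criterion and test_loader from this notebook) could look like this:

```python
# Evaluate the average reconstruction loss on the test set (no gradients needed).
test_loss = 0.0
with torch.no_grad():
    for images, _ in test_loader:
        images = images.view(images.size(0), -1)
        outputs = model(images)
        test_loss += criterion(outputs, images).item() * images.size(0)
print('Test Loss: {:.6f}'.format(test_loss / len(test_loader.dataset)))
```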
End of explanation
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images_flatten = images.view(images.size(0), -1)
# get sample outputs
output = model(images_flatten)
# prep images for display
images = images.numpy()
# output is resized into a batch of images
output = output.view(batch_size, 1, 28, 28)
# use detach when it's an output that requires_grad
output = output.detach().numpy()
# plot the first ten input images and then reconstructed images
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(25,4))
# input images on top row, reconstructions on bottom
for images, row in zip([images, output], axes):
for img, ax in zip(images, row):
ax.imshow(np.squeeze(img), cmap='gray')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation |
735 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Blockade Interaction in a Magnetic Field
The interaction between Rydberg atoms is strongly influenced by external electric and magnetic fields. A small magnetic field for instance lifts the Zeeman degeneracy and thus strengthens the Rydberg blockade, especially if there is a non-zero angle between the interatomic and the quantization axis. This has been discussed in M. Saffman, T. G. Walker, and K. Mølmer, “Quantum information with Rydberg atoms”, Rev. Mod. Phys. 82, 2313 (2010). Here we show how to reproduce Fig. 13 using pairinteraction. This Jupyter notebook and the final Python script are available on GitHub.
As described in the introduction, we start our code with some preparations. We will make use of pairinteraction's parallel capacities which is why we load the multiprocessing module if supported by the operating system (in Windows, the module only works with methods defined outside an IPython notebook).
Step1: We begin by defining some constants of our calculation
Step2: Now, we use pairinteraction's StateOne class to define the single-atom state $\left|43d_{5/2},m_j=1/2\right\rangle$ of a Rubidium atom.
Step3: Next, we define how to set up the single atom system. We do this using a function, so we can easily create systems with the magnetic field as a parameter. Inside the function we create a new system by passing the state_one and the cache directory we created to SystemOne.
To limit the size of the basis, we have to choose cutoffs on states which can couple to state_one. This is done by means of the restrict... functions in SystemOne.
Finally, we set the magnetic field to point in $z$-direction with the magnitude given by the argument.
Step4: To investigate the $\left|43d_{5/2},m_j=1/2;43d_{5/2},m_j=1/2\right\rangle$ pair state, we easily combine the same single-atom state twice into a pair state using StateTwo.
Step5: Akin to the single atom system, we now define how to create a two atom system. We want to parametrize this in terms of the single atom system and the interaction angle.
We compose a SystemTwo from two system_one because we are looking at two identical atoms. Again we have to restrict the energy range for coupling. Then we proceed to set the distance between the two atoms and the interaction angle.
To speed up the calculation, we can tell pairinteraction that this system will have some symmetries.
Step6: Now, we can use the definitions from above to compose our calculation.
Step7: With a little boiler-plate, we can then calculate and plot the result with matplotlib. | Python Code:
%matplotlib inline
# Arrays
import numpy as np
# Plotting
import matplotlib.pyplot as plt
# Operating system interfaces
import os, sys
# Parallel computing
if sys.platform != "win32": from multiprocessing import Pool
from functools import partial
# pairinteraction :-)
from pairinteraction import pireal as pi
# Create cache for matrix elements
if not os.path.exists("./cache"):
os.makedirs("./cache")
cache = pi.MatrixElementCache("./cache")
Explanation: Blockade Interaction in a Magnetic Field
The interaction between Rydberg atoms is strongly influenced by external electric and magnetic fields. A small magnetic field for instance lifts the Zeeman degeneracy and thus strengthens the Rydberg blockade, especially if there is a non-zero angle between the interatomic and the quantization axis. This has been discussed in M. Saffman, T. G. Walker, and K. Mølmer, “Quantum information with Rydberg atoms”, Rev. Mod. Phys. 82, 2313 (2010). Here we show how to reproduce Fig. 13 using pairinteraction. This Jupyter notebook and the final Python script are available on GitHub.
As described in the introduction, we start our code with some preparations. We will make use of pairinteraction's parallel capacities which is why we load the multiprocessing module if supported by the operating system (in Windows, the module only works with methods defined outside an IPython notebook).
End of explanation
distance = 10 # µm
bfields = np.linspace(0, 20, 200) # Gauss
Explanation: We begin by defining some constants of our calculation: the spatial separation of the Rydberg atoms and a range of magnetic field we want to iterate over. The units of the respective quantities are given as comments.
End of explanation
state_one = pi.StateOne("Rb", 43, 2, 2.5, 0.5)
Explanation: Now, we use pairinteraction's StateOne class to define the single-atom state $\left|43d_{5/2},m_j=1/2\right\rangle$ of a Rubidium atom.
End of explanation
def setup_system_one(bfield):
system_one = pi.SystemOne(state_one.getSpecies(), cache)
system_one.restrictEnergy(state_one.getEnergy()-100, state_one.getEnergy()+100)
system_one.restrictN(state_one.getN()-2, state_one.getN()+2)
system_one.restrictL(state_one.getL()-2, state_one.getL()+2)
system_one.setBfield([0, 0, bfield])
return system_one
Explanation: Next, we define how to set up the single atom system. We do this using a function, so we can easily create systems with the magnetic field as a parameter. Inside the function we create a new system by passing the state_one and the cache directory we created to SystemOne.
To limit the size of the basis, we have to choose cutoffs on states which can couple to state_one. This is done by means of the restrict... functions in SystemOne.
Finally, we set the magnetic field to point in $z$-direction with the magnitude given by the argument.
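For example (a minimal usage sketch, assuming the definitions above), a single-atom system in a 5 Gauss field would be built and diagonalized like this:

```python
system_5gauss = setup_system_one(5)   # field of 5 Gauss along z
system_5gauss.diagonalize(1e-3)
```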
End of explanation
state_two = pi.StateTwo(state_one, state_one)
Explanation: To investigate the $\left|43d_{5/2},m_j=1/2;43d_{5/2},m_j=1/2\right\rangle$ pair state, we easily combine the same single-atom state twice into a pair state using StateTwo.
End of explanation
def setup_system_two(system_one, angle):
system_two = pi.SystemTwo(system_one, system_one, cache)
system_two.restrictEnergy(state_two.getEnergy()-5, state_two.getEnergy()+5)
    system_two.setDistance(distance)
system_two.setAngle(angle)
if angle == 0: system_two.setConservedMomentaUnderRotation([int(2*state_one.getM())])
system_two.setConservedParityUnderInversion(pi.ODD)
system_two.setConservedParityUnderPermutation(pi.ODD)
return system_two
Explanation: Akin to the single atom system, we now define how to create a two atom system. We want to parametrize this in terms of the single atom system and the interaction angle.
We compose a SystemTwo from two system_one because we are looking at two identical atoms. Again we have to restrict the energy range for coupling. Then we proceed to set the distance between the two atoms and the interaction angle.
To speed up the calculation, we can tell pairinteraction that this system will have some symmetries.
End of explanation
def getEnergies(bfield, angle):
# Set up one atom system
system_one = setup_system_one(bfield)
system_one.diagonalize(1e-3)
# Calculate Zeeman shift
zeemanshift = 2*system_one.getHamiltonian().diagonal()[system_one.getBasisvectorIndex(state_one)] # GHz
# Set up two atom system
system_two = setup_system_two(system_one,angle)
system_two.diagonalize(1e-3)
# Calculate blockade interaction
eigenenergies = (system_two.getHamiltonian().diagonal()-zeemanshift)*1e3 # MHz
overlaps = system_two.getOverlap(state_two)
blockade = 1/np.sqrt(np.sum(overlaps/eigenenergies**2))
return blockade
Explanation: Now, we can use the definitions from above to compose our calculation.
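For a single spot check (illustrative, not in the original notebook), one can call it directly before running the full sweep:

```python
print(getEnergies(10, 0))          # blockade shift in MHz at B = 10 Gauss, theta = 0
print(getEnergies(10, np.pi / 2))  # same field, interatomic axis perpendicular to B
```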
End of explanation
plt.xlabel(r"$B$ (Gauss)")
plt.ylabel(r"Blockade (MHz)")
plt.xlim(-0.4,20.4)
plt.ylim(0,0.4)
if sys.platform != "win32":
with Pool() as pool:
energies1 = pool.map(partial(getEnergies, angle=0), bfields)
energies2 = pool.map(partial(getEnergies, angle=np.pi/2), bfields)
else:
energies1 = list(map(partial(getEnergies, angle=0), bfields))
energies2 = list(map(partial(getEnergies, angle=np.pi/2), bfields))
plt.plot(bfields, energies1, 'b-', label=r"$\theta = 0$")
plt.plot(bfields, energies2, 'g-', label=r"$\theta = \pi/2$")
plt.legend(loc=2, bbox_to_anchor=(1.02, 1), borderaxespad=0);
Explanation: With a little boiler-plate, we can then calculate and plot the result with matplotlib.
End of explanation |
736 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Training - Lesson 2 - classes in Object Oriented Programming
In Python, pretty much every variable is an object, and therefore an instance of some class. But what is a class? A first, basic understanding of a class should be
Step1: Creating objects - instances of a class
Step2: Accessing objects 'fields'
Step3: What are class variables and instance variables?
Class variables are variables attached to the definition of a class. Simply, they are just regular variable definitions inside a class
Instance variables are variables created for each instance of a class. We denote them by adding "self." in front of them.
Examples
Step4: So what's up with that Object Oriented Programming?
Loose definition
It is a kind of methodology and a set of rules for programming. Loosely speaking, it means that we should split our data and functionalities into classes with methods (functions), to follow a specific set of principles.
Some definitions.
Class - a distinct set of variables and procedures, centered around one thing, to store data, and do operations on that data, to communicate, and other stuff.
Field = attribute = a variable defined in a class
Method = procedure - a set of instructions defined in a class
Static method - a function defined in a class, but that does not actually require to create an object of that class!
Self - when a method or field uses 'self', it means it targets the object with which they are associated - "this", "the object I am inside right now", "the object on which I was invoked"
Type - to which class does an object correspond, of which class it is an instance of
Inheritance, composition, relationships.
You will often use words like "parent" or "child", when talking about classes. THe main reason they are used in this context, is to indicate the hierarchy of inheritance. But what is inheritance?
Imagine now, you create a class, which fields are actually objects of other classes. This is composition. It means your objects HAVE other objects. We call this "has-a" relationship.
Now imagine, you want to write classes representing various jobs in some company. So you write classes "Driver", "Recruiter", "Boss". Now you start to think what they would do, and quickly realise there are many things they share, for example, they can get a salary, can leave work, have a break, etc.
The most simple thing would be to write procedures for those actions, separately in each class. But thanks to inheritance, you would need to write it only once, in a BASE CLASS named "Employee". THen, all the others would INHERIT from this base class, getting all those methods for free.
You could say, that "Driver' is an "Employee", and so is "Recruiter". We call this "is-a" relationship.
You can mix those relationships together, to reuse code whenever possible. A rule of thumb is to use inheritance only when it really is the best thing to do, and not overdo it. Excessive inheritance actually looses all advantages of inheritance, and causes lot's of troubles in big projects (it is hard to modify the hierarchy). Another rule of thumb is, that usually inheritance is really good for very similar things, for storing data, and sharing data and procedures when we have a big amount of classes.
Polymorphism.
In simple words, this means that you do not care from which class in hierarchy some method comes from.
Even simpler, that you create your code, you do not worry if some object is an instance of the base class, it's children, or grandchildren, you should be able to use the same methods on each of them.
Principles of OOP - SOLID
The five basic principles describe how to best write classes. Take your time to learn them, and do not rush into advanced programming before understanding these principles. OOP is a paradigm. There are others, like "functional programming", with their own design patterns and principles. This tutorial's scope is "beginner friendly", so we will skip this for now, but come back to them as soon as you feel you can understand them.
https | Python Code:
class Example:
a = 1
print type(Example)
Explanation: Python Training - Lesson 2 - classes in Object Oriented Programming
In Python, pretty much every variable is an object, and therefore an instance of some class. But what is a class? A first, basic understanding of a class should be:
A data structure with named variables and procedures.
At this stage of programming, the simpler we keep things, the better. Let's see how we can define a class.
Simple class definition
End of explanation
object_from_class = Example()
print object_from_class
Explanation: Creating objects - instances of a class
End of explanation
object_from_class.a
Explanation: Accessing objects 'fields'
End of explanation
class ClassI:
# Define instance variables in a special method, called a "constructor", that defines what happens when an object is created.
def __init__(self):
self.a = 1
self.b = 2
class ClassC:
# Define class variables normally. They are here, whether you create an object or not.
a = 3
b = 44
instance_of_ClassC = ClassC()
print instance_of_ClassC.a, instance_of_ClassC.b
print ClassC.a
instance_of_ClassI = ClassI()
print instance_of_ClassI.a, instance_of_ClassI.b
# This will cause an error, because to access instance variables, you need an instance of class!
print ClassI.a
Explanation: What are class variables and instance variables?
Class variables are variables attached to the definition of a class. Simply, they are just regular variable definitions inside a class
Instance variables are variables created for each instance of a class. We denote them by adding "self." in front of them.
Examples:
End of explanation
# Let's define some functions.
def multiply(a,b):
return a*b
def count_letter_in_word(word, letter):
track_letters = {}
for character in word:
if character in track_letters:
track_letters[character] += 1
else:
track_letters[character] = 1
if letter in track_letters:
return track_letters[letter]
else:
return 0
# Let's define a class to store a model of data.
# This time, we put more parameters for the constructor: name and age. This allows us to fill the object during the creation.
class Person:
def __init__(self, name, age):
self.name = name
self.age = age
# Now let's use our code.
adam = Person("Adam", 18)
print count_letter_in_word(adam.name, "a")
print multiply(adam.age, 10)
Explanation: So what's up with that Object Oriented Programming?
Loose definition
It is a kind of methodology and a set of rules for programming. Loosely speaking, it means that we should split our data and functionalities into classes with methods (functions), to follow a specific set of principles.
Some definitions.
Class - a distinct set of variables and procedures, centered around one thing, to store data, and do operations on that data, to communicate, and other stuff.
Field = attribute = a variable defined in a class
Method = procedure - a set of instructions defined in a class
Static method - a function defined in a class, but that does not actually require to create an object of that class!
Self - when a method or field uses 'self', it means it targets the object with which they are associated - "this", "the object I am inside right now", "the object on which I was invoked"
Type - to which class does an object correspond, of which class it is an instance of
Inheritance, composition, relationships.
You will often use words like "parent" or "child" when talking about classes. The main reason they are used in this context is to indicate the hierarchy of inheritance. But what is inheritance?
Imagine now that you create a class whose fields are actually objects of other classes. This is composition. It means your objects HAVE other objects. We call this a "has-a" relationship.
Now imagine you want to write classes representing various jobs in some company. So you write the classes "Driver", "Recruiter" and "Boss". Now you start to think about what they would do, and quickly realise there are many things they share: for example, they can get a salary, can leave work, have a break, etc.
The simplest thing would be to write procedures for those actions separately in each class. But thanks to inheritance, you need to write them only once, in a BASE CLASS named "Employee". Then, all the others INHERIT from this base class, getting all those methods for free.
You could say that a "Driver" is an "Employee", and so is a "Recruiter". We call this an "is-a" relationship.
You can mix those relationships together to reuse code whenever possible. A rule of thumb is to use inheritance only when it really is the best thing to do, and not to overdo it. Excessive inheritance loses all the advantages of inheritance and causes lots of trouble in big projects (it is hard to modify the hierarchy). Another rule of thumb is that inheritance usually works best for very similar things, for storing data, and for sharing data and procedures when we have a large number of classes.
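To make the "is-a" idea concrete, here is a minimal, hedged sketch (the class names Employee, Driver and Recruiter follow the example in the text, but the method bodies and values are made up for illustration):
# Sketch: Driver and Recruiter "are" Employees and inherit shared behaviour
class Employee(object):
    def __init__(self, name, salary):
        self.name = name
        self.salary = salary

    def take_break(self):
        # written once in the base class, shared by every child class
        print(self.name + " is taking a break")


class Driver(Employee):
    def drive(self):
        print(self.name + " is driving")


class Recruiter(Employee):
    def recruit(self):
        print(self.name + " is recruiting")


# both children get take_break() for free from Employee
Driver("Kim", 3000).take_break()
Recruiter("Alex", 3500).take_break()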
Polymorphism.
In simple words, this means that you do not care which class in the hierarchy a method comes from.
Even simpler: when you write your code, you do not worry whether an object is an instance of the base class, its children, or its grandchildren; you should be able to use the same methods on each of them.
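A tiny, hedged illustration of that point (the Animal/Dog/Cat classes below are invented just for this sketch):
# Sketch: the same call works on the base class and on any of its children
class Animal(object):
    def speak(self):
        return "..."

class Dog(Animal):
    def speak(self):
        return "woof"

class Cat(Animal):
    def speak(self):
        return "meow"

for pet in [Dog(), Cat(), Animal()]:
    print(pet.speak())   # no need to check which class pet belongs to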
Principles of OOP - SOLID
The five basic principles describe how to best write classes. Take your time to learn them, and do not rush into advanced programming before understanding these principles. OOP is a paradigm. There are others, like "functional programming", with their own design patterns and principles. This tutorial's scope is "beginner friendly", so we will skip this for now, but come back to them as soon as you feel you can understand them.
https://en.wikipedia.org/wiki/SOLID_(object-oriented_design)
What does this mean in practice?
To write programs, you will need to write code that is readable, powerful and easily modified - using modularity, reusability, algorithms. Python is a language that allows to use all kinds of programming, not only OOP, to suit best your goals.
In practice, we will create all kinds of Python files:
- libraries of functions
- file with a class definition - only to model data
- file with a class definition - as a "library" with data AND tools that operate on them
- file with the main program - our entry point into running what we wanted to do
- file with test cases - to check if our program works correctly
- ...
From my perspective, design patterns and efficient, clear code are more important than sticking to one paradigm for no reason. For example, you do not need a class just for one method. You also do not need a class if all your methods are static, which means they do not need any "state", like an instance of an object that keeps a certain state during its lifetime. Look at the code cell above for example:
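For the static-method point specifically, here is a small supplementary sketch (not from the original lesson; the converter class is invented for illustration):
# Sketch: a stateless helper kept on a class via @staticmethod; no instance is needed
class TemperatureConverter(object):
    @staticmethod
    def to_fahrenheit(celsius):
        return celsius * 9.0 / 5.0 + 32

print(TemperatureConverter.to_fahrenheit(100))   # 212.0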
End of explanation |
737 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Matrix
Step2: Flatten Matrix | Python Code:
# Load library
import numpy as np
Explanation: Title: Flatten A Matrix
Slug: flatten_a_matrix
Summary: How to flatten a matrix in Python.
Date: 2017-09-02 12:00
Category: Machine Learning
Tags: Vectors Matrices Arrays
Authors: Chris Albon
Preliminaries
End of explanation
# Create matrix
matrix = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
Explanation: Create Matrix
End of explanation
# Flatten matrix
matrix.flatten()
Explanation: Flatten Matrix
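As a hedged side note (not part of the original recipe): NumPy has equivalent ways to flatten; flatten() always returns a copy, while ravel() returns a view when it can, which avoids copying large arrays.
# Alternative flattening approaches on the same matrix
matrix.ravel()        # view when possible, so no copy for large arrays
matrix.reshape(-1)    # same flattened result via reshape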
End of explanation |
738 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
If you're finding it hard to dedicate enough time for this course a week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use TensorFlow Layers or TensorFlow Layers (contrib) to build each layer, except "Convolutional & Max Pooling" layer. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
If you would like to get the most of this course, try to solve all the problems without TF Layers. Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
Note
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
min_data = np.min(x)
max_data = np.max(x)
normalize_data = (x - min_data) / (max_data - min_data)
return normalize_data
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
num = len(x)
label = np.zeros((num, 10))
label[np.arange(num), x] = 1
return label
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
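One hedged way to avoid reinventing the wheel, assuming scikit-learn is available in this environment, is sklearn.preprocessing.LabelBinarizer; fit it once on the label range 0-9 and reuse it between calls:
# Sketch: one-hot encoding with scikit-learn (assumes sklearn is installed)
from sklearn.preprocessing import LabelBinarizer

lb = LabelBinarizer()
lb.fit(list(range(10)))                # fix the label space 0-9 once, outside the function
print(lb.transform([1, 5, 9]))         # each row is a one-hot vector of length 10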
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a bach of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
width, height, depth = image_shape
return tf.placeholder(tf.float32, shape=(None, width, height, depth), name = "x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=(None, n_classes), name = "y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32, name = "keep_prob")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
If you're finding it hard to dedicate enough time for this course a week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use TensorFlow Layers or TensorFlow Layers (contrib) to build each layer, except "Convolutional & Max Pooling" layer. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
If you would like to get the most of this course, try to solve all the problems without TF Layers. Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
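A small hedged sketch of what the None batch dimension buys you (the shapes below are illustrative, and the placeholder name is made up so it does not clash with the graph built later):
# Sketch: a None batch dimension accepts any batch size at feed time (TF 1.x API)
import numpy as np
import tensorflow as tf

sketch_x = tf.placeholder(tf.float32, shape=(None, 32, 32, 3), name="sketch_x")
shape_op = tf.shape(sketch_x)
with tf.Session() as s:
    for batch in (np.zeros((4, 32, 32, 3)), np.zeros((128, 32, 32, 3))):
        print(s.run(shape_op, feed_dict={sketch_x: batch}))   # [4 32 32 3] then [128 32 32 3]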
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
num, width, height, depth = x_tensor.get_shape().as_list()
conv_width, conv_height = conv_ksize
conv_strides_width, conv_strides_height = conv_strides
pool_width, pool_height = pool_ksize
pool_strides_width, pool_strides_height = pool_strides
weight = tf.Variable(tf.truncated_normal([conv_width, conv_height, depth, conv_num_outputs], mean=0, stddev=0.1))
bias = tf.Variable(tf.zeros(conv_num_outputs))
x = tf.nn.conv2d(x_tensor, weight, strides = [1, conv_strides_width, conv_strides_height, 1], padding = "SAME")
x = tf.nn.bias_add(x, bias)
x = tf.nn.relu(x)
x = tf.nn.max_pool(x, ksize = [1, pool_width, pool_height, 1], strides = [1, pool_strides_width, pool_strides_height, 1], padding = "SAME")
return x
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer. You're free to use any TensorFlow package for all the other layers.
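As a hedged aside: with 'SAME' padding the spatial output size depends only on the stride (it is the ceiling of the input size divided by the stride), which can help you sanity-check the shapes produced by the convolution and pooling above.
# Sketch: expected output height/width under 'SAME' padding
import math

def same_padding_output(size, stride):
    return int(math.ceil(float(size) / stride))

print(same_padding_output(32, 1))   # stride-1 convolution keeps 32
print(same_padding_output(32, 2))   # 2x2 max pool with stride 2 gives 16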
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
num, width, height, depth = x_tensor.get_shape().as_list()
return tf.reshape(x_tensor, [-1, width * height * depth])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
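If you take the library route mentioned above, a hedged sketch using the TF 1.x contrib API (the same one used for the fully connected layers later) could look like this:
# Sketch: flattening via tf.contrib.layers instead of a manual tf.reshape
import tensorflow as tf

sketch_4d = tf.placeholder(tf.float32, shape=(None, 10, 30, 6))
flat = tf.contrib.layers.flatten(sketch_4d)
print(flat.get_shape().as_list())    # [None, 1800]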
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
#num, dim = x_tensor.get_shape().as_list()
#weight = tf.Variable(tf.truncated_normal([dim, num_outputs], mean=0, stddev=0.1))
#bias = tf.Variable(tf.zeros(num_outputs))
#fc = tf.nn.relu(tf.add(tf.matmul(x_tensor, weight), bias))
fc = tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn = tf.nn.relu)
return fc
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
#num, dim = x_tensor.get_shape().as_list()
#weight = tf.Variable(tf.truncated_normal([dim, num_outputs], mean=0, stddev=0.1))
#bias = tf.Variable(tf.zeros(num_outputs))
#output_layer = tf.add(tf.matmul(x_tensor, weight), bias)
output_layer = tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn = None)
return output_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
Note: Activation, softmax, or cross entropy shouldn't be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv1 = conv2d_maxpool(x, 64, [3, 3], [1, 1], [2, 2], [2, 2])
conv2 = conv2d_maxpool(conv1, 128, [2, 2], [1, 1], [2, 2], [2, 2])
conv3 = conv2d_maxpool(conv2, 256, [2, 2], [1, 1], [2, 2], [2, 2])
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
fc1 = flatten(conv3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fc1 = fully_conn(fc1, 4096)
fc2 = tf.nn.dropout(fc1, keep_prob)
fc2 = fully_conn(fc2, 4096)
fc3 = tf.nn.dropout(fc2, keep_prob)
fc3 = fully_conn(fc3, 4096)
fc3 = tf.nn.dropout(fc3, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out = output(fc3, 10)
# TODO: return output
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
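For the dropout point just above, a minimal hedged reminder of how tf.nn.dropout is driven by the keep_prob placeholder (the tensor names are illustrative), so it can be switched off by feeding 1.0 at validation and test time:
# Sketch: dropout controlled by a keep_prob placeholder (TF 1.x)
import tensorflow as tf

sketch_keep_prob = tf.placeholder(tf.float32)
sketch_dense = tf.placeholder(tf.float32, shape=(None, 4096))
sketch_dropped = tf.nn.dropout(sketch_dense, sketch_keep_prob)   # feed 0.5 while training, 1.0 for evaluation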
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict = {x: feature_batch, y: label_batch, keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.})
    valid_acc = session.run(accuracy, feed_dict = {x: valid_features, y: valid_labels, keep_prob: 1.})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 10
batch_size = 256
keep_probability = 0.5
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
739 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 1
Step1: 11 - Using CSV Module
But there are many small things that will cause us problems if we try and write the CSV reader by ourselves. So we will re write the above using python's csv module
Step2: 12 - Intro to XLRD
This module allows us to work with Excel documents whether it is the old .xls or the new .xlsx format
We can install xlrd using
pip install xlrd
Step3: 13 - Reading Excel Files
Read the ERCOT load excel file
Find min., max. and avg. for COAST and report timestamp (Hour_End) for min. and max.
Step4: 15 - Intro to JSON
sometimes fields have nested fields
sometimes items may have different fields. sometimes optional
Resources
JSON Tutorial
http
Step5: 18 - Exploring JSON
Step6: Problem Set Starts here
Using CSV
Your task is to process the supplied file and use the csv module to extract data from it.
The data comes from NREL (National Renewable Energy Laboratory) website. Each file
contains information from one meteorological station, in particular - about amount of
solar and wind energy for each hour of day.
Note that the first line of the datafile is neither data entry, nor header. It is a line
describing the data source. You should extract the name of the station from it.
The data should be returned as a list of lists (not dictionaries).
You can use the csv modules reader method to get data in such format.
Another useful method is next() - to get the next line from the iterator.
You should only change the parse_file function.
Resources
Data comes from NREL website. The datafile in this exercise is a small subset from the full file for one of the stations. You can download it from the Downloadables section > or see the full data files for other stations on the National Solar Radiation Data Base.
Documentation on csv.reader on docs.python.org
Documentation on Reader object methods on docs.python.org
Step7: Excel to CSV
Find the time and value of max load for each of the regions
COAST, EAST, FAR_WEST, NORTH, NORTH_C, SOUTHERN, SOUTH_C, WEST
and write the result out in a csv file, using pipe character | as the delimiter.
An example output can be seen in the "example.csv" file.
Resources
See csv module documentation on how to use different delimeters for csv.writer- http
Step8: Wrangling JSON
This exercise shows some important concepts that you should be aware about | Python Code:
import os
DATA_FILE_CSV = "beatles-diskography.csv"
def parse_file(data_file):
data = []
row_count = 0
with open(data_file) as f:
header = f.readline().split(',')
for line in f:
if row_count >= 10:
break
fields = line.strip().split(',')
row = {}
for i, value in enumerate(fields):
row[header[i].strip()] = value.strip()
data.append(row)
row_count += 1
return data
d = parse_file(DATA_FILE_CSV)
d[0]
def test(data):
assert data[0] == {'BPI Certification': 'Gold',
'Label': 'Parlophone(UK)',
'RIAA Certification': 'Platinum',
'Released': '22 March 1963',
'Title': 'Please Please Me',
'UK Chart Position': '1',
'US Chart Position': '\xe2\x80\x94'}
assert data[9] == {'BPI Certification': 'Gold',
'Label': 'Parlophone(UK)',
'RIAA Certification': '',
'Released': '10 July 1964',
'Title': '',
'UK Chart Position': '1',
'US Chart Position': '\xe2\x80\x94'}
test(d)
Explanation: Lesson 1: Data Extraction Fundamentals
01 - Intro
Data scientists spend about 70% of their time data wrangling
Data wrangling is the process of gathering, extracting, cleaning and storing our data
We need to make sure that data is in good shape before doing any analysis
otherwise we
- waste a lot of time
- lose the faith of your colleagues
03 - Assessing the Quality of Data Pt 1
We should not trust any data as we get it, because the data may be
- entered by a human
- created by a program written by a human
04 - Assessing the Quality of Data Pt 2
We need to assess our data to
- Test assumptions about
- values
- data types
- shape
- identify errors or outliers
- find missing values
- ensure that our data will support the type of queries that we need it to make
- Eliminate any surprises later on
05 - Tabular Format
06 - CSV Format
CSV is lightweight
- Each line of text is a single row
- Fields are separated by delimiter
- stores just the data itself
- don't need special purpose software
- all spreadsheet software read/write CSV
07 - Parsing CSV Files in Python
not all spreadsheet software can handle big files
reading CSV files manually is not an option when the number of files is big
We will try and parse a CSV file as a list of dictionaries
End of explanation
import csv
def parse_csv(data_file):
data = []
with open(data_file, 'rb') as sd:
r = csv.DictReader(sd)
for line in r:
data.append(line)
return data
test(parse_csv(DATA_FILE_CSV))
Explanation: 11 - Using CSV Module
But there are many small things that will cause us problems if we try and write the CSV reader by ourselves. So we will re write the above using python's csv module
End of explanation
def read_sheet(sheet):
return [[sheet.cell_value(r, col)
for col in range(sheet.ncols)]
for r in range(sheet.nrows)]
import xlrd
DATA_FILE_EXCEL = "2013_ERCOT_Hourly_Load_Data.xls"
def parse_excel_file(datafile):
workbook = xlrd.open_workbook(datafile)
sheet = workbook.sheet_by_index(0)
data = read_sheet(sheet)
print "\nList Comprehension"
print "data[3][2]:",
print data[3][2]
print "\nCells in a nested loop:"
for row in range(sheet.nrows):
for col in range(sheet.ncols):
if row == 50:
print sheet.cell_value(row, col),
### other useful methods:
print "\nROWS, COLUMNS, and CELLS:"
print "Number of rows in the sheet:",
print sheet.nrows
print "Type of data in cell (row 3, col 2):",
print sheet.cell_type(3, 2)
print "Value in cell (row 3, col 2):",
print sheet.cell_value(3, 2)
print "Get a slice of values in column 3, from rows 1-3:"
print sheet.col_values(3, start_rowx=1, end_rowx=4)
print "\nDATES:"
print "Type of data in cell (row 1, col 0):",
print sheet.cell_type(1, 0)
exceltime = sheet.cell_value(1, 0)
print "Time in Excel format:",
print exceltime
print "Convert time to a Python datetime tuple, from the Excel float:",
print xlrd.xldate_as_tuple(exceltime, 0)
return data
data = parse_excel_file(DATA_FILE_EXCEL)
data[0:2]
Explanation: 12 - Intro to XLRD
This module allows us to work with Excel documents, whether they are in the old .xls or the new .xlsx format
We can install xlrd using
pip install xlrd
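A hedged side note, not part of the original lesson: newer xlrd releases only read the old .xls format, so if you need to open a modern .xlsx file, openpyxl is a common alternative (the file name below is just illustrative):
# Sketch: reading an .xlsx file with openpyxl (assumes openpyxl is installed)
from openpyxl import load_workbook

wb = load_workbook("some_workbook.xlsx", read_only=True)
ws = wb.active
print(ws.cell(row=1, column=1).value)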
End of explanation
def data_for_column(sheet, column_index):
return sheet.col_values(column_index, start_rowx=1, end_rowx=None)
def row_index_for_value_in_column(data, value):
return data.index(value) + 1
def cell_value_at_position(sheet, row, col):
return sheet.cell_value(row, col)
def parse_excel_date(excel_date):
return xlrd.xldate_as_tuple(excel_date, 0)
def get_date_for_row_containing_value(sheet, column_data, value):
index = row_index_for_value_in_column(column_data, value)
date = cell_value_at_position(sheet, index, 0)
result = parse_excel_date(date)
return result
def parse_file_13(datafile):
workbook = xlrd.open_workbook(datafile)
sheet = workbook.sheet_by_index(0)
sheet_data = read_sheet(sheet)
cv = data_for_column(sheet, 1)
max_data = max(cv)
min_data = min(cv)
#print sheet_data
data = {
'maxtime': get_date_for_row_containing_value(sheet, cv, max_data),
'maxvalue': max_data,
'mintime': get_date_for_row_containing_value(sheet, cv, min_data),
'minvalue': min_data,
'avgcoast': sum(cv) / float(len(cv))
}
return data
import pprint
data = parse_file_13(DATA_FILE_EXCEL)
pprint.pprint(data)
assert data['maxtime'] == (2013, 8, 13, 17, 0, 0)
assert round(data['maxvalue'], 10) == round(18779.02551, 10)
Explanation: 13 - Reading Excel Files
Read the ERCOT load excel file
Find min., max. and avg. for COAST and report timestamp (Hour_End) for min. and max.
End of explanation
import json
import requests
BASE_URL = "http://musicbrainz.org/ws/2/"
ARTIST_URL = BASE_URL + "artist/"
# query parameters are given to the requests.get function as a dictionary; this
# variable contains some starter parameters.
query_type = { "simple": {},
"atr": {"inc": "aliases+tags+ratings"},
"aliases": {"inc": "aliases"},
"releases": {"inc": "releases"}}
def query_site(url, params, uid="", fmt="json"):
# This is the main function for making queries to the musicbrainz API.
# A json document should be returned by the query.
params["fmt"] = fmt
r = requests.get(url + uid, params=params)
print "requesting", r.url
if r.status_code == requests.codes.ok:
return r.json()
else:
r.raise_for_status()
def query_by_name(url, params, name):
# This adds an artist name to the query parameters before making
# an API call to the function above.
params["query"] = "artist:" + name
return query_site(url, params)
def pretty_print(data, indent=4):
# After we get our output, we can format it to be more readable
# by using this function.
if type(data) == dict:
print json.dumps(data, indent=indent, sort_keys=True)
else:
print data
def json_play():
'''
Modify the function calls and indexing below to answer the questions on
the next quiz. HINT: Note how the output we get from the site is a
multi-level JSON document, so try making print statements to step through
the structure one level at a time or copy the output to a separate output
file.
'''
results = query_by_name(ARTIST_URL, query_type["simple"], "Nirvana")
print "All Results for Nirvana"
pretty_print(results)
artist_id = results["artists"][1]["id"]
print "\nARTIST:"
pretty_print(results["artists"][1])
artist_data = query_site(ARTIST_URL, query_type["releases"], artist_id)
releases = artist_data["releases"]
print "\nONE RELEASE:"
pretty_print(releases[0], indent=2)
release_titles = [r["title"] for r in releases]
print "\nALL TITLES:"
for t in release_titles:
print t
json_play()
Explanation: 15 - Intro to JSON
- sometimes fields have nested fields
- sometimes items may have different fields; some fields are optional
Resources
JSON Tutorial
http://www.json.org/
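A tiny hedged example of the nested and optional fields mentioned above (the record is made up):
# Sketch: nested and optional fields in a JSON document
import json

record = json.loads('{"title": "Abbey Road", "label": {"country": "UK"}, "aliases": []}')
print(record["label"]["country"])             # nested field
print(record.get("rating", "not provided"))   # optional field handled with .get()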
17 - JSON Playground
End of explanation
def is_group(artist):
return 'type' in artist and artist['type'].lower() == 'group'
def has_same_name(artist, name):
return artist['name'].lower() == name.lower()
def band_by_name(name):
results = query_by_name(ARTIST_URL, query_type["simple"], name)
return filter(lambda x: is_group(x) and has_same_name(x, name), results['artists'])
#number of bands with the name
len(band_by_name("FIRST AID KIT"))
#Name of Queen's begin area name
band_by_name("queen")[0]['begin-area']['name']
#Spanish alias for the beatles
all_aliases = band_by_name('the beatles')[0]['aliases']
filter(lambda x: x['locale'] == 'es', all_aliases)[0]['name']
#disambiguation for nirvana
filter(lambda x: x['country'] == 'US',band_by_name('nirvana'))[0]['disambiguation']
#When was one direction formed?
band_by_name('one direction')[0]['life-span']['begin']
Explanation: 18 - Exploring JSON
End of explanation
import csv
import os
DATA_DIR = ""
DATA_FILE = "745090.csv"
def parse_file(datafile):
name = ""
data = []
with open(datafile, 'rb') as f:
reader = csv.reader(f)
for i, row in enumerate(reader):
if i == 0:
name = row[1]
elif i == 1:
pass
else:
data.append(row)
# Do not change the line below
return name, data
def test1():
datafile = os.path.join(DATA_DIR, DATA_FILE)
name, data = parse_file(datafile)
assert name == "MOUNTAIN VIEW MOFFETT FLD NAS"
assert data[0][1] == "01:00"
assert data[2][0] == "01/01/2005"
assert data[2][5] == "2"
test1()
Explanation: Problem Set Starts here
Using CSV
Your task is to process the supplied file and use the csv module to extract data from it.
The data comes from NREL (National Renewable Energy Laboratory) website. Each file
contains information from one meteorological station, in particular - about amount of
solar and wind energy for each hour of day.
Note that the first line of the datafile is neither data entry, nor header. It is a line
describing the data source. You should extract the name of the station from it.
The data should be returned as a list of lists (not dictionaries).
You can use the csv modules reader method to get data in such format.
Another useful method is next() - to get the next line from the iterator.
You should only change the parse_file function.
Resources
Data comes from NREL website. The datafile in this exercise is a small subset from the full file for one of the stations. You can download it from the Downloadables section > or see the full data files for other stations on the National Solar Radiation Data Base.
Documentation on csv.reader on docs.python.org
Documentation on Reader object methods on docs.python.org
End of explanation
import xlrd
import os
import csv
DATA_FILE = "2013_ERCOT_Hourly_Load_Data.xls"
OUT_FILE = "2013_Max_Loads.csv"
def get_max_and_max_date_for_column(sheet, column_index):
data = data_for_column(sheet, column_index)
max_data = max(data)
date = get_date_for_row_containing_value(sheet, data, max_data)
return max_data, date
def parse_file(datafile):
workbook = xlrd.open_workbook(datafile)
sheet = workbook.sheet_by_index(0)
return {
'COAST': get_max_and_max_date_for_column(sheet, 1),
'EAST': get_max_and_max_date_for_column(sheet, 2),
'FAR_WEST': get_max_and_max_date_for_column(sheet, 3),
'NORTH': get_max_and_max_date_for_column(sheet, 4),
'NORTH_C': get_max_and_max_date_for_column(sheet, 5),
'SOUTHERN': get_max_and_max_date_for_column(sheet, 6),
'SOUTH_C': get_max_and_max_date_for_column(sheet, 7),
'WEST': get_max_and_max_date_for_column(sheet, 8)
}
def save_file(data, filename):
result = ""
with open(filename, 'w') as f:
result += "Station|Year|Month|Day|Hour|Max Load\n"
for key, value in data.iteritems():
result += "{}|{}|{}|{}|{}|{}\n".format(
key, value[1][0], value[1][1], value[1][2], value[1][3], value[0])
result = result.strip("\n")
f.write(result)
def test2():
# open_zip(DATA_FILE)
data = parse_file(DATA_FILE)
save_file(data, OUT_FILE)
number_of_rows = 0
stations = []
ans = {'FAR_WEST': {'Max Load': '2281.2722140000024',
'Year': '2013',
'Month': '6',
'Day': '26',
'Hour': '17'}}
correct_stations = ['COAST', 'EAST', 'FAR_WEST', 'NORTH',
'NORTH_C', 'SOUTHERN', 'SOUTH_C', 'WEST']
fields = ['Year', 'Month', 'Day', 'Hour', 'Max Load']
with open(OUT_FILE) as of:
csvfile = csv.DictReader(of, delimiter="|")
for line in csvfile:
station = line['Station']
if station == 'FAR_WEST':
for field in fields:
# Check if 'Max Load' is within .1 of answer
if field == 'Max Load':
max_answer = round(float(ans[station][field]), 1)
max_line = round(float(line[field]), 1)
assert max_answer == max_line
# Otherwise check for equality
else:
assert ans[station][field] == line[field]
number_of_rows += 1
stations.append(station)
# Output should be 8 lines not including header
assert number_of_rows == 8
# Check Station Names
assert set(stations) == set(correct_stations)
test2()
Explanation: Excel to CSV
Find the time and value of max load for each of the regions
COAST, EAST, FAR_WEST, NORTH, NORTH_C, SOUTHERN, SOUTH_C, WEST
and write the result out in a csv file, using pipe character | as the delimiter.
An example output can be seen in the "example.csv" file.
Resources
See csv module documentation on how to use different delimiters for csv.writer - http://docs.python.org/2/library/csv.html
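The save_file solution above writes the pipe-separated lines by hand; a hedged alternative sketch using csv.writer with delimiter='|' (the data row reuses the COAST maximum found earlier) could look like this:
# Sketch: writing pipe-delimited rows with csv.writer instead of manual string formatting
import csv

with open("example_out.csv", "w") as f:          # in Python 3, also pass newline=""
    writer = csv.writer(f, delimiter="|")
    writer.writerow(["Station", "Year", "Month", "Day", "Hour", "Max Load"])
    writer.writerow(["COAST", 2013, 8, 13, 17, 18779.02551])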
End of explanation
import json
import codecs
import requests
URL_MAIN = "http://api.nytimes.com/svc/"
URL_POPULAR = URL_MAIN + "mostpopular/v2/"
API_KEY = { "popular": "",
"article": ""}
def get_from_file(kind, period):
filename = "popular-{0}-{1}.json".format(kind, period)
with open(filename, "r") as f:
return json.loads(f.read())
def article_overview(kind, period):
data = get_from_file(kind, period)
titles = []
urls = []
for row in data:
titles.append({row['section']: row['title']})
for media in row['media']:
for metadata in media['media-metadata']:
if metadata['format'] == 'Standard Thumbnail':
urls.append(metadata['url'])
return titles, urls
def query_site(url, target, offset):
# This will set up the query with the API key and offset
# Web services often use offset paramter to return data in small chunks
# NYTimes returns 20 articles per request, if you want the next 20
# You have to provide the offset parameter
if API_KEY["popular"] == "" or API_KEY["article"] == "":
print "You need to register for NYTimes Developer account to run this program."
print "See Intructor notes for information"
return False
params = {"api-key": API_KEY[target], "offset": offset}
r = requests.get(url, params = params)
if r.status_code == requests.codes.ok:
return r.json()
else:
r.raise_for_status()
def get_popular(url, kind, days, section="all-sections", offset=0):
# This function will construct the query according to the requirements of the site
# and return the data, or print an error message if called incorrectly
if days not in [1, 7, 30]:
print "Time period can be 1,7, 30 days only"
return False
if kind not in ["viewed", "shared", "emailed"]:
print "kind can be only one of viewed/shared/emailed"
return False
url += "most{0}/{1}/{2}.json".format(kind, section, days)
data = query_site(url, "popular", offset)
return data
def save_file(kind, period):
# This will process all results, by calling the API repeatedly with supplied offset value,
# combine the data and then write all results in a file.
data = get_popular(URL_POPULAR, "viewed", 1)
num_results = data["num_results"]
full_data = []
with codecs.open("popular-{0}-{1}.json".format(kind, period), encoding='utf-8', mode='w') as v:
for offset in range(0, num_results, 20):
data = get_popular(URL_POPULAR, kind, period, offset=offset)
full_data += data["results"]
v.write(json.dumps(full_data, indent=2))
def test3():
titles, urls = article_overview("viewed", 1)
assert len(titles) == 20
assert len(urls) == 30
assert titles[2] == {'Opinion': 'Professors, We Need You!'}
assert urls[20] == 'http://graphics8.nytimes.com/images/2014/02/17/sports/ICEDANCE/ICEDANCE-thumbStandard.jpg'
test3()
Explanation: Wrangling JSON
This exercise shows some important concepts that you should be aware of:
- using codecs module to write unicode files
- using authentication with web APIs
- using offset when accessing web APIs
To run this code locally you have to register at the NYTimes developer site
and get your own API key. You will be able to complete this exercise in our UI
without doing so, as we have provided a sample result.
Your task is to process the saved file that represents the most popular
articles (by view count) from the last day, and return the following data:
- list of dictionaries, where the dictionary key is "section" and value is "title"
- list of URLs for all media entries with "format": "Standard Thumbnail"
All your changes should be in the article_overview function.
The rest of functions are provided for your convenience, if you want to access
the API by yourself.
If you want to know more, or query the site by yourself, please read the NYTimes Developer Documentation for the Most Popular API and apply for your own API Key for NY Times.
End of explanation |
740 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling and Simulation in Python
Rabbit example
Copyright 2017 Allen Downey
License
Step1: Rabbit is Rich
This notebook starts with a version of the rabbit population growth model. You will modify it using some of the tools in Chapter 5. Before you attempt this diagnostic, you should have a good understanding of State objects, as presented in Section 5.4. And you should understand the version of run_simulation in Section 5.7.
Separating the State from the System
Here's the System object from the previous diagnostic. Notice that it includes system parameters, which don't change while the simulation is running, and population variables, which do. We're going to improve that by pulling the population variables into a State object.
Step2: In the following cells, define a State object named init that contains two state variables, juveniles and adults, with initial values 0 and 10. Make a version of the System object that does NOT contain juvenile_pop0 and adult_pop0, but DOES contain init.
Step4: Updating run_simulation
Here's the version of run_simulation from last time
Step6: In the cell below, write a version of run_simulation that works with the new System object (the one that contains a State object named init).
Hint
Step7: Test your changes in run_simulation
Step9: Plotting the results
Here's a version of plot_results that plots both the adult and juvenile TimeSeries.
Step10: If your changes in the previous section were successful, you should be able to run this new version of plot_results.
Step13: That's the end of the diagnostic. If you were able to get it done quickly, and you would like a challenge, here are two bonus questions
Step16: Bonus question #2
Factor out the update function.
Write a function called update that takes a State object and a System object and returns a new State object that represents the state of the system after one time step.
Write a version of run_simulation that takes an update function as a parameter and uses it to compute the update.
Run your new version of run_simulation and plot the results.
WARNING | Python Code:
%matplotlib inline
from modsim import *
Explanation: Modeling and Simulation in Python
Rabbit example
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
system = System(t0 = 0,
t_end = 20,
juvenile_pop0 = 0,
adult_pop0 = 10,
birth_rate = 0.9,
mature_rate = 0.33,
death_rate = 0.5)
system
Explanation: Rabbit is Rich
This notebook starts with a version of the rabbit population growth model. You will modify it using some of the tools in Chapter 5. Before you attempt this diagnostic, you should have a good understanding of State objects, as presented in Section 5.4. And you should understand the version of run_simulation in Section 5.7.
Separating the State from the System
Here's the System object from the previous diagnostic. Notice that it includes system parameters, which don't change while the simulation is running, and population variables, which do. We're going to improve that by pulling the population variables into a State object.
End of explanation
# Solution
init = State(juveniles=0, adults=10)
init
# Solution
system = System(t0 = 0,
t_end = 20,
init = init,
birth_rate = 0.9,
mature_rate = 0.33,
death_rate = 0.5)
system
Explanation: In the following cells, define a State object named init that contains two state variables, juveniles and adults, with initial values 0 and 10. Make a version of the System object that does NOT contain juvenile_pop0 and adult_pop0, but DOES contain init.
End of explanation
def run_simulation(system):
Runs a proportional growth model.
Adds TimeSeries to `system` as `results`.
system: System object
juveniles = TimeSeries()
juveniles[system.t0] = system.juvenile_pop0
adults = TimeSeries()
adults[system.t0] = system.adult_pop0
for t in linrange(system.t0, system.t_end):
maturations = system.mature_rate * juveniles[t]
births = system.birth_rate * adults[t]
deaths = system.death_rate * adults[t]
if adults[t] > 30:
market = adults[t] - 30
else:
market = 0
juveniles[t+1] = juveniles[t] + births - maturations
adults[t+1] = adults[t] + maturations - deaths - market
system.adults = adults
system.juveniles = juveniles
Explanation: Updating run_simulation
Here's the version of run_simulation from last time:
End of explanation
# Solution
def run_simulation(system):
Runs a proportional growth model.
Adds TimeSeries to `system` as `results`.
system: System object
juveniles = TimeSeries()
juveniles[system.t0] = system.init.juveniles
adults = TimeSeries()
adults[system.t0] = system.init.adults
for t in linrange(system.t0, system.t_end):
maturations = system.mature_rate * juveniles[t]
births = system.birth_rate * adults[t]
deaths = system.death_rate * adults[t]
if adults[t] > 30:
market = adults[t] - 30
else:
market = 0
juveniles[t+1] = juveniles[t] + births - maturations
adults[t+1] = adults[t] + maturations - deaths - market
system.adults = adults
system.juveniles = juveniles
Explanation: In the cell below, write a version of run_simulation that works with the new System object (the one that contains a State object named init).
Hint: you only have to change two lines.
End of explanation
run_simulation(system)
system.adults
Explanation: Test your changes in run_simulation:
End of explanation
def plot_results(system, title=None):
Plot the estimates and the model.
system: System object with `results`
newfig()
plot(system.adults, 'bo-', label='adults')
plot(system.juveniles, 'gs-', label='juveniles')
decorate(xlabel='Season',
ylabel='Rabbit population',
title=title)
Explanation: Plotting the results
Here's a version of plot_results that plots both the adult and juvenile TimeSeries.
End of explanation
plot_results(system, title='Proportional growth model')
Explanation: If your changes in the previous section were successful, you should be able to run this new version of plot_results.
End of explanation
# Solution
def run_simulation(system):
Runs a proportional growth model.
Adds TimeSeries to `system` as `results`.
system: System object
results = TimeFrame(columns = system.init.index)
results.loc[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
juveniles, adults = results.loc[t]
maturations = system.mature_rate * juveniles
births = system.birth_rate * adults
deaths = system.death_rate * adults
if adults > 30:
market = adults - 30
else:
market = 0
juveniles += births - maturations
adults += maturations - deaths - market
results.loc[t+1] = juveniles, adults
system.results = results
run_simulation(system)
# Solution
def plot_results(system, title=None):
Plot the estimates and the model.
system: System object with `results`
newfig()
plot(system.results.adults, 'bo-', label='adults')
plot(system.results.juveniles, 'gs-', label='juveniles')
decorate(xlabel='Season',
ylabel='Rabbit population',
title=title)
plot_results(system)
Explanation: That's the end of the diagnostic. If you were able to get it done quickly, and you would like a challenge, here are two bonus questions:
Bonus question #1
Write a version of run_simulation that puts the results into a single TimeFrame named results, rather than two TimeSeries objects.
Write a version of plot_results that can plot the results in this form.
WARNING: This question is substantially harder, and requires you to have a good understanding of everything in Chapter 5. We don't expect most people to be able to do this exercise at this point.
End of explanation
# Solution
def update(state, system):
Compute the state of the system after one time step.
state: State object with juveniles and adults
system: System object
returns: State object
juveniles, adults = state
maturations = system.mature_rate * juveniles
births = system.birth_rate * adults
deaths = system.death_rate * adults
if adults > 30:
market = adults - 30
else:
market = 0
juveniles += births - maturations
adults += maturations - deaths - market
return State(juveniles=juveniles, adults=adults)
def run_simulation(system, update_func):
Runs a proportional growth model.
Adds TimeSeries to `system` as `results`.
system: System object
results = TimeFrame(columns = system.init.index)
results.loc[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
results.loc[t+1] = update_func(results.loc[t], system)
system.results = results
run_simulation(system, update)
plot_results(system)
Explanation: Bonus question #2
Factor out the update function.
Write a function called update that takes a State object and a System object and returns a new State object that represents the state of the system after one time step.
Write a version of run_simulation that takes an update function as a parameter and uses it to compute the update.
Run your new version of run_simulation and plot the results.
WARNING: This question is substantially harder, and requires you to have a good understanding of everything in Chapter 5. We don't expect most people to be able to do this exercise at this point.
End of explanation |
741 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-ll', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: MOHC
Source ID: HADGEM3-GC31-LL
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:14
Document Setup
IMPORTANT: to be executed each time you run the notebook
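For orientation, every cell that follows repeats the same two-step pattern: point DOC at a property path with set_id, then record the answer with set_value (alongside the author/contributor/publication calls above). A purely illustrative, hypothetical fill-in might look like the commented lines below -- the property path is one used later in this notebook, but the values are placeholders, not MOHC answers:
# Hypothetical illustration of the fill-in pattern -- placeholder values only.
# DOC.set_author("Jane Doe", "jane.doe@example.org")            # placeholder author
# DOC.set_id('cmip6.atmos.key_properties.overview.model_name')  # pick a property
# DOC.set_value("ExampleAtmosModel v1.0")                       # placeholder value
# DOC.set_publication_status(0)                                 # 0 = do not publish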
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Reprenstation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
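BOOLEAN properties follow the same pattern with a Python True/False literal; the value shown is a placeholder only.
# Hypothetical, filled-in example for a BOOLEAN property
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
DOC.set_value(True)   # placeholder; the real answer depends on the model being documented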
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
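FLOAT properties likewise take an unquoted number; the frequency below is an illustrative W-band value, not a claim about any model.
# Hypothetical, filled-in example for a FLOAT property
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
DOC.set_value(94.0e9)   # ~94 GHz, a typical cloud-radar band; placeholder only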
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
742 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<img src="img/scikit-learn-logo.png" width="40%" />
<br />
<h1>Robust and calibrated estimators with Scikit-Learn</h1>
<br /><br />
Gilles Louppe (<a href="https
Step1: Motivation
In theory,
- Samples $x$ are drawn from a distribution $P$;
- As data increases, convergence towards the optimal model is guaranteed.
In practice,
- A few samples may be distant from other samples
Step2: Ensembling for robustness
Bias-variance decomposition
Theorem. For the squared error loss, the bias-variance decomposition of the expected
generalization error at $X=\mathbf{x}$ is
$$
\mathbb{E}_{\cal L} \{ Err(\varphi_{\cal L}(\mathbf{x})) \} = \text{noise}(\mathbf{x}) + \text{bias}^2(\mathbf{x}) + \text{var}(\mathbf{x})
$$
<center>
<img src="img/bv.png" width="50%" />
</center>
Variance and robustness
Low variance implies robustness to outliers
High variance implies sensitivity to data peculiarities
Ensembling reduces variance
Theorem. For the squared error loss, the bias-variance decomposition of the expected generalization error at $X=x$ of an ensemble of $M$ randomized models $\varphi_{{\cal L},\theta_m}$ is
$$
\mathbb{E}_{\cal L} \{ Err(\psi_{{\cal L},\theta_1,\dots,\theta_M}(\mathbf{x})) \} = \text{noise}(\mathbf{x}) + \text{bias}^2(\mathbf{x}) + \text{var}(\mathbf{x})
$$
where
\begin{align}
\text{noise}(\mathbf{x}) &= Err(\varphi_B(\mathbf{x})), \\
\text{bias}^2(\mathbf{x}) &= (\varphi_B(\mathbf{x}) - \mathbb{E}_{{\cal L},\theta} \{ \varphi_{{\cal L},\theta}(\mathbf{x}) \} )^2, \\
\text{var}(\mathbf{x}) &= \rho(\mathbf{x}) \sigma^2_{{\cal L},\theta}(\mathbf{x}) + \frac{1 - \rho(\mathbf{x})}{M} \sigma^2_{{\cal L},\theta}(\mathbf{x}).
\end{align}
Step3: From least squares to least absolute deviances
Robust learning
Most methods minimize the mean squared error $\frac{1}{N} \sum_i (y_i - \varphi(x_i))^2$
By definition, squaring residuals gives emphasis to large residuals.
Outliers are thus very likely to have a significant effect.
A robust alternative is to minimize instead the mean absolute deviation $\frac{1}{N} \sum_i |y_i - \varphi(x_i)|$
Large residuals are therefore given much less emphasis.
Step4: Robust scaling
Standardization of a dataset is a common requirement for many machine learning estimators.
Typically this is done by removing the mean and scaling to unit variance.
For similar reasons as before, outliers can influence the sample mean / variance in a negative way.
In such cases, the median and the interquartile range often give better results.
Step5: Calibration
In classification, you often want to predict not only the class label, but also the associated probability.
However, not all classifiers provide well-calibrated probabilities.
Thus, a separate calibration of predicted probabilities is often desirable as a postprocessing step.
Step6: Summary
For robust and calibrated estimators | Python Code:
# Global imports and settings
# Matplotlib
%matplotlib inline
from matplotlib import pyplot as plt
plt.rcParams["figure.figsize"] = (8, 8)
plt.rcParams["figure.max_open_warning"] = -1
# Print options
import numpy as np
np.set_printoptions(precision=3)
# Slideshow
from notebook.services.config import ConfigManager
cm = ConfigManager()
cm.update('livereveal', {'width': 1440, 'height': 768, 'scroll': True, 'theme': 'simple'})
# Silence warnings
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
warnings.simplefilter(action="ignore", category=UserWarning)
warnings.simplefilter(action="ignore", category=RuntimeWarning)
# Utils
from robustness import plot_surface
from robustness import plot_outlier_detector
%%javascript
Reveal.addEventListener("slidechanged", function(event){ window.location.hash = "header"; });
Explanation: <center>
<img src="img/scikit-learn-logo.png" width="40%" />
<br />
<h1>Robust and calibrated estimators with Scikit-Learn</h1>
<br /><br />
Gilles Louppe (<a href="https://twitter.com/glouppe">@glouppe</a>)
<br /><br />
New York University
</center>
End of explanation
# Unsupervised learning
estimator.fit(X_train) # no "y_train"
# Detecting novelty or outliers
y_pred = estimator.predict(X_test) # inliers == 1, outliers == -1
y_score = estimator.decision_function(X_test) # outliers == highest scores
# Generate data
from sklearn.datasets import make_blobs
inliers, _ = make_blobs(n_samples=200, centers=2, random_state=1)
outliers = np.random.rand(50, 2)
outliers = np.min(inliers, axis=0) + (np.max(inliers, axis=0) - np.min(inliers, axis=0)) * outliers
X = np.vstack((inliers, outliers))
ground_truth = np.ones(len(X), dtype=int)
ground_truth[-len(outliers):] = 0
from sklearn.svm import OneClassSVM
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest
# Unsupervised learning
estimator = OneClassSVM(nu=0.4, kernel="rbf", gamma=0.1)
# clf = EllipticEnvelope(contamination=.1)
# clf = IsolationForest(max_samples=100)
estimator.fit(X)
plot_outlier_detector(estimator, X, ground_truth)
Explanation: Motivation
In theory,
- Samples $x$ are drawn from a distribution $P$;
- As data increases, convergence towards the optimal model is guaranteed.
In practice,
- A few samples may be distant from other samples:
- either because they correspond to rare observations,
- or because they are due to experimental errors;
- Because data is finite, outliers might strongly affect the resulting model.
Today's goal: build models that are robust to outliers!
Outline
Motivation
Novelty and anomaly detection
Ensembling for robustness
From least squares to least absolute deviances
Calibration
Novelty and anomaly detection
Novelty detection:
- Training data is not polluted by outliers, and we are interested in detecting anomalies in new observations.
Outlier detection:
- Training data contains outliers, and we need to fit the central mode of the training data, ignoring the deviant observations.
API
End of explanation
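A quick sketch of swapping in one of the alternative detectors named above: same fit/predict interface, illustrative parameters taken from the commented lines, and the X, ground_truth and plotting helper defined in the previous cell.
# Sketch only: IsolationForest as a drop-in alternative
iso = IsolationForest(max_samples=100)
iso.fit(X)
y_pred_iso = iso.predict(X)   # inliers == 1, outliers == -1
plot_outlier_detector(iso, X, ground_truth)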
# Load data
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data[:, [0, 1]]
y = iris.target
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier().fit(X, y)
plot_surface(clf, X, y)
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
plot_surface(clf, X, y)
Explanation: Ensembling for robustness
Bias-variance decomposition
Theorem. For the squared error loss, the bias-variance decomposition of the expected
generalization error at $X=\mathbf{x}$ is
$$
\mathbb{E}_{\cal L} \{ Err(\varphi_{\cal L}(\mathbf{x})) \} = \text{noise}(\mathbf{x}) + \text{bias}^2(\mathbf{x}) + \text{var}(\mathbf{x})
$$
<center>
<img src="img/bv.png" width="50%" />
</center>
Variance and robustness
Low variance implies robustness to outliers
High variance implies sensitivity to data peculiarities
Ensembling reduces variance
Theorem. For the squared error loss, the bias-variance decomposition of the expected generalization error at $X=x$ of an ensemble of $M$ randomized models $\varphi_{{\cal L},\theta_m}$ is
$$
\mathbb{E}_{\cal L} \{ Err(\psi_{{\cal L},\theta_1,\dots,\theta_M}(\mathbf{x})) \} = \text{noise}(\mathbf{x}) + \text{bias}^2(\mathbf{x}) + \text{var}(\mathbf{x})
$$
where
\begin{align}
\text{noise}(\mathbf{x}) &= Err(\varphi_B(\mathbf{x})), \\
\text{bias}^2(\mathbf{x}) &= (\varphi_B(\mathbf{x}) - \mathbb{E}_{{\cal L},\theta} \{ \varphi_{{\cal L},\theta}(\mathbf{x}) \} )^2, \\
\text{var}(\mathbf{x}) &= \rho(\mathbf{x}) \sigma^2_{{\cal L},\theta}(\mathbf{x}) + \frac{1 - \rho(\mathbf{x})}{M} \sigma^2_{{\cal L},\theta}(\mathbf{x}).
\end{align}
End of explanation
# Generate data
from sklearn.datasets import make_regression
n_outliers = 3
X, y, coef = make_regression(n_samples=100, n_features=1, n_informative=1, noise=10,
coef=True, random_state=0)
np.random.seed(1)
X[-n_outliers:] = 1 + 0.25 * np.random.normal(size=(n_outliers, 1))
y[-n_outliers:] = -100 + 10 * np.random.normal(size=n_outliers)
plt.scatter(X[:-n_outliers], y[:-n_outliers], color="b")
plt.scatter(X[-n_outliers:], y[-n_outliers:], color="r")
plt.xlim(-3, 3)
plt.ylim(-150, 120)
plt.show()
# Fit with least squares vs. least absolute deviances
from sklearn.ensemble import GradientBoostingRegressor
clf_ls = GradientBoostingRegressor(loss="ls")
clf_lad = GradientBoostingRegressor(loss="lad")
clf_ls.fit(X, y)
clf_lad.fit(X, y)
# Plot
X_test = np.linspace(-5, 5).reshape(-1, 1)
plt.scatter(X[:-n_outliers], y[:-n_outliers], color="b")
plt.scatter(X[-n_outliers:], y[-n_outliers:], color="r")
plt.plot(X_test, clf_ls.predict(X_test), "g", label="Least squares")
plt.plot(X_test, clf_lad.predict(X_test), "y", label="Least absolute deviances")
plt.xlim(-3, 3)
plt.ylim(-150, 120)
plt.legend()
plt.show()
Explanation: From least squares to least absolute deviances
Robust learning
Most methods minimize the mean squared error $\frac{1}{N} \sum_i (y_i - \varphi(x_i))^2$
By definition, squaring residuals gives emphasis to large residuals.
Outliers are thus very likely to have a significant effect.
A robust alternative is to minimize instead the mean absolute deviation $\frac{1}{N} \sum_i |y_i - \varphi(x_i)|$
Large residuals are therefore given much less emphasis.
End of explanation
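A small numeric check of the claim above: a single large residual dominates the squared loss but carries far less weight under the absolute loss (numbers are illustrative).
residuals = np.array([1.0, 2.0, 100.0])   # one outlying residual
print(np.mean(residuals ** 2))     # 3335.0 -- dominated by the outlier
print(np.mean(np.abs(residuals)))  # ~34.3  -- much less affected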
# Generate data
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
X, y = make_blobs(n_samples=100, centers=[(0, 0), (-1, 0)], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
X_train[0, 0] = -1000 # a fairly large outlier
# Scale data
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import RobustScaler
standard_scaler = StandardScaler()
Xtr_s = standard_scaler.fit_transform(X_train)
Xte_s = standard_scaler.transform(X_test)
robust_scaler = RobustScaler()
Xtr_r = robust_scaler.fit_transform(X_train)
Xte_r = robust_scaler.transform(X_test)
# Plot data
fig, ax = plt.subplots(1, 3, figsize=(12, 4))
ax[0].scatter(X_train[:, 0], X_train[:, 1], color=np.where(y_train == 0, 'r', 'b'))
ax[1].scatter(Xtr_s[:, 0], Xtr_s[:, 1], color=np.where(y_train == 0, 'r', 'b'))
ax[2].scatter(Xtr_r[:, 0], Xtr_r[:, 1], color=np.where(y_train == 0, 'r', 'b'))
ax[0].set_title("Unscaled data")
ax[1].set_title("After standard scaling (zoomed in)")
ax[2].set_title("After robust scaling (zoomed in)")
# for the scaled data, we zoom in to the data center (outlier can't be seen!)
for a in ax[1:]:
a.set_xlim(-3, 3)
a.set_ylim(-3, 3)
plt.show()
# Classify using kNN
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(Xtr_s, y_train)
acc_s = knn.score(Xte_s, y_test)
print("Test set accuracy using standard scaler: %.3f" % acc_s)
knn.fit(Xtr_r, y_train)
acc_r = knn.score(Xte_r, y_test)
print("Test set accuracy using robust scaler: %.3f" % acc_r)
Explanation: Robust scaling
Standardization of a dataset is a common requirement for many machine learning estimators.
Typically this is done by removing the mean and scaling to unit variance.
For similar reasons as before, outliers can influence the sample mean / variance in a negative way.
In such cases, the median and the interquartile range often give better results.
End of explanation
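A minimal sketch of why the median and interquartile range resist a single outlier while the mean and standard deviation do not (illustrative numbers only).
x = np.array([1.0, 2.0, 3.0, 4.0, 1000.0])   # one extreme value
print(x.mean(), x.std())                      # both inflated by the outlier
print(np.median(x), np.percentile(x, 75) - np.percentile(x, 25))   # 3.0 2.0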
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
# Generate 3 blobs with 2 classes where the second blob contains
# half positive samples and half negative samples. Probability in this
# blob is therefore 0.5.
X, y = make_blobs(n_samples=10000, n_features=2, cluster_std=1.0,
centers=[(-5, -5), (0, 0), (5, 5)], shuffle=False)
y[:len(X) // 2] = 0
y[len(X) // 2:] = 1
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)
# Plot
for this_y, color in zip([0, 1], ["r", "b"]):
this_X = X_train[y_train == this_y]
plt.scatter(this_X[:, 0], this_X[:, 1], c=color, alpha=0.2, label="Class %s" % this_y)
plt.legend(loc="best")
plt.title("Data")
plt.show()
from sklearn.naive_bayes import GaussianNB
from sklearn.calibration import CalibratedClassifierCV
# Without calibration
clf = GaussianNB()
clf.fit(X_train, y_train) # GaussianNB itself does not support sample-weights
prob_pos_clf = clf.predict_proba(X_test)[:, 1]
# With isotonic calibration
clf_isotonic = CalibratedClassifierCV(clf, cv=2, method='isotonic')
clf_isotonic.fit(X_train, y_train)
prob_pos_isotonic = clf_isotonic.predict_proba(X_test)[:, 1]
# Plot
order = np.lexsort((prob_pos_clf, ))
plt.plot(prob_pos_clf[order], 'r', label='No calibration')
plt.plot(prob_pos_isotonic[order], 'b', label='Isotonic calibration')
plt.plot(np.linspace(0, y_test.size, 51)[1::2], y_test[order].reshape(25, -1).mean(1), 'k--', label=r'Empirical')
plt.xlabel("Instances sorted according to predicted probability "
"(uncalibrated GNB)")
plt.ylabel("P(y=1)")
plt.legend(loc="upper left")
plt.title("Gaussian naive Bayes probabilities")
plt.ylim([-0.05, 1.05])
plt.show()
Explanation: Calibration
In classification, you often want to predict not only the class label, but also the associated probability.
However, not all classifiers provide well-calibrated probabilities.
Thus, a separate calibration of predicted probabilities is often desirable as a postprocessing step.
End of explanation
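For completeness, a sketch of scikit-learn's other built-in calibration method (Platt/sigmoid scaling); it mirrors the isotonic call above with a different method argument.
# Sigmoid (Platt) calibration -- sketch mirroring the isotonic example above
clf_sigmoid = CalibratedClassifierCV(clf, cv=2, method='sigmoid')
clf_sigmoid.fit(X_train, y_train)
prob_pos_sigmoid = clf_sigmoid.predict_proba(X_test)[:, 1]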
questions?
Explanation: Summary
For robust and calibrated estimators:
- remove outliers before training;
- reduce variance by ensembling estimators;
- drive your analysis with loss functions that are robust to outliers;
- avoid the squared error loss!
- calibrate the output of your classifier if probabilities are important for your problem.
End of explanation |
743 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train a Neural Network Model to Classify Images
Learning Objectives
Undersand how to read and display image data
Pre-process image data
Build, compile, and train a neural network model
Make and verify predictions
Introduction
This lab trains a neural network model to classify images of clothing, such as sneakers and shirts. You will learn how to read and display image data, pre-process image data, build, compile, and train a neural network model, and make and verify predictions.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Step1: Import the Fashion MNIST dataset
This lab uses the Fashion MNIST dataset which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here
Step2: Loading the dataset returns four NumPy arrays
Step3: Explore the data
Let's explore the format of the dataset before training the model. The following shows there are 60,000 images in the training set, with each image represented as 28 x 28 pixels
Step4: Likewise, there are 60,000 labels in the training set
Step5: Each label is an integer between 0 and 9
Step6: There are 10,000 images in the test set. Again, each image is represented as 28 x 28 pixels
Step7: And the test set contains 10,000 images labels
Step8: Preprocess the data
The data must be preprocessed before training the network. If you inspect the first image in the training set, you will see that the pixel values fall in the range of 0 to 255
Step9: Scale these values to a range of 0 to 1 before feeding them to the neural network model. To do so, divide the values by 255. It's important that the training set and the testing set be preprocessed in the same way
Step10: To verify that the data is in the correct format and that you're ready to build and train the network, let's display the first 25 images from the training set and display the class name below each image.
Step11: Build the model
Building the neural network requires configuring the layers of the model, then compiling the model.
Set up the layers
The basic building block of a neural network is the layer. Layers extract representations from the data fed into them. Hopefully, these representations are meaningful for the problem at hand.
Most of deep learning consists of chaining together simple layers. Most layers, such as tf.keras.layers.Dense, have parameters that are learned during training.
Step12: The first layer in this network, tf.keras.layers.Flatten, transforms the format of the images from a two-dimensional array (of 28 by 28 pixels) to a one-dimensional array (of 28 * 28 = 784 pixels). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.
After the pixels are flattened, the network consists of a sequence of two tf.keras.layers.Dense layers. These are densely connected, or fully connected, neural layers. The first Dense layer has 128 nodes (or neurons). The second (and last) layer returns a logits array with length of 10. Each node contains a score that indicates the current image belongs to one of the 10 classes.
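A minimal sketch of the layer stack described above, using the standard tf.keras API and assuming the lab's usual imports (import tensorflow as tf; from tensorflow import keras); the actual cell in the lab may differ slightly.
# Sketch of the architecture described in this section
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),   # 28x28 image -> 784-element vector
    keras.layers.Dense(128, activation='relu'),   # densely connected hidden layer
    keras.layers.Dense(10)                        # logits, one score per class
])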
Compile the model
Before the model is ready for training, it needs a few more settings. These are added during the model's compile step
Step13: Train the model
Training the neural network model requires the following steps
Step14: As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.91 (or 91%) on the training data.
Evaluate accuracy
Next, compare how the model performs on the test dataset
Step15: It turns out that the accuracy on the test dataset is a little less than the accuracy on the training dataset. This gap between training accuracy and test accuracy represents overfitting. Overfitting happens when a machine learning model performs worse on new, previously unseen inputs than it does on the training data. An overfitted model "memorizes" the noise and details in the training dataset to a point where it negatively impacts the performance of the model on the new data. For more information, see the following
Step16: Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction
Step17: A prediction is an array of 10 numbers. They represent the model's "confidence" that the image corresponds to each of the 10 different articles of clothing. You can see which label has the highest confidence value
Step18: So, the model is most confident that this image is an ankle boot, or class_names[9]. Examining the test label shows that this classification is correct
Step19: Graph this to look at the full set of 10 class predictions.
Step20: Verify predictions
With the model trained, you can use it to make predictions about some images.
Let's look at the 0th image, predictions, and prediction array. Correct prediction labels are blue and incorrect prediction labels are red. The number gives the percentage (out of 100) for the predicted label.
Step21: Let's plot several images with their predictions. Note that the model can be wrong even when very confident.
Step22: Use the trained model
Finally, use the trained model to make a prediction about a single image.
Step23: tf.keras models are optimized to make predictions on a batch, or collection, of examples at once. Accordingly, even though you're using a single image, you need to add it to a list
Step24: Now predict the correct label for this image
Step25: keras.Model.predict returns a list of lists—one list for each image in the batch of data. Grab the predictions for our (only) image in the batch | Python Code:
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
Explanation: Train a Neural Network Model to Classify Images
Learning Objectives
Understand how to read and display image data
Pre-process image data
Build, compile, and train a neural network model
Make and verify predictions
Introduction
This lab trains a neural network model to classify images of clothing, such as sneakers and shirts. You will learn how to read and display image data, pre-process image data, build, compile, and train a neural network model, and make and verify predictions
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
End of explanation
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
Explanation: Import the Fashion MNIST dataset
This lab uses the Fashion MNIST dataset which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
<table>
<tr><td>
<img src="https://tensorflow.org/images/fashion-mnist-sprite.png"
alt="Fashion MNIST sprite" width="600">
</td></tr>
<tr><td align="center">
<b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>
</td></tr>
</table>
Fashion MNIST is intended as a drop-in replacement for the classic MNIST dataset—often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc.) in a format identical to that of the articles of clothing you'll use here.
This guide uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code.
Here, 60,000 images are used to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST directly from TensorFlow. Import and load the Fashion MNIST data directly from TensorFlow:
End of explanation
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
Explanation: Loading the dataset returns four NumPy arrays:
The train_images and train_labels arrays are the training set—the data the model uses to learn.
The model is tested against the test set, the test_images, and test_labels arrays.
The images are 28x28 NumPy arrays, with pixel values ranging from 0 to 255. The labels are an array of integers, ranging from 0 to 9. These correspond to the class of clothing the image represents:
<table>
<tr>
<th>Label</th>
<th>Class</th>
</tr>
<tr>
<td>0</td>
<td>T-shirt/top</td>
</tr>
<tr>
<td>1</td>
<td>Trouser</td>
</tr>
<tr>
<td>2</td>
<td>Pullover</td>
</tr>
<tr>
<td>3</td>
<td>Dress</td>
</tr>
<tr>
<td>4</td>
<td>Coat</td>
</tr>
<tr>
<td>5</td>
<td>Sandal</td>
</tr>
<tr>
<td>6</td>
<td>Shirt</td>
</tr>
<tr>
<td>7</td>
<td>Sneaker</td>
</tr>
<tr>
<td>8</td>
<td>Bag</td>
</tr>
<tr>
<td>9</td>
<td>Ankle boot</td>
</tr>
</table>
Each image is mapped to a single label. Since the class names are not included with the dataset, store them here to use later when plotting the images:
End of explanation
train_images.shape
Explanation: Explore the data
Let's explore the format of the dataset before training the model. The following shows there are 60,000 images in the training set, with each image represented as 28 x 28 pixels:
End of explanation
len(train_labels)
Explanation: Likewise, there are 60,000 labels in the training set:
End of explanation
train_labels
Explanation: Each label is an integer between 0 and 9:
End of explanation
test_images.shape
Explanation: There are 10,000 images in the test set. Again, each image is represented as 28 x 28 pixels:
End of explanation
len(test_labels)
Explanation: And the test set contains 10,000 image labels:
End of explanation
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
Explanation: Preprocess the data
The data must be preprocessed before training the network. If you inspect the first image in the training set, you will see that the pixel values fall in the range of 0 to 255:
End of explanation
train_images = train_images / 255.0
test_images = test_images / 255.0
Explanation: Scale these values to a range of 0 to 1 before feeding them to the neural network model. To do so, divide the values by 255. It's important that the training set and the testing set be preprocessed in the same way:
End of explanation
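# After scaling, every pixel value should lie in [0, 1]; a quick sanity check (not in the original lab):
print(train_images.min(), train_images.max())  # expect 0.0 and 1.0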
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
Explanation: To verify that the data is in the correct format and that you're ready to build and train the network, let's display the first 25 images from the training set and display the class name below each image.
End of explanation
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10)
])
Explanation: Build the model
Building the neural network requires configuring the layers of the model, then compiling the model.
Set up the layers
The basic building block of a neural network is the layer. Layers extract representations from the data fed into them. Hopefully, these representations are meaningful for the problem at hand.
Most of deep learning consists of chaining together simple layers. Most layers, such as tf.keras.layers.Dense, have parameters that are learned during training.
End of explanation
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: The first layer in this network, tf.keras.layers.Flatten, transforms the format of the images from a two-dimensional array (of 28 by 28 pixels) to a one-dimensional array (of 28 * 28 = 784 pixels). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.
After the pixels are flattened, the network consists of a sequence of two tf.keras.layers.Dense layers. These are densely connected, or fully connected, neural layers. The first Dense layer has 128 nodes (or neurons). The second (and last) layer returns a logits array with length of 10. Each node contains a score that indicates the current image belongs to one of the 10 classes.
Compile the model
Before the model is ready for training, it needs a few more settings. These are added during the model's compile step:
Loss function —This measures how accurate the model is during training. You want to minimize this function to "steer" the model in the right direction.
Optimizer —This is how the model is updated based on the data it sees and its loss function.
Metrics —Used to monitor the training and testing steps. The following example uses accuracy, the fraction of the images that are correctly classified.
End of explanation
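# Optional sanity check (an addition, not part of the original lab): the Dense(128) layer should
# have 28*28*128 + 128 = 100,480 parameters and the final Dense(10) layer 128*10 + 10 = 1,290.
model.summary()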
model.fit(train_images, train_labels, epochs=10)
Explanation: Train the model
Training the neural network model requires the following steps:
Feed the training data to the model. In this example, the training data is in the train_images and train_labels arrays.
The model learns to associate images and labels.
You ask the model to make predictions about a test set—in this example, the test_images array.
Verify that the predictions match the labels from the test_labels array.
Feed the model
To start training, call the model.fit method—so called because it "fits" the model to the training data:
End of explanation
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
Explanation: As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.91 (or 91%) on the training data.
Evaluate accuracy
Next, compare how the model performs on the test dataset:
End of explanation
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)
Explanation: It turns out that the accuracy on the test dataset is a little less than the accuracy on the training dataset. This gap between training accuracy and test accuracy represents overfitting. Overfitting happens when a machine learning model performs worse on new, previously unseen inputs than it does on the training data. An overfitted model "memorizes" the noise and details in the training dataset to a point where it negatively impacts the performance of the model on the new data. For more information, see the following:
* Demonstrate overfitting
* Strategies to prevent overfitting
Make predictions
With the model trained, you can use it to make predictions about some images.
The model outputs linear values, called logits. Attach a softmax layer to convert the logits to probabilities, which are easier to interpret.
End of explanation
predictions[0]
Explanation: Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction:
End of explanation
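# Because of the softmax layer, each prediction vector should sum to (approximately) 1:
print(np.sum(predictions[0]))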
np.argmax(predictions[0])
Explanation: A prediction is an array of 10 numbers. They represent the model's "confidence" that the image corresponds to each of the 10 different articles of clothing. You can see which label has the highest confidence value:
End of explanation
test_labels[0]
Explanation: So, the model is most confident that this image is an ankle boot, or class_names[9]. Examining the test label shows that this classification is correct:
End of explanation
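# A small readability check: map both the predicted and the true label back to class names.
print('predicted:', class_names[np.argmax(predictions[0])],
      '| true:', class_names[test_labels[0]])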
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array, true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array, true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
Explanation: Graph this to look at the full set of 10 class predictions.
End of explanation
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
Explanation: Verify predictions
With the model trained, you can use it to make predictions about some images.
Let's look at the 0th image, predictions, and prediction array. Correct prediction labels are blue and incorrect prediction labels are red. The number gives the percentage (out of 100) for the predicted label.
End of explanation
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
Explanation: Let's plot several images with their predictions. Note that the model can be wrong even when very confident.
End of explanation
# Grab an image from the test dataset.
img = test_images[1]
print(img.shape)
Explanation: Use the trained model
Finally, use the trained model to make a prediction about a single image.
End of explanation
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
Explanation: tf.keras models are optimized to make predictions on a batch, or collection, of examples at once. Accordingly, even though you're using a single image, you need to add it to a list:
End of explanation
predictions_single = probability_model.predict(img)
print(predictions_single)
plot_value_array(1, predictions_single[0], test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
Explanation: Now predict the correct label for this image:
End of explanation
np.argmax(predictions_single[0])
Explanation: keras.Model.predict returns a list of lists—one list for each image in the batch of data. Grab the predictions for our (only) image in the batch:
End of explanation |
744 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Applying classifiers to Shalek2013 and Macaulay2016
We're going to use the classifier knowledge that we've learned so far and apply it to the shalek2013 and macaulay2016 datasets.
For the GO analysis, we'll need a few other packages
Step3: Utility functions for gene ontology and SVM decision boundary plotting
Step4: Read in the Shalek2013 data
Step5: Side note
Step6: Assign the variable lps_response_genes based on the gene ids pulled out from this subset
Step7: For this analysis We want to compare the difference between the "mature" and "immature" cells in the Shalek2013 data.
Step8: Use only the genes that are substantially expressed in single cells
Step9: Now because computers only understand numbers, we'll convert the category label of "mature" and "immature" into integers to a using a LabelEncoder. Let's look at that column again, only for mature cells
Step10: Run the classifier!!
Yay so now we can run a classifier!
Step11: We'll use PCA or ICA to reduce our data for visualizing the SVM decision boundary. Stick to 32 or fewer components because the next steps will die if you use more than 32. Also, this n_components variable will be used later so pay attention
Step12: Let's add the group identifier here for plotting
Step13: And plot our components in
Step14: Now we'll make a dataframe of 20 equally spaced intervals to show the full range of the data
Step15: You'll notice that the top (head()) has the minimum values and the bottom (tail()) has the maximum values
Step16: Just to convince ourselves that this actually shows the range of all values, lets plot the smushed intervals and the smushed data in teh same spot
Step17: Now we'll make a 2-dimensional grid of the whole space so we can plot the decision boundary
Step18: Let's plot the grid so we can see it!
Step19: Convert the smushed area into unsmushed high dimensional space
Step20: Get the surface of the decision function
Step21: Evaluating classifiers through Gene Ontology (GO) Enrichment
Gene ontology is a tree (aka directed acyclic graph or "dag") of gene annotations. The topmost node is the most general, and the bottommost nodes are the most specific. Here is an example GO graph
Perform GO enrichment analysis (GOEA)
GOEA Step 1
Step22: GOEA Step 2
Step23: GOEA Step 3
Step24: GOEA Step 4
Step25: GOEA Step 5
Step26: Exercise 1
Try the same analysis, but use ICA instead of PCA.
How does that change the classification?
How does it change the enriched genes?
Are the cells closer or farther from the decision boundary?
Is that a "better" or "worse" classification? Why?
Why does the reduction algorithm affect the visualization of the classification?
Could you use MDS or t-SNE for plotting of the classifier boundary? Why or why not?
Try the same analysis, but use the "LPS Response" genes and a dimensionality reduction algorithm of your choice. (... how do you subset only certain columns out of the dataframe?)
How does that change the classification?
How does it change the enriched genes?
Are the cells closer or farther from the decision boundary?
Is that a "better" or "worse" classification? Why?
For (1) and (2) above, also try using a radial basis kernel (kernel="rbf") for SVC.
How does that change the classification?
How does it change the enriched genes?
Are the cells closer or farther from the decision boundary?
Is that a "better" or "worse" classification? Why?
Decision trees
Step27: Macaulay2016 | Python Code:
# Alphabetical order is standard
# We're doing "import superlongname as abbrev" for our laziness - this way we don't have to type out the whole thing each time.
# From python standard library
import collections
# Python plotting library
import matplotlib.pyplot as plt
# Numerical python library (pronounced "num-pie")
import numpy as np
# Dataframes in Python
import pandas as pd
# Statistical plotting library we'll use
import seaborn as sns
sns.set(style='whitegrid')
# Label processing
from sklearn import preprocessing
# Matrix decomposition
from sklearn.decomposition import PCA, FastICA
# Matrix decomposition
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
# Manifold learning
from sklearn.manifold import MDS, TSNE
# Gene ontology
import goatools
import mygene
# Initialize the "mygene.info" (http://mygene.info/) interface
mg = mygene.MyGeneInfo()
# This is necessary to show the plotted figures inside the notebook -- "inline" with the notebook cells
%matplotlib inline
Explanation: Applying classifiers to Shalek2013 and Macaulay2016
We're going to use the classifier knowledge that we've learned so far and apply it to the shalek2013 and macaulay2016 datasets.
For the GO analysis, we'll need a few other packages:
mygene for looking up the gene ontology categories of genes
goatools for performing gene ontology enrichment analysis
fishers_exact_test for goatools
Use the following commands at your terminal to install the packages. Some of them are on Github so it's important to get the whole command right.
$ pip install mygene
$ pip install git+git://github.com/olgabot/goatools.git
$ pip install git+https://github.com/brentp/fishers_exact_test.git
End of explanation
def plot_svc_decision_function(clf, ax=None):
    """Plot the decision function for a 2D SVC."""
if ax is None:
ax = plt.gca()
x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)
y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)
Y, X = np.meshgrid(y, x)
P = np.zeros_like(X)
for i, xi in enumerate(x):
for j, yj in enumerate(y):
P[i, j] = clf.decision_function([[xi, yj]])
# plot the margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
GO_KEYS = 'go.BP', 'go.MF', 'go.CC'
def parse_mygene_output(mygene_output):
    """Convert mygene.querymany output to a gene id to go term mapping (dictionary)

    Parameters
    ----------
    mygene_output : dict or list
        Dictionary (returnall=True) or list (returnall=False) of
        output from mygene.querymany

    Output
    ------
    gene_name_to_go : dict
        Mapping of gene name to a set of GO ids
    """
# if "returnall=True" was specified, need to get just the "out" key
if isinstance(mygene_output, dict):
mygene_output = mygene_output['out']
gene_name_to_go = collections.defaultdict(set)
for line in mygene_output:
gene_name = line['query']
for go_key in GO_KEYS:
try:
go_terms = line[go_key]
except KeyError:
continue
if isinstance(go_terms, dict):
go_ids = set([go_terms['id']])
else:
go_ids = set(x['id'] for x in go_terms)
gene_name_to_go[gene_name] |= go_ids
return gene_name_to_go
Explanation: Utility functions for gene ontology and SVM decision boundary plotting
End of explanation
metadata = pd.read_csv('../data/shalek2013/metadata.csv',
# Sets the first (Python starts counting from 0 not 1) column as the row names
index_col=0)
expression = pd.read_csv('../data/shalek2013/expression.csv',
# Sets the first (Python starts counting from 0 not 1) column as the row names
index_col=0)
expression_feature = pd.read_csv('../data/shalek2013/expression_feature.csv',
# Sets the first (Python starts counting from 0 not 1) column as the row names
index_col=0)
# creating new column indicating color
metadata['color'] = metadata['maturity'].map(
lambda x: 'MediumTurquoise' if x == 'immature' else 'Teal')
metadata.loc[metadata['pooled'], 'color'] = 'black'
# Create a column indicating both maturity and pooled for coloring with seaborn, e.g. sns.pairplot
metadata['group'] = metadata['maturity']
metadata.loc[metadata['pooled'], 'group'] = 'pooled'
# Create a palette and ordering for using with sns.pairplot
palette = ['MediumTurquoise', 'Teal', 'black']
order = ['immature', 'mature', 'pooled']
metadata
Explanation: Read in the Shalek2013 data
End of explanation
subset = expression_feature.query('gene_category == "LPS Response"')
subset.head()
Explanation: Side note: getting LPS response genes using query
Get the "LPS response genes" using a query:
End of explanation
lps_response_genes = subset.index
lps_response_genes
Explanation: Assign the variable lps_response_genes based on the gene ids pulled out from this subset:
End of explanation
singles_ids = [x for x in expression.index if x.startswith('S')]
singles = expression.loc[singles_ids]
singles.shape
Explanation: For this analysis we want to compare the difference between the "mature" and "immature" cells in the Shalek2013 data.
End of explanation
singles = singles.loc[:, (singles > 1).sum() >= 3]
singles.shape
Explanation: Use only the genes that are substantially expressed in single cells
End of explanation
singles_maturity = metadata.loc[singles.index, 'maturity']
singles_maturity
# Instantiate the encoder
encoder = preprocessing.LabelEncoder()
# Get number of categories and transform "mature"/"immature" to numbers
target = encoder.fit_transform(singles_maturity)
target
Explanation: Now because computers only understand numbers, we'll convert the category label of "mature" and "immature" into integers using a LabelEncoder. Let's look at that column again, only for mature cells:
End of explanation
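# A quick look at the mapping the encoder learned (e.g. immature -> 0, mature -> 1):
dict(zip(encoder.classes_, encoder.transform(encoder.classes_)))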
from sklearn.svm import SVC
classifier = SVC(kernel='linear')
classifier.fit(singles, target)
Explanation: Run the classifier!!
Yay so now we can run a classifier!
End of explanation
n_components = 3
smusher = PCA(n_components=n_components)
smushed = pd.DataFrame(smusher.fit_transform(singles), index=singles.index)
print(smushed.shape)
smushed.head()
Explanation: We'll use PCA or ICA to reduce our data for visualizing the SVM decision boundary. Stick to 32 or fewer components because the next steps will die if you use more than 32. Also, this n_components variable will be used later so pay attention :)
End of explanation
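# For PCA (not ICA) we can also check how much variance the retained components explain:
print(smusher.explained_variance_ratio_)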
smushed_with_group = smushed.join(metadata['group'])
smushed_with_group
Explanation: Let's add the group identifier here for plotting:
End of explanation
sns.pairplot(smushed_with_group, hue='group', palette=palette,
hue_order=order, plot_kws=dict(s=100, edgecolor='white', linewidth=2))
Explanation: And plot our components in
End of explanation
n_intervals = 20
smushed_intervals = pd.DataFrame(smushed).apply(lambda x: pd.Series(np.linspace(x.min(), x.max(), n_intervals)))
print(smushed_intervals.shape)
smushed_intervals.head()
Explanation: Now we'll make a dataframe of 20 equally spaced intervals to show the full range of the data:
End of explanation
smushed_intervals.tail()
Explanation: You'll notice that the top (head()) has the minimum values and the bottom (tail()) has the maximum values:
End of explanation
fig, ax = plt.subplots()
ax.scatter(smushed_intervals[0], smushed_intervals[1], color='pink')
ax.scatter(smushed[0], smushed[1], color=metadata['color'])
Explanation: Just to convince ourselves that this actually shows the range of all values, let's plot the smushed intervals and the smushed data in the same spot:
End of explanation
low_d_grid = np.meshgrid(*[smushed_intervals[col] for col in smushed_intervals])
print(len(low_d_grid))
print([x.shape for x in low_d_grid])
Explanation: Now we'll make a 2-dimensional grid of the whole space so we can plot the decision boundary
End of explanation
fig, ax = plt.subplots()
ax.scatter(low_d_grid[0], low_d_grid[1], color='pink')
ax.scatter(smushed[0], smushed[1], color=metadata['color'])
new_nrows = n_intervals**n_components
new_ncols = n_components
low_dimensional_vectors = np.concatenate([x.flatten() for x in low_d_grid]).reshape(new_nrows, new_ncols, order='F')
low_dimensional_vectors.shape
Explanation: Let's plot the grid so we can see it!
End of explanation
high_dimensional_vectors = smusher.inverse_transform(low_dimensional_vectors)
high_dimensional_vectors.shape
Explanation: Convert the smushed area into unsmushed high dimensional space
End of explanation
Z = classifier.decision_function(high_dimensional_vectors)
print(Z.shape)
Z = Z.reshape(low_d_grid[0].shape)
Z.shape
Z[1].shape
import itertools
pairgrid = sns.pairplot(smushed_with_group, hue='group')
for i, j in itertools.permutations(range(n_components), 2):
ax = pairgrid.axes[i, j]
Z_smushed = pd.DataFrame(smusher.fit_transform(Z[i, j]))
print(Z_smushed.shape)
Z_smushed.head()
ax.contour(low_d_grid[i], low_d_grid[j], Z_smushed, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
# The block below is leftover scratch from an earlier draft -- `reduced_data`, `X` and `Y`
# are never defined in this notebook, so it is kept commented out for reference:
# fig, ax = plt.subplots()
# ax.scatter(reduced_data[:, 0], reduced_data[:, 1], c=target, cmap='Dark2')
# ax.contour(X, Y, Z, colors='k',
#            levels=[-1, 0, 1], alpha=0.5,
#            linestyles=['--', '-', '--'])
# np.reshape?  # IPython help lookup
Explanation: Get the surface of the decision function
End of explanation
from goatools.base import download_go_basic_obo
obo_fname = download_go_basic_obo()
# Show the filename
obo_fname
Explanation: Evaluating classifiers through Gene Ontology (GO) Enrichment
Gene ontology is a tree (aka directed acyclic graph or "dag") of gene annotations. The topmost node is the most general, and the bottommost nodes are the most specific. Here is an example GO graph
Perform GO enrichment analysis (GOEA)
GOEA Step 1: Download GO graph file of "obo" type (same for all species)
This will download the file "go-basic.obo" if it doesn't already exist. This only needs to be done once.
End of explanation
obo_dag = goatools.obo_parser.GODag(obo_file='go-basic.obo')
Explanation: GOEA Step 2: Create the GO graph (same for all species)
End of explanation
mygene_output = mg.querymany(expression.columns,
scopes='symbol', fields=GO_KEYS, species='mouse',
returnall=True)
gene_name_to_go = parse_mygene_output(mygene_output)
Explanation: GOEA Step 3: Get gene ID to GO id mapping (species-specific and experiment-specific)
Here we are establishing the background for our GOEA. Defining your background is very important because, for example, there are lots of neural genes, so if you use all human genes as background in your study of which genes are upregulated in Neuron Type X vs Neuron Type Y, you'll get a bunch of neuron genes (which is true) but not the smaller differences between X and Y. Typically, you use all expressed genes as the background.
For our data, we can access all expressed genes very simply by getting the column names in the dataframe: expression.columns.
End of explanation
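# Peek at a few entries of the gene -> GO id mapping we just built, as a sanity check:
dict(list(gene_name_to_go.items())[:3])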
go_enricher = goatools.GOEnrichmentStudy(expression.columns, gene_name_to_go, obo_dag)
Explanation: GOEA Step 4: Create a GO enrichment calculator object go_enricher (species- and experiment-specific)
In this step, we are using the two objects we've created (obo_dag from Step 2 and gene_name_to_go from Step 3) plus the gene ids to create a go_enricher object
End of explanation
# The original cell was left incomplete ("genes_of_interest =" and an undefined `go` object).
# As one reasonable choice, use the LPS response genes defined earlier as the study set and
# the `go_enricher` object created above:
genes_of_interest = lps_response_genes
results = go_enricher.run_study(list(genes_of_interest))
go_enrichment = pd.DataFrame([r.__dict__ for r in results])
go_enrichment.head()
# ------------------------------------------------------------------------------
# Leftover exploratory scratch from earlier drafts of this notebook. Several of
# the names referenced below (two_d_space_v1, two_d_space, gene_annotation,
# assoc_dict, cl, xx, reduced_data, X, Y) are never defined here, so those lines
# are kept commented out for reference only.
# ------------------------------------------------------------------------------
# import pandas.util.testing as pdt
# pdt.assert_numpy_array_equal(two_d_space_v1, two_d_space_v2)
# two_d_space.shape
# plt.scatter(two_d_space[:, 0], two_d_space[:, 1], color='black')
expression.index[:10]
# clf = ExtraTreesClassifier(n_estimators=100000, n_jobs=-1, verbose=1)
expression.index.duplicated()
expression.drop_duplicates()
# assoc = pd.read_table('danio-rerio-gene-ontology.txt').dropna()
# assoc_df = assoc.groupby('Ensembl Gene ID').agg(lambda s: ';'.join(s))
# assoc_s = assoc_df['GO Term Accession'].apply(lambda s: set(s.split(';')))
# assoc_dict = assoc_s.to_dict()
# import goatools
# cl = gene_annotation.sort(col, ascending=False)[gene_annotation[col] > 5e-4].index
# g = goatools.GOEnrichmentStudy(list(gene_annotation.index), assoc_dict, obo_dag, study=list(cl))
# for r in g.results[:25]:
#     print(r.goterm.id, '{:.2}'.format(r.p_bonferroni), r.ratio_in_study, r.goterm.name, r.goterm.namespace)
# unsmushed = smusher.inverse_transform(two_d_space)
# Z = classifier.decision_function(unsmushed)
# Z = Z.reshape(xx.shape)
# fig, ax = plt.subplots()
# ax.scatter(reduced_data[:, 0], reduced_data[:, 1], c=target, cmap='Dark2')
# ax.contour(X, Y, Z, colors='k',
#            levels=[-1, 0, 1], alpha=0.5,
#            linestyles=['--', '-', '--'])
Explanation: GOEA Step 5: Calculate go enrichment!!! (species- and experiment-specific)
Now we are ready to run go enrichment! Let's take our genes of interest and run the enrichment study on them:
End of explanation
def visualize_tree(estimator, X, y, smusher, boundaries=True,
xlim=None, ylim=None):
estimator.fit(X, y)
smushed = smusher.fit_transform(X)
if xlim is None:
xlim = (smushed[:, 0].min() - 0.1, smushed[:, 0].max() + 0.1)
if ylim is None:
ylim = (smushed[:, 1].min() - 0.1, smushed[:, 1].max() + 0.1)
x_min, x_max = xlim
y_min, y_max = ylim
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100),
np.linspace(y_min, y_max, 100))
two_d_space = np.c_[xx.ravel(), yy.ravel()]
unsmushed = smusher.inverse_transform(two_d_space)
Z = estimator.predict(unsmushed)
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, alpha=0.2, cmap='Paired')
plt.clim(y.min(), y.max())
# Plot also the training points
plt.scatter(smushed[:, 0], smushed[:, 1], c=y, s=50, cmap='Paired')
plt.axis('off')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.clim(y.min(), y.max())
# Plot the decision boundaries
def plot_boundaries(i, xlim, ylim):
if i < 0:
return
tree = estimator.tree_
if tree.feature[i] == 0:
plt.plot([tree.threshold[i], tree.threshold[i]], ylim, '-k')
plot_boundaries(tree.children_left[i],
[xlim[0], tree.threshold[i]], ylim)
plot_boundaries(tree.children_right[i],
[tree.threshold[i], xlim[1]], ylim)
elif tree.feature[i] == 1:
plt.plot(xlim, [tree.threshold[i], tree.threshold[i]], '-k')
plot_boundaries(tree.children_left[i], xlim,
[ylim[0], tree.threshold[i]])
plot_boundaries(tree.children_right[i], xlim,
[tree.threshold[i], ylim[1]])
if boundaries:
plot_boundaries(0, plt.xlim(), plt.ylim())
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier()
from sklearn.decomposition import PCA, FastICA
from sklearn.manifold import TSNE, MDS
smusher = PCA(n_components=2)
# reduced_data = smusher.fit_transform(singles+1)
visualize_tree(classifier, singles, np.array(target), smusher)
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
classifier = RandomForestClassifier()
smusher = PCA(n_components=2)
# reduced_data = smusher.fit_transform(singles+1)
visualize_tree(classifier, singles, np.array(target), smusher, boundaries=False)
classifier = ExtraTreesClassifier()
smusher = PCA(n_components=2)
# reduced_data = smusher.fit_transform(singles+1)
visualize_tree(classifier, singles, np.array(target), smusher, boundaries=False)
Explanation: Exercise 1
Try the same analysis, but use ICA instead of PCA.
How does that change the classification?
How does it change the enriched genes?
Are the cells closer or farther from the decision boundary?
Is that a "better" or "worse" classification? Why?
Why does the reduction algorithm affect the visualization of the classification?
Could you use MDS or t-SNE for plotting of the classifier boundary? Why or why not?
Try the same analysis, but use the "LPS Response" genes and a dimensionality reduction algorithm of your choice. (... how do you subset only certain columns out of the dataframe?)
How does that change the classification?
How does it change the enriched genes?
Are the cells closer or farther from the decision boundary?
Is that a "better" or "worse" classification? Why?
For (1) and (2) above, also try using a radial basis kernel (kernel="rbf") for SVC.
How does that change the classification?
How does it change the enriched genes?
Are the cells closer or farther from the decision boundary?
Is that a "better" or "worse" classification? Why?
Decision trees
End of explanation
pd.options.display.max_columns = 50
macaulay2016_metadata = pd.read_csv('../4._Case_Study/macaulay2016/sample_info_qc.csv', index_col=0)
macaulay2016_metadata.head()
macaulay2016_cluster_names = tuple(sorted(macaulay2016_metadata['cluster'].unique()))
macaulay2016_cluster_names
macaulay2016_target = macaulay2016_metadata['cluster'].map(lambda x: macaulay2016_cluster_names.index(x))
macaulay2016_target
macaulay2016_expression = pd.read_csv('../4._Case_Study/macaulay2016/gene_expression_s.csv', index_col=0).T
macaulay2016_expression.head()
macaulay2016_expression_filtered = macaulay2016_expression[[x for x in macaulay2016_expression if x.startswith("ENS")]]
macaulay2016_expression_filtered.shape
macaulay2016_expression_filtered = macaulay2016_expression_filtered.loc[macaulay2016_metadata.index]
macaulay2016_expression_filtered = 1e6*macaulay2016_expression_filtered.divide(macaulay2016_expression_filtered.sum(axis=1), axis=0)
macaulay2016_expression_filtered.head()
macaulay2016_expression_filtered = np.log10(macaulay2016_expression_filtered+1)
macaulay2016_expression_filtered.head()
macaulay2016_expression_filtered = macaulay2016_expression_filtered.loc[:, (macaulay2016_expression_filtered > 1).sum() >=3]
macaulay2016_expression_filtered.shape
# classifier = SVC(kernel='linear')
# classifier = DecisionTreeClassifier(max_depth=10)
classifier = ExtraTreesClassifier(n_estimators=1000)
classifier.fit(macaulay2016_expression_filtered, macaulay2016_target)
smusher = FastICA(n_components=2, random_state=0)
smushed_data = smusher.fit_transform(macaulay2016_expression_filtered)
x_min, x_max = smushed_data[:, 0].min(), smushed_data[:, 0].max()
y_min, y_max = smushed_data[:, 1].min(), smushed_data[:, 1].max()
delta_x = 0.05 * abs(x_max - x_min)
delta_y = 0.05 * abs(x_max - x_min)
x_min -= delta_x
x_max += delta_x
y_min -= delta_y
y_max += delta_y
X = np.linspace(x_min, x_max, 100)
Y = np.linspace(y_min, y_max, 100)
xx, yy = np.meshgrid(X, Y)
two_d_space = np.c_[xx.ravel(), yy.ravel()]
two_d_space
high_dimensional_space = smusher.inverse_transform(two_d_space)
# Get the class boundaries
Z = classifier.predict(high_dimensional_space)
import matplotlib as mpl
macaulay2016_metadata['cluster_color_hex'] = macaulay2016_metadata['cluster_color'].map(lambda x: mpl.colors.rgb2hex(eval(x)))
int_to_cluster_name = dict(zip(range(len(macaulay2016_cluster_names)), macaulay2016_cluster_names))
int_to_cluster_name
cluster_name_to_color = dict(zip(macaulay2016_metadata['cluster'], macaulay2016_metadata['cluster_color_hex']))
cluster_name_to_color
macaulay2016_palette = [mpl.colors.hex2color(cluster_name_to_color[int_to_cluster_name[i]])
for i in range(len(macaulay2016_cluster_names))]
macaulay2016_palette
cmap = mpl.colors.ListedColormap(macaulay2016_palette)
cmap
x_min, x_max
y = macaulay2016_target
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, alpha=0.2, cmap=cmap)
plt.clim(y.min(), y.max())
# Plot also the training points
plt.scatter(smushed_data[:, 0], smushed_data[:, 1], s=50, color=macaulay2016_metadata['cluster_color_hex'],
edgecolor='k') #c=macaulay2016_target, s=50, cmap='Set2')
plt.axis('off')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.clim(y.min(), y.max())
smusher = FastICA(n_components=4, random_state=354)
smushed_data = pd.DataFrame(smusher.fit_transform(macaulay2016_expression_filtered))
# x_min, x_max = smushed_data[:, 0].min(), smushed_data[:, 0].max()
# y_min, y_max = smushed_data[:, 1].min(), smushed_data[:, 1].max()
# delta_x = 0.05 * abs(x_max - x_min)
# delta_y = 0.05 * abs(x_max - x_min)
# x_min -= delta_x
# x_max += delta_x
# y_min -= delta_y
# y_max += delta_y
# X = np.linspace(x_min, x_max, 100)
# Y = np.linspace(y_min, y_max, 100)
# xx, yy = np.meshgrid(X, Y)
# low_dimensional_space = np.c_[xx.ravel(), yy.ravel()]
# low_dimensional_space
smushed_data.max() - smushed_data.min()
grid = smushed_data.apply(lambda x: pd.Series(np.linspace(x.min(), x.max(), 50)))
grid.head()
# grid = [x.ravel() for x in grid]
# grid
# low_dimensional_space = np.concatenate(grid, axis=0)
# low_dimensional_space.shape
# # low_dimensional_space = low_dimensional_space.reshape(shape)
x1, x2, x3, x4 = np.meshgrid(*[grid[col] for col in grid])
low_dimensional_space = np.c_[x1.ravel(), x2.ravel(), x3.ravel(), x4.ravel()]
high_dimensional_space = smusher.inverse_transform(low_dimensional_space)
# The original line was truncated ("... = macau"); presumably the cluster labels were intended:
smushed_data['hue'] = macaulay2016_metadata['cluster'].values
sns.pairplot(smushed_data, hue='hue')
Explanation: Macaulay2016
End of explanation |
745 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing test-statistics
Step1: First, data following a (zero-inflated) negative binomial (ZINB) distribution is created for testing purposes. Test size and distribution parameters can be specified.
For a specified number of marker genes in a cluster, distribution of these genes follows a different ZINB distribution. We use the following notation
Step2: Create data.
Both sample names and variable names are simply integers starting from 0.
Step3: Cluster according to true grouping
Step4: Testing
Case 1
Step5: As can be seen above, not on only does the wilcoxon-rank-sum test detect all marker genes, but there is also a clear difference to all other genes in ranking.
Case 2
Step6: This parameter initialization leads to the following expectations/ variances
Step7: With smaller difference in variance, still all marker genes are detected, but less clearly.
Case 3
Step8: This parameter initialization leads to the following expectations/ variances | Python Code:
import numpy as np
import scanpy.api as sc
from anndata import AnnData
from numpy.random import negative_binomial, binomial, seed
Explanation: Comparing test-statistics: T-Test and Wilcoxon rank sum test for generic Zero-Inflated Negative Binomial Distribution
End of explanation
seed(1234)
# n_cluster needs to be smaller than n_simulated_cells, n_marker_genes needs to be smaller than n_simulated_genes
n_simulated_cells=1000
n_simulated_genes=100
n_cluster=100
n_marker_genes=10
# Specify parameter between 0 and 1 for zero_inflation and p, positive integer for r
# Differential gene expression is simulated using reference parameters for all cells/genes
# except for marker genes in the distinct cells.
reference_zero_inflation=0.15
reference_p=0.25
reference_n=2
cluster_zero_inflation=0.9
cluster_p=0.5
cluster_n=1
Explanation: First, data following a (zero-inflated) negative binomial (ZINB) distribution is created for testing purposes. Test size and distribution parameters can be specified.
For a specified number of marker genes in a cluster, distribution of these genes follows a different ZINB distribution. We use the following notation:
$z_r=\text{zero-inflation reference group}$
$z_c=\text{zero-inflation cluster}$
$p_r=\text{success probability reference group}$
$p_c=\text{success probability cluster}$
$n_r=\text{number of successful draws till stop, reference group}$
$n_c=\text{number of successful draws till stop, cluster}$
Let $X_r\sim NegBin(p_r,n_r)$ and $Y_r\sim Ber(z_r)$ be independent of each other; then $Z_r=Y_rX_r\sim ZINB(z_r,p_r,n_r)$ describes the distribution for all cells/genes except for the marker genes in a specified number of clustered cells, which are described using a $ZINB(z_c,p_c,n_c)$ distribution.
Especially, we have
$$\mathbb{E}[Z_r]=z_rn_r\frac{1-p_r}{p_r}$$
and using standard calculations for expectations and variance,
$$\mathbb{V}[Z_r]=z_r n_r\frac{1-p_r}{p_r^2}+z_r(1-z_r)\left(n_r\frac{1-p_r}{p_r}\right)^2$$
This form of the ZINB was taken from
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1293115 (Greene, 1994)
Tune parameters and create data
In order to demonstrate the superiority of the Wilcoxon rank-sum test in certain cases, parameter specifications have to be found that violate the t-test assumptions and therefore make it difficult to detect marker genes. In short: expectations should be the same, but variances should be different, for the simple reason that expectation differences will eventually be detected by the t-test as well (by the law of large numbers), even though the lack of normality means that this may require a larger sample.
The effect should increase with the magnitude of variance difference, as demonstrated below.
In order for the t-test to fail, little to no difference in mean should occur. This can be achieved by tuning the parameters using the formula for expectation specified above.
End of explanation
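# A small numeric check of the ZINB moments given in the text above (a sketch using the
# parameters defined earlier; expected values are roughly (0.9, 8.19) for the reference
# group and (0.9, 1.89) for the cluster):
def zinb_mean_var(z, p, n):
    mean = z * n * (1 - p) / p
    var = z * n * (1 - p) / p**2 + z * (1 - z) * (n * (1 - p) / p)**2
    return mean, var

print(zinb_mean_var(reference_zero_inflation, reference_p, reference_n))
print(zinb_mean_var(cluster_zero_inflation, cluster_p, cluster_n))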
adata=AnnData(np.multiply(binomial(1,reference_zero_inflation,(n_simulated_cells,n_simulated_genes)),
negative_binomial(reference_n,reference_p,(n_simulated_cells,n_simulated_genes))))
# adapt marker_genes for cluster
adata.X[0:n_cluster,0:n_marker_genes]=np.multiply(binomial(1,cluster_zero_inflation,(n_cluster,n_marker_genes)),
negative_binomial(cluster_n,cluster_p,(n_cluster,n_marker_genes)))
Explanation: Create data.
Both sample names and variable names are simply integers starting from 0.
End of explanation
import pandas as pd
smp='true_groups'
true_groups_int=np.ones((n_simulated_cells,))
true_groups_int[0:n_cluster]=0
true_groups=list()
for i,j in enumerate(true_groups_int):
true_groups.append(str(j))
adata.smp['true_groups']=pd.Categorical(true_groups, dtype='category')
adata.uns[smp + '_order']=np.asarray(['0','1'])
Explanation: Cluster according to true grouping:
The following code includes the true grouping such that it can be accessed by normal function calling of
sc.tl.rank_genes_groups(adata,'true_groups')
or, respectively,
sc.tl.rank_genes_groups(adata,'true_groups', test_type='wilcoxon')
End of explanation
sc.tl.rank_genes_groups(adata, 'true_groups')
sc.pl.rank_genes_groups(adata, n_genes=20)
sc.tl.rank_genes_groups(adata, 'true_groups', test_type='wilcoxon')
sc.pl.rank_genes_groups(adata, n_genes=20)
Explanation: Testing
Case 1: No mean difference, large variance difference.
Using the data created above, we get the following expectation and variance
$\mathbb{E}[Z_r]=\mathbb{E}[Z_c]=0.9$
$\mathbb{V}[Z_r]=8.19$
$\mathbb{V}[Z_c]=1.89$
End of explanation
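# As an aside (not part of the original notebook), the same effect can be checked with a direct
# two-sample test on a single marker gene; scipy is available as a scanpy dependency:
from scipy.stats import ranksums, ttest_ind
marker_gene = adata.X[:n_cluster, 0]
other_cells = adata.X[n_cluster:, 0]
print('t-test p-value:       ', ttest_ind(marker_gene, other_cells, equal_var=False).pvalue)
print('rank-sum test p-value:', ranksums(marker_gene, other_cells).pvalue)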
# n_cluster needs to be smaller than n_simulated_cells, n_marker_genes needs to be smaller than n_simulated_genes
n_simulated_cells=1000
n_simulated_genes=100
n_cluster=100
n_marker_genes=10
# Specify parameter between 0 and 1 for zero_inflation and p, positive integer for r
# Differential gene expression is simulated using reference parameters for all cells/genes
# except for marker genes in the distinct cells.
reference_zero_inflation=0.15
reference_p=0.5
reference_n=6
cluster_zero_inflation=0.9
cluster_p=0.5
cluster_n=1
Explanation: As can be seen above, not only does the Wilcoxon rank-sum test detect all marker genes, but there is also a clear difference to all other genes in ranking.
Case 2: No mean difference, smaller variance difference
End of explanation
adata=AnnData(np.multiply(binomial(1,reference_zero_inflation,(n_simulated_cells,n_simulated_genes)),
negative_binomial(reference_n,reference_p,(n_simulated_cells,n_simulated_genes))))
# adapt marker_genes for cluster
adata.X[0:n_cluster,0:n_marker_genes]=np.multiply(binomial(1,cluster_zero_inflation,(n_cluster,n_marker_genes)),
negative_binomial(cluster_n,cluster_p,(n_cluster,n_marker_genes)))
import pandas as pd
smp='true_groups'
true_groups_int=np.ones((n_simulated_cells,))
true_groups_int[0:n_cluster]=0
true_groups=list()
for i,j in enumerate(true_groups_int):
true_groups.append(str(j))
adata.smp['true_groups']=pd.Categorical(true_groups, dtype='category')
adata.uns[smp + '_order']=np.asarray(['0','1'])
sc.tl.rank_genes_groups(adata, 'true_groups')
sc.pl.rank_genes_groups(adata, n_genes=20)
sc.tl.rank_genes_groups(adata, 'true_groups', test_type='wilcoxon')
sc.pl.rank_genes_groups(adata, n_genes=20)
Explanation: This parameter initialization leads to the following expectations/ variances:
$\mathbb{E}[Z_r]=\mathbb{E}[Z_c]=0.9$
$\mathbb{V}[Z_r]=6.39$
$\mathbb{V}[Z_c]=1.89$
End of explanation
# n_cluster needs to be smaller than n_simulated_cells, n_marker_genes needs to be smaller than n_simulated_genes
n_simulated_cells=1000
n_simulated_genes=100
n_cluster=100
n_marker_genes=10
# Specify parameter between 0 and 1 for zero_inflation and p, positive integer for r
# Differential gene expression is simulated using reference parameters for all cells/genes
# except for marker genes in the distinct cells.
reference_zero_inflation=0.15
reference_p=0.5
reference_n=6
cluster_zero_inflation=0.9
cluster_p=0.55
cluster_n=2
Explanation: With smaller difference in variance, still all marker genes are detected, but less clearly.
Case 3: Small difference in expectation, difference in variance
End of explanation
adata=AnnData(np.multiply(binomial(1,reference_zero_inflation,(n_simulated_cells,n_simulated_genes)),
negative_binomial(reference_n,reference_p,(n_simulated_cells,n_simulated_genes))))
# adapt marker_genes for cluster
adata.X[0:n_cluster,0:n_marker_genes]=np.multiply(binomial(1,cluster_zero_inflation,(n_cluster,n_marker_genes)),
negative_binomial(cluster_n,cluster_p,(n_cluster,n_marker_genes)))
smp='true_groups'
true_groups_int=np.ones((n_simulated_cells,))
true_groups_int[0:n_cluster]=0
true_groups=list()
for i,j in enumerate(true_groups_int):
true_groups.append(str(j))
adata.smp['true_groups']=pd.Categorical(true_groups, dtype='category')
adata.uns[smp + '_order']=np.asarray(['0','1'])
sc.tl.rank_genes_groups(adata, 'true_groups')
sc.pl.rank_genes_groups(adata, n_genes=20)
sc.tl.rank_genes_groups(adata, 'true_groups', test_type='wilcoxon')
sc.pl.rank_genes_groups(adata, n_genes=20)
Explanation: This parameter initialization leads to the following expectations/ variances:
$\mathbb{E}[Z_r]=0.9$
$\mathbb{E}[Z_c]=1.47$
$\mathbb{V}[Z_r]=6.39$
$\mathbb{V}[Z_c]=2.92$
End of explanation |
746 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kaggle's Predicting Red Hat Business Value
This is a follow up attempt at Kaggle's Predicting Red Hat Business Value competition.
See my notebooks section for links to the first attempt and other kaggle competitions.
The focus of this iteration is exploring whether we can bring back the previously ignored categorical columns that have hundreds if not thousands of unique values, making it impractical to use one-hot encoding.
Two approaches are taken on categorical variables with a large amount of unique values
Step1: Joining together to get dataset
Step11: Building a preprocessing pipeline
Notice the new OmniEncoder transformer and read more about its development in my learning log.
Step12: Potential trouble with high dimensionality
Notice that char_10_action, group_1 and others have a ton of unique values; one-hot encoding will result in a dataframe with thousands of columns.
Let's explore 3 approaches to dealing with categorical columns with a lot of unique values and compare performance
Step13: Sampling to reduce runtime in training large dataset
If we train models based on the entire test dataset provided it exhausts the memory on my laptop. Again, in the spirit of getting something quick and dirty working, we'll sample the dataset and train on that. We'll then evaluate our model by testing the accuracy on a larger sample.
Step14: Reporting utilities
Some utilities to make reporting progress easier
Step15: Putting together classifiers
Step16: Cross validation and full test set accuracy
We'll cross validate within the training set, and then train on the full training set and see how well it performs on the full test set. | Python Code:
import pandas as pd
people = pd.read_csv('people.csv.zip')
people.head(3)
actions = pd.read_csv('act_train.csv.zip')
actions.head(3)
Explanation: Kaggle's Predicting Red Hat Business Value
This is a follow up attempt at Kaggle's Predicting Red Hat Business Value competition.
See my notebooks section for links to the first attempt and other kaggle competitions.
The focus of this iteration is exploring whether we can bring back the previously ignored categorical columns that have hundreds if not thousands of unique values, making it impractical to use one-hot encoding.
Two approaches are taken on categorical variables with a large amount of unique values:
encoding the values ordinally; sorting the values lexicographically and assigning a sequence of numbers, and then treating them quantitatively from there
encoding the most frequently occurring values using one-hot and then binary encoding the rest. As part of this I developed a new scikit-learn transformer
The end results: reincluding the columns boosted performance on the training set by only 0.5%, and surprisingly the binary / one-hot combo did hardly any better than the ordinal encoding.
Loading in the data
End of explanation
training_data_full = pd.merge(actions, people, how='inner', on='people_id', suffixes=['_action', '_person'], sort=False)
training_data_full.head(5)
(actions.shape, people.shape, training_data_full.shape)
Explanation: Joining together to get dataset
End of explanation
# %load "preprocessing_transforms.py"
from sklearn.base import TransformerMixin, BaseEstimator
import pandas as pd
import heapq
import numpy as np
class BaseTransformer(BaseEstimator, TransformerMixin):
def fit(self, X, y=None, **fit_params):
return self
def transform(self, X, **transform_params):
return self
class ColumnSelector(BaseTransformer):
    """Selects columns from a Pandas DataFrame."""
def __init__(self, columns, c_type=None):
self.columns = columns
self.c_type = c_type
def transform(self, X, **transform_params):
cs = X[self.columns]
if self.c_type is None:
return cs
else:
return cs.astype(self.c_type)
class OmniEncoder(BaseTransformer):
    """Encodes a categorical variable using no more than k columns. As many values as possible
    are one-hot encoded, the remaining are fit within a binary encoded set of columns.
    If necessary some are dropped (e.g. if (#unique_values) > 2^k).

    In deciding which values to one-hot encode, those that appear more frequently are
    preferred.
    """
def __init__(self, max_cols=20):
self.column_infos = {}
self.max_cols = max_cols
if max_cols < 3 or max_cols > 100:
raise ValueError("max_cols {} not within range(3, 100)".format(max_cols))
def fit(self, X, y=None, **fit_params):
self.column_infos = {col: self._column_info(X[col], self.max_cols) for col in X.columns}
return self
def transform(self, X, **transform_params):
return pd.concat(
[self._encode_column(X[col], self.max_cols, *self.column_infos[col]) for col in X.columns],
axis=1
)
@staticmethod
def _encode_column(col, max_cols, one_hot_vals, binary_encoded_vals):
num_one_hot = len(one_hot_vals)
num_bits = max_cols - num_one_hot if len(binary_encoded_vals) > 0 else 0
# http://stackoverflow.com/a/29091970/231589
zero_base = ord('0')
def i_to_bit_array(i):
return np.fromstring(
np.binary_repr(i, width=num_bits),
'u1'
) - zero_base
binary_val_to_bit_array = {val: i_to_bit_array(idx + 1) for idx, val in enumerate(binary_encoded_vals)}
bit_cols = [np.binary_repr(2 ** i, width=num_bits) for i in reversed(range(num_bits))]
col_names = ["{}_{}".format(col.name, val) for val in one_hot_vals] + ["{}_{}".format(col.name, bit_col) for bit_col in bit_cols]
zero_bits = np.zeros(num_bits, dtype=np.int)
def splat(v):
v_one_hot = [1 if v == ohv else 0 for ohv in one_hot_vals]
v_bits = binary_val_to_bit_array.get(v, zero_bits)
return pd.Series(np.concatenate([v_one_hot, v_bits]))
df = col.apply(splat)
df.columns = col_names
return df
@staticmethod
def _column_info(col, max_cols):
        """
        :param col: pd.Series
        :return: {'val': 44, 'val2': 4, ...}
        """
val_counts = dict(col.value_counts())
num_one_hot = OmniEncoder._num_onehot(len(val_counts), max_cols)
return OmniEncoder._partition_one_hot(val_counts, num_one_hot)
@staticmethod
def _partition_one_hot(val_counts, num_one_hot):
        """Partitions the values in val_counts into a list of values that should be
        one-hot encoded and a list of values that should be binary encoded.

        The `num_one_hot` most popular values are chosen to be one-hot encoded.

        :param val_counts: {'val': 433}
        :param num_one_hot: the number of elements to be one-hot encoded
        :return: ['val1', 'val2'], ['val55', 'val59']
        """
one_hot_vals = [k for (k, count) in heapq.nlargest(num_one_hot, val_counts.items(), key=lambda t: t[1])]
one_hot_vals_lookup = set(one_hot_vals)
bin_encoded_vals = [val for val in val_counts if val not in one_hot_vals_lookup]
return sorted(one_hot_vals), sorted(bin_encoded_vals)
@staticmethod
def _num_onehot(n, k):
        """Determines the number of one-hot columns we can have to encode n values
        in no more than k columns, assuming we will binary encode the rest.

        :param n: The number of unique values to encode
        :param k: The maximum number of columns we have
        :return: The number of one-hot columns to use
        """
num_one_hot = min(n, k)
def num_bin_vals(num):
if num == 0:
return 0
return 2 ** num - 1
def capacity(oh):
            """Capacity given we are using `oh` one-hot columns."""
return oh + num_bin_vals(k - oh)
while capacity(num_one_hot) < n and num_one_hot > 0:
num_one_hot -= 1
return num_one_hot
class EncodeCategorical(BaseTransformer):
def __init__(self):
self.categorical_vals = {}
def fit(self, X, y=None, **fit_params):
self.categorical_vals = {col: {label: idx + 1 for idx, label in enumerate(sorted(X[col].dropna().unique()))} for
col in X.columns}
return self
def transform(self, X, **transform_params):
return pd.concat(
[X[col].map(self.categorical_vals[col]) for col in X.columns],
axis=1
)
class SpreadBinary(BaseTransformer):
def transform(self, X, **transform_params):
return X.applymap(lambda x: 1 if x == 1 else -1)
class DfTransformerAdapter(BaseTransformer):
    """Adapts a scikit-learn Transformer to return a pandas DataFrame."""
def __init__(self, transformer):
self.transformer = transformer
def fit(self, X, y=None, **fit_params):
self.transformer.fit(X, y=y, **fit_params)
return self
def transform(self, X, **transform_params):
raw_result = self.transformer.transform(X, **transform_params)
return pd.DataFrame(raw_result, columns=X.columns, index=X.index)
class DfOneHot(BaseTransformer):
    """Wraps helper method `get_dummies` making sure all columns get one-hot encoded."""
def __init__(self):
self.dummy_columns = []
def fit(self, X, y=None, **fit_params):
self.dummy_columns = pd.get_dummies(
X,
prefix=[c for c in X.columns],
columns=X.columns).columns
return self
def transform(self, X, **transform_params):
return pd.get_dummies(
X,
prefix=[c for c in X.columns],
columns=X.columns).reindex(columns=self.dummy_columns, fill_value=0)
class DfFeatureUnion(BaseTransformer):
    """A dataframe friendly implementation of `FeatureUnion`."""
def __init__(self, transformers):
self.transformers = transformers
def fit(self, X, y=None, **fit_params):
for l, t in self.transformers:
t.fit(X, y=y, **fit_params)
return self
def transform(self, X, **transform_params):
transform_results = [t.transform(X, **transform_params) for l, t in self.transformers]
return pd.concat(transform_results, axis=1)
for col in training_data_full.columns:
print("in {} there are {} unique values".format(col, len(training_data_full[col].unique())))
None
Explanation: Building a preprocessing pipeline
Notice the new OmniEncoder transformer and read more about its development in my learning log.
End of explanation
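# Added usage sketch (not from the original notebook): exercising OmniEncoder on a tiny
# made-up frame, assuming the pandas/numpy/heapq imports from the earlier cells.
toy = pd.DataFrame({'city': ['a', 'b', 'c', 'a', 'd', 'e', 'f', 'g', 'a', 'b']})
toy_encoder = OmniEncoder(max_cols=5)
toy_encoded = toy_encoder.fit(toy).transform(toy)
print(toy_encoded.shape)  # 7 unique values fit in 5 columns: 2 one-hot + 3 binary bits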
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Imputer, StandardScaler
cat_columns = ['activity_category',
'char_1_action', 'char_2_action', 'char_3_action', 'char_4_action',
'char_5_action', 'char_6_action', 'char_7_action', 'char_8_action',
'char_9_action', 'char_1_person',
'char_2_person', 'char_3_person',
'char_4_person', 'char_5_person', 'char_6_person', 'char_7_person',
'char_8_person', 'char_9_person', 'char_10_person', 'char_11',
'char_12', 'char_13', 'char_14', 'char_15', 'char_16', 'char_17',
'char_18', 'char_19', 'char_20', 'char_21', 'char_22', 'char_23',
'char_24', 'char_25', 'char_26', 'char_27', 'char_28', 'char_29',
'char_30', 'char_31', 'char_32', 'char_33', 'char_34', 'char_35',
'char_36', 'char_37']
high_dim_cat_columns = ['date_action', 'char_10_action', 'group_1', 'date_person']
q_columns = ['char_38']
preprocessor_ignore = Pipeline([
('features', DfFeatureUnion([
('quantitative', Pipeline([
('select-quantitative', ColumnSelector(q_columns, c_type='float')),
('impute-missing', DfTransformerAdapter(Imputer(strategy='median'))),
('scale', DfTransformerAdapter(StandardScaler()))
])),
('categorical', Pipeline([
('select-categorical', ColumnSelector(cat_columns)),
('apply-onehot', DfOneHot()),
('spread-binary', SpreadBinary())
])),
]))
])
preprocessor_lexico = Pipeline([
('features', DfFeatureUnion([
('quantitative', Pipeline([
('combine-q', DfFeatureUnion([
('highd', Pipeline([
('select-highd', ColumnSelector(high_dim_cat_columns)),
('encode-highd', EncodeCategorical())
])),
('select-quantitative', ColumnSelector(q_columns, c_type='float')),
])),
('impute-missing', DfTransformerAdapter(Imputer(strategy='median'))),
('scale', DfTransformerAdapter(StandardScaler()))
])),
('categorical', Pipeline([
('select-categorical', ColumnSelector(cat_columns)),
('apply-onehot', DfOneHot()),
('spread-binary', SpreadBinary())
])),
]))
])
preprocessor_omni_20 = Pipeline([
('features', DfFeatureUnion([
('quantitative', Pipeline([
('select-quantitative', ColumnSelector(q_columns, c_type='float')),
('impute-missing', DfTransformerAdapter(Imputer(strategy='median'))),
('scale', DfTransformerAdapter(StandardScaler()))
])),
('categorical', Pipeline([
('select-categorical', ColumnSelector(cat_columns + high_dim_cat_columns)),
('apply-onehot', OmniEncoder(max_cols=20)),
('spread-binary', SpreadBinary())
])),
]))
])
preprocessor_omni_50 = Pipeline([
('features', DfFeatureUnion([
('quantitative', Pipeline([
('select-quantitative', ColumnSelector(q_columns, c_type='float')),
('impute-missing', DfTransformerAdapter(Imputer(strategy='median'))),
('scale', DfTransformerAdapter(StandardScaler()))
])),
('categorical', Pipeline([
('select-categorical', ColumnSelector(cat_columns + high_dim_cat_columns)),
('apply-onehot', OmniEncoder(max_cols=50)),
('spread-binary', SpreadBinary())
])),
]))
])
Explanation: Potential trouble with high dimensionality
Notice that char_10_action, group_1 and others have a ton of unique values; one-hot encoding will result in a dataframe with thousands of columns.
Let's explore 3 approaches to dealing with categorical columns with a lot of unique values and compare performance:
1. ignore them
2. encode them ordinally, mapping every unique value to a different integer (assuming an ordering that probably doesn't exist, at least not under our default lexicographical sorting)
3. encode them with a combo of one-hot and binary
End of explanation
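# Added back-of-the-envelope comparison (not in the original): the number of columns each
# approach implies for a categorical column with n unique values.
import math
for n in (10, 1000, 30000):
    print(n, 'one-hot:', n, 'binary:', math.ceil(math.log2(n + 1)), 'ordinal:', 1)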
from sklearn.cross_validation import train_test_split
training_frac = 0.01
test_frac = 0.05
training_data, the_rest = train_test_split(training_data_full, train_size=training_frac, random_state=0)
test_data = the_rest.sample(frac=test_frac / (1-training_frac))
training_data.shape
test_data.shape
Explanation: Sampling to reduce runtime in training large dataset
If we train models on the entire dataset provided, it exhausts the memory on my laptop. Again, in the spirit of getting something quick and dirty working, we'll sample the dataset and train on that. We'll then evaluate our model by testing the accuracy on a larger sample.
End of explanation
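# Quick sanity check (added): the two samples should be roughly 1% and 5% of the full frame,
# and they are disjoint by construction since test_data is drawn from the_rest.
print(len(training_data) / float(len(training_data_full)))  # ~0.01
print(len(test_data) / float(len(training_data_full)))      # ~0.05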
import time
import subprocess
class time_and_log():
def __init__(self, label, *, prefix='', say=False):
self.label = label
self.prefix = prefix
self.say = say
def __enter__(self):
msg = 'Starting {}'.format(self.label)
print('{}{}'.format(self.prefix, msg))
if self.say:
cmd_say(msg)
self.start = time.process_time()
return self
def __exit__(self, *exc):
self.interval = time.process_time() - self.start
msg = 'Finished {} in {:.2f} seconds'.format(self.label, self.interval)
print('{}{}'.format(self.prefix, msg))
if self.say:
cmd_say(msg)
return False
def cmd_say(msg):
subprocess.call("say '{}'".format(msg), shell=True)
with time_and_log('wrangling training data', say=True, prefix=" _"):
wrangled = preprocessor_omni_20.fit_transform(training_data)
wrangled.head()
Explanation: Reporting utilities
Some utilities to make reporting progress easier
End of explanation
from sklearn.ensemble import RandomForestClassifier
pipe_rf_ignore = Pipeline([
('wrangle', preprocessor_ignore),
('rf', RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=0))
])
pipe_rf_lexico = Pipeline([
('wrangle', preprocessor_lexico),
('rf', RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=0))
])
pipe_rf_omni_20 = Pipeline([
('wrangle', preprocessor_omni_20),
('rf', RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=0))
])
pipe_rf_omni_50 = Pipeline([
('wrangle', preprocessor_omni_50),
('rf', RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=0))
])
feature_columns = cat_columns + q_columns + high_dim_cat_columns
def extract_X_y(df):
return df[feature_columns], df['outcome']
X_train, y_train = extract_X_y(training_data)
X_test, y_test = extract_X_y(test_data)
Explanation: Putting together classifiers
End of explanation
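# Hedged sketch (added, not part of the original analysis): once any of these pipelines has
# been fit, the forest and the wrangled column names can be pulled back out to rank feature
# importances; 'wrangle' and 'rf' are the step names defined above.
def top_features(fitted_pipe, X, n=10):
    wrangled_cols = fitted_pipe.named_steps['wrangle'].transform(X).columns
    rf = fitted_pipe.named_steps['rf']
    return sorted(zip(wrangled_cols, rf.feature_importances_), key=lambda t: -t[1])[:n]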
from sklearn.metrics import accuracy_score
from sklearn.cross_validation import cross_val_score
import numpy as np
models = [
('random forest ignore', pipe_rf_ignore),
('random forest ordinal', pipe_rf_lexico),
('random forest omni 20', pipe_rf_omni_20),
('random forest omni 50', pipe_rf_omni_50),
]
for label, model in models:
print('Evaluating {}'.format(label))
cmd_say('Evaluating {}'.format(label))
# with time_and_log('cross validating', say=True, prefix=" _"):
# scores = cross_val_score(estimator=model,
# X=X_train,
# y=y_train,
# cv=5,
# n_jobs=1)
# print(' CV accuracy: {:.3f} +/- {:.3f}'.format(np.mean(scores), np.std(scores)))
with time_and_log('fitting full training set', say=True, prefix=" _"):
model.fit(X_train, y_train)
with time_and_log('evaluating on full test set', say=True, prefix=" _"):
print(" Full test accuracy ({:.2f} of dataset): {:.3f}".format(
test_frac,
accuracy_score(y_test, model.predict(X_test))))
Explanation: Cross validation and full test set accuracy
We'll cross validate within the training set, and then train on the full training set and see how well it performs on the full test set.
End of explanation |
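# If the cross-validation pass is wanted despite the runtime cost, the commented block above
# can be re-enabled; a minimal equivalent sketch (added) for a single model:
scores = cross_val_score(estimator=pipe_rf_ignore, X=X_train, y=y_train, cv=3, n_jobs=1)
print('CV accuracy: {:.3f} +/- {:.3f}'.format(np.mean(scores), np.std(scores)))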
747 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ScrapyDo Overview
ScrapyDo is a crochet-based blocking API for Scrapy. It allows the usage of Scrapy as a library, mainly aimed at spider prototyping and data exploration in IPython notebooks.
In this notebook we are going to show how to use scrapydo and how it helps to rapidly crawl and explore data. Our main premise is that we want to crawl the internet as a means to analyze data, not as an end in itself.
Initialization
The function setup must be called before any call to other functions.
Step1: The fetch function and highlight helper
The fetch function returns a scrapy.Response object for a given URL.
Step2: The highlight function is a helper to highlight text content using the pygments module. It is very useful to inspect text content.
Step3: The crawl function or how to do spider-less crawling
Here we are going to show how to crawl a URL without defining a spider class, using only callback functions. This is very useful for quick crawling and data exploration.
Step4: We replicate the example in scrapy.org by defining two callback functions to crawl the website http://blog.scrapinghub.com
Step5: Once we have our callback functions for our target website, we simply call scrapydo.crawl
Step6: Now that we have our data, we can start doing the fun part! Here we show the posts title length distribution.
Step7: The run_spider function and running spiders from an existing project
The previous section showed how to do quick crawls to retrieve data. In this section we are going to show how to run spiders from existing scrapy projects, which can be useful for rapid spider prototyping as well as analysing the crawled data from a given spider.
We use a modified dirbot project, which is already accessible through the PYTHONPATH.
Step8: We want to see the logging output, just as the scrapy crawl command would do. Hence we set the log level to INFO.
Step9: The function run_spider allows to run any spider class and provide custom settings.
Step10: In this way, we have less friction to use scrapy to data mine the web and quickly start exploring our data. | Python Code:
import scrapydo
scrapydo.setup()
Explanation: ScrapyDo Overview
ScrapyDo is a crochet-based blocking API for Scrapy. It allows the usage of Scrapy as a library, mainly aimed at spider prototyping and data exploration in IPython notebooks.
In this notebook we are going to show how to use scrapydo and how it helps to rapidly crawl and explore data. Our main premise is that we want to crawl the internet as a means to analyze data, not as an end in itself.
Initialization
The function setup must be called before any call to other functions.
End of explanation
response = scrapydo.fetch("http://httpbin.org/get?show_env=1")
response
Explanation: The fetch function and highlight helper
The fetch function returns a scrapy.Response object for a given URL.
End of explanation
from scrapydo.utils import highlight
highlight(response.body, 'json')
response = scrapydo.fetch("http://httpbin.org")
highlight(response.body[:300])
highlight(response.css('p').extract())
highlight(response.headers, 'python')
Explanation: The highlight function is a helper to highlight text content using the pygments module. It is very useful to inspect text content.
End of explanation
# Some additional imports for our data exploration.
%matplotlib inline
import matplotlib.pylab as plt
import pandas as pd
import seaborn as sns
sns.set(context='poster', style='ticks')
Explanation: The crawl function or how to do spider-less crawling
Here we are going to show how to crawl a URL without defining a spider class, using only callback functions. This is very useful for quick crawling and data exploration.
End of explanation
import scrapy
def parse_blog(response):
for url in response.css('ul li a::attr("href")').re(r'/\d\d\d\d/\d\d/$'):
yield scrapy.Request(response.urljoin(url), parse_titles)
def parse_titles(response):
for post_title in response.css('div.entries > ul > li a::text').extract():
yield {'title': post_title}
Explanation: We replicate the example in scrapy.org by defining two callback functions to crawl the website http://blog.scrapinghub.com.
The function parse_blog(response) is going to extract the listing URLs and the function parse_titles(response) is going to extract the post titles from each listing page.
End of explanation
items = scrapydo.crawl('http://blog.scrapinghub.com', parse_blog)
Explanation: Once we have our callback functions for our target website, we simply call scrapydo.crawl:
End of explanation
df = pd.DataFrame(items)
df['length'] = df['title'].apply(len)
df[:5]
ax = df['length'].plot(kind='hist', bins=11)
ax2 = df['length'].plot(kind='kde', secondary_y=True, ax=ax)
ax2.set(ylabel="density")
ax.set(title="Title length distribution", xlim=(10, 80), ylabel="posts", xlabel="length");
Explanation: Now that we have our data, we can start doing the fun part! Here we show the posts title length distribution.
End of explanation
import os
os.environ['SCRAPY_SETTINGS_MODULE'] = 'dirbot.settings'
Explanation: The run_spider function and running spiders from an existing project
The previous section showed how to do quick crawls to retrieve data. In this section we are going to show how to run spiders from existing scrapy projects, which can be useful for rapid spider prototyping as well as analysing the crawled data from a given spider.
We use a modified dirbot project, which is already accessible through the PYTHONPATH.
End of explanation
import logging
logging.root.setLevel(logging.INFO)
Explanation: We want to see the logging output, just as the scrapy crawl command would do. Hence we set the log level to INFO.
End of explanation
from dirbot.spiders import dmoz
items = scrapydo.run_spider(dmoz.DmozSpider, settings={'CLOSESPIDER_ITEMCOUNT': 500})
Explanation: The function run_spider allows to run any spider class and provide custom settings.
End of explanation
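# Hypothetical variation (added): any standard Scrapy setting can be passed the same way,
# for example throttling requests while capping the number of scraped items.
items_throttled = scrapydo.run_spider(
    dmoz.DmozSpider,
    settings={'CLOSESPIDER_ITEMCOUNT': 100, 'DOWNLOAD_DELAY': 0.5})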
highlight(items[:3], 'python')
from urlparse import urlparse
dmoz_items = pd.DataFrame(items)
dmoz_items['domain'] = dmoz_items['url'].apply(lambda url: urlparse(url).netloc.replace('www.', ''))
ax = dmoz_items.groupby('domain').apply(len).sort(inplace=False)[-10:].plot(kind='bar')
ax.set(title="Top 10 domains")
plt.setp(ax.xaxis.get_majorticklabels(), rotation=30);
Explanation: In this way, we have less friction to use scrapy to data mine the web and quickly start exploring our data.
End of explanation |
748 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
data.columns
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
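# For reference (added): undoing the standardization is the inverse shift-and-scale,
# exactly what the prediction plot at the end of the notebook does for 'cnt'.
mean, std = scaled_features['cnt']
unscaled_cnt = data['cnt'] * std + mean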
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
# Defining the sigmoid function for activations
def sigmoid(x):
return 1 / (1 + np.exp(-x))
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
# self.activation_function = lambda x : 0 # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
self.activation_function = sigmoid
def train(self, features, targets):
# Train the network on batch of features and targets.
# features: 2D array, each row is one data record, each column is a feature
# targets: 1D array of target values
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
# The output layer has only one node and is used for the regression,
# the output of the node is the same as the input of the node. That is,
# the activation function is f(x)=x
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Backpropagated error terms - Replace these values with your calculations.
# The derivative of the output activation function f(x)=x is 1
output_error_term = error * 1.0
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)
# TODO: Backpropagated error terms - Replace these values with your calculations.
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
# Run a forward pass through the network with input features
# features: 1D array of feature values
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
# The output layer has only one node and is used for the regression,
# the output of the node is the same as the input of the node. That is,
# the activation function is f(x)=x
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
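# Small smoke test (added, not part of the project scaffold): a tiny network's forward
# pass should return one output per input record.
tiny_net = NeuralNetwork(input_nodes=3, hidden_nodes=2, output_nodes=1, learning_rate=0.1)
print(tiny_net.run(np.array([[0.5, -0.2, 0.1]])).shape)  # (1, 1)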
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 2000
learning_rate = 1.0
hidden_nodes = 4
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
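# One possible way (added sketch) to compare a few hidden-layer sizes before committing:
# train each briefly and look at the validation MSE; this mirrors the training loop above.
for hn in (2, 4, 8):
    candidate = NeuralNetwork(train_features.shape[1], hn, 1, 1.0)
    for _ in range(200):
        batch = np.random.choice(train_features.index, size=128)
        candidate.train(train_features.ix[batch].values, train_targets.ix[batch]['cnt'])
    print(hn, MSE(candidate.run(val_features).T, val_targets['cnt'].values))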
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
749 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fantasy Football Scoring System
| Stat Category | Point Value |
|---------------------|---------------------------|
|Passing Yards | 1 point for every 25 yards|
|Passing TDs | 6 points |
|Passing Interceptions| -2 points |
|Rushing Yards | 1 point for every 10 yards|
|Rushing TDs | 6 points |
|Receiving Yards | 1 point for every 10 yards|
|Receiving TDs | 6 points |
|Fumbles Lost | -2 points |
QB Ratings
Load the data from the qb_games.csv file.
NOTE
Step1: Calculate Fantasy Points
Calculate the fantasy points based on the table above
Step2: Get the average QB fantasy points by year
Step3: Observation
You can trace a sharp improvement in performance over years 1-6 where the fantasy total points increase from a yearly average of 10 to over 15 points. There is then a plateau through seasons 7 through 14, and a slight uptick at seasons 15-17, but the averages do not break above 20 points on average.
Step4: Observation
The average passes attempted also match the growth in fantasy points in terms of having their
sharpest increases in years 1-6, and generally plateauing after that. One difference is that there really is no general increase in passes attempted years 15-17.
Step5: Observation
The Passer Rating for QB's shows the same type of improvement for years 1-6 and the same plateau for every year after that.
Career Means for Key Statistics
Calculate the career means for each of the statistics charted above and use them as filters for evaluating players above and below the mean.
Step6: Observation
Shifting the data set to only include data for pass attempts above 29 does not greatly change the overall data. | Python Code:
%matplotlib inline
import pandas as pd
import matplotlib as mp
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
qb_games = pd.read_csv('qb_games.csv')
qb_games.columns.values
Explanation: Fantasy Football Scoring System
| Stat Category | Point Value |
|---------------------|---------------------------|
|Passing Yards | 1 point for every 25 yards|
|Passing TDs | 6 points |
|Passing Interceptions| -2 points |
|Rushing Yards | 1 point for every 10 yards|
|Rushing TDs | 6 points |
|Receiving Yards | 1 point for every 10 yards|
|Receiving TDs | 6 points |
|Fumbles Lost | -2 points |
QB Ratings
Load the data from the qb_games.csv file.
NOTE: Games, a count of games for the current season and career_games have been added so that you can differentiate between regular season games (the games used to calculate fantasy stats) and post season or playoff games. Career Games are a count of how many games a player has played for their career. It was added as a dimension to consider when measuring player growth, plateau and decline.
End of explanation
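# Added sketch: the scoring table written as a reusable per-row function; it should agree
# with the vectorized column arithmetic in the next cell (receiving stats are not in this data).
def fantasy_points(row):
    return (row['Pass Yds'] / 25.0 + 6 * row['Pass TD'] - 2 * row['Pass Int']
            + row['Rush Yds'] / 10.0 + 6 * row['Rush TD'])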
qb_games['Fantasy Points'] = (qb_games['Pass Yds']/25) + (6 * qb_games['Pass TD']) - (2 * qb_games['Pass Int']) + (qb_games['Rush Yds'] /10) + (6 * qb_games['Rush TD'])
qb_fantasy = qb_games[['Name','Career Year', 'Year', 'Game Count', 'Career Games', 'Date', 'Pass Att', 'Pass Yds', 'Pass TD', 'Pass Int', 'Pass Rate', 'Rush Att', 'Rush Yds', 'Rush TD', 'Fantasy Points']]
qb_fantasy.head(10)
Explanation: Calculate Fantasy Points
Calculate the fantasy points based on the table above:
(qb_games['Pass Yds']/25) + (6 * qb_games['Pass TD']) - (2 * qb_games['Pass Int']) + (qb_games['Rush Yds'] /10) + (6 * qb_games['Rush TD'])
Store the data to be used for initial analysis in the Data Frame qb_fantasy
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
print(len(qb_fantasy))
yearly_fantasy_points = qb_fantasy.groupby(['Career Year'], as_index=False).mean()
yearly_fantasy_points[['Career Year', 'Pass Att', 'Pass Rate', 'Fantasy Points']]
color = ['red']
ax = sns.barplot(yearly_fantasy_points['Career Year'], (yearly_fantasy_points['Fantasy Points'] ), palette=color)
Explanation: Get the average QB fantasy points by year
End of explanation
color = ['blue']
ax = sns.barplot(yearly_fantasy_points['Career Year'], (yearly_fantasy_points['Pass Att'] ), palette=color)
Explanation: Observation
You can trace a sharp improvement in performance over years 1-6 where the fantasy total points increase from a yearly average of 10 to over 15 points. There is then a plateau through seasons 7 through 14, and a slight uptick at seasons 15-17, but the averages do not break above 20 points on average.
End of explanation
color = ['green']
ax = sns.barplot(yearly_fantasy_points['Career Year'], (yearly_fantasy_points['Pass Rate'] ), palette=color)
Explanation: Observation
The average passes attempted also match the growth in fantasy points in terms of having their
sharpest increases in years 1-6, and generally plateauing after that. One difference is that there really is no general increase in passes attempted years 15-17.
End of explanation
qb_means = qb_fantasy[['Pass Att', 'Pass Rate', 'Fantasy Points']].mean()
qb_means
pass_att = qb_means['Pass Att']
qb_upper_pass_att = qb_fantasy.loc[qb_fantasy['Pass Att'] > pass_att]
qb_pass_att_mean = qb_upper_pass_att['Pass Att'].mean()
print('Shifting data to only include pass attempts when greater than %d average pass attempts' %(pass_att))
qb_att = qb_upper_pass_att.groupby(['Career Year'], as_index=False).mean()
color = ['blue']
ax = sns.barplot(qb_att['Career Year'], (qb_att['Pass Att'] ), palette=color)
Explanation: Observation
The Passer Rating for QB's shows the same type of improvement for years 1-6 and the same plateau for every year after that.
Career Means for Key Statistics
Calculate the career means for each of the statistics charted above and use them as filters for evaluating players above and below the mean.
End of explanation
pass_rate = qb_means['Pass Rate']
qb_upper_pass_rate = qb_fantasy.loc[qb_fantasy['Pass Rate'] > pass_rate]
qb_pass_rate_mean = qb_upper_pass_rate['Pass Rate'].mean()
print('Shifting data to only include pass attempts when greater than %d average pass attempts' %(pass_rate))
qb_rate = qb_upper_pass_rate.groupby(['Career Year'], as_index=False).mean()
color = ['green']
ax = sns.barplot(qb_rate['Career Year'], (qb_rate['Pass Rate'] ), palette=color)
qb_upper_fantasy_rate = qb_fantasy.loc[qb_fantasy['Pass Rate'] > pass_rate]
qb_name = qb_upper_fantasy_rate.groupby(['Name'], as_index=False)
print(len(qb_name))
qb_fantasy_rate_mean = qb_upper_fantasy_rate['Fantasy Points'].mean()
print(qb_fantasy_rate_mean)
qb_rate = qb_upper_fantasy_rate.groupby(['Career Year'], as_index=False).mean()
color = ['red']
ax = sns.barplot(qb_rate['Career Year'], (qb_rate['Fantasy Points'] ), palette=color)
qb_upper_pass_rate = qb_fantasy.loc[qb_fantasy['Pass Rate'] > pass_rate]
qb_fantasy_rate = qb_upper_pass_rate.mean()
print(qb_fantasy_rate['Fantasy Points'])
qb_rate = qb_upper_pass_rate.groupby(['Career Year'], as_index=False).mean()
color = ['green']
ax = sns.barplot(qb_rate['Career Year'], (qb_rate['Fantasy Points'] ), palette=color)
Explanation: Observation
Shifting the data set to only include data for pass attempts above 29 does not greatly change the overall data.
End of explanation |
750 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BEM method
Step5: Boundary Discretization
we will create a discretization of the body geometry into panels (line segments in 2D). A panel's attributes are
Step6: We create a node distribution on the boundary that is refined near the corner with cosspace function
Step7: Discretize boundary element along the boundary
Here we implement BEM in a square grid
Step8: Plot boundary elements and wells
Step12: Boundary element implementation
<img src="./resources/BEMscheme2.png" width="400">
<center>Figure 2. Representation of a local gridblock with boundary elements</center>
Generally, the influence of all the j panels on the i BE node can be expressed as follows
Step16: Well source function
Line source solution for pressure and velocity (Datta-Gupta, 2007)
\begin{equation}
P(x,y)=BQ_{w}=-\frac{70.60\mu }{h\sqrt{k_{x}k_{y}}}\ln \left\{ (x-x_{w})^{2}+\frac{k_{x}}{k_{y}}(y-y_{w})^{2} \right\}Q_{w}+P_{avg}
\end{equation}
\begin{equation}
\frac{\partial P}{\partial x}=u=\frac{0.8936}{h\phi }\sqrt{\frac{k_{x}}{k_{y}}}\sum\limits_{k=1}^{N_{w}}Q_{k}\frac{x-x_{k}}{\left( x-x_{k} \right)^{2}+\frac{k_{x}}{k_{y}}(y-y_{k})^{2}}
\end{equation}
\begin{equation}
\frac{\partial P}{\partial y}=v=\frac{0.8936}{h\phi }\sqrt{\frac{k_{x}}{k_{y}}}\sum\limits_{k=1}^{N_{w}}Q_{k}\frac{y-y_{k}}{\left( x-x_{k} \right)^{2}+\frac{k_{x}}{k_{y}}(y-y_{k})^{2}}
\end{equation}
Step19: BEM function solution
Generally, the influence of all the j panels on the i BE node can be expressed as follows
Step20: Plot results | Python Code:
#Q = 2000/3 #strength of the source-sheet,stb/d
h=25.26 #thickness of local gridblock,ft
phi=0.2 #porosity
kx=200 #permeability in x direction,md
ky=200 #permeability in y direction,md
kr=kx/ky #permeability ratio
miu=1 #viscosity,cp
Nw=1 #Number of well
Qwell_1=2000 #Flow rate of well 1
Boundary_V=-400 #boundary velocity ft/day
Explanation: BEM method
End of explanation
class Panel:
    """Contains information related to a panel."""
def __init__(self, xa, ya, xb, yb):
        """
        Creates a panel.
        Arguments
        ---------
        xa, ya -- Cartesian coordinates of the first end-point.
        xb, yb -- Cartesian coordinates of the second end-point.
        """
self.xa, self.ya = xa, ya
self.xb, self.yb = xb, yb
self.xc, self.yc = (xa+xb)/2, (ya+yb)/2 # control-point (center-point)
self.length = math.sqrt((xb-xa)**2+(yb-ya)**2) # length of the panel
# orientation of the panel (angle between x-axis and panel)
self.sinalpha=(yb-ya)/self.length
self.cosalpha=(xb-xa)/self.length
self.Q = 0. # source strength
self.U = 0. # velocity component
self.V = 0. # velocity component
self.P = 0. # pressure coefficient
class Well:
    """Contains information related to a well source."""
def __init__(self, xw, yw,rw,Q):
        """
        Creates a well source.
        Arguments
        ---------
        xw, yw -- Cartesian coordinates of the well source.
        Q -- Flow rate of the well source.
        rw -- radius of the well source.
        """
self.xw, self.yw = xw, yw
self.Q = Q # source strength
self.rw = rw # velocity component
Explanation: Boundary Discretization
we will create a discretization of the body geometry into panels (line segments in 2D). A panel's attributes are: its starting point, end point and mid-point, its length and its orientation. See the following figure for the nomenclature used in the code and equations below.
<img src="./resources/PanelLocal.png" width="300">
<center>Figure 1. Nomenclature of the boundary element in the local coordinates</center>
Create panel and well class
End of explanation
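# Quick check (added), assuming the notebook's math import: a unit-length horizontal
# panel should report sinalpha = 0 and cosalpha = 1.
p_check = Panel(0.0, 0.0, 1.0, 0.0)
print(p_check.length, p_check.sinalpha, p_check.cosalpha) # 1.0 0.0 1.0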
def cosspace(st,ed,N):
N=N+1
AngleInc=numpy.pi/(N-1)
CurAngle = AngleInc
space=numpy.linspace(0,1,N)
space[0]=st
for i in range(N-1):
space[i+1] = 0.5*numpy.abs(ed-st)*(1 - math.cos(CurAngle));
CurAngle += AngleInc
if ed<st:
space[0]=ed
space=space[::-1]
return space
Explanation: We create a node distribution on the boundary that is refined near the corner with cosspace function
End of explanation
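# Illustration (added): cosine spacing clusters nodes near both ends of the interval,
# which is why it is used to refine the boundary near the corners.
print(cosspace(0.0, 1.0, 5)) # spacing is tighter near 0 and 1 than in the middle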
N=80 #Number of boundary element
Nbd=20 #Number of boundary element in each boundary
Dx=1. #Grid block length in X direction
Dy=1. #Gird block lenght in Y direction
#Create the array
x_ends = numpy.linspace(0, Dx, N) # computes a 1D-array for x
y_ends = numpy.linspace(0, Dy, N) # computes a 1D-array for y
interval=cosspace(0,Dx,Nbd)
rinterval=cosspace(Dx,0,Nbd)
#interval=numpy.linspace(0,1,Nbd+1)
#rinterval=numpy.linspace(1,0,Nbd+1)
#Define the rectangle boundary
for i in range(Nbd):
x_ends[i]=0
y_ends[i]=interval[i]
for i in range(Nbd):
x_ends[i+Nbd]=interval[i]
y_ends[i+Nbd]=Dy
for i in range(Nbd):
x_ends[i+Nbd*2]=Dx
y_ends[i+Nbd*2]=rinterval[i]
for i in range(Nbd):
x_ends[i+Nbd*3]=rinterval[i]
y_ends[i+Nbd*3]=0
x_ends,y_ends=numpy.append(x_ends, x_ends[0]), numpy.append(y_ends, y_ends[0])
#Define the panel
panels = numpy.empty(N, dtype=object)
for i in range(N):
panels[i] = Panel(x_ends[i], y_ends[i], x_ends[i+1], y_ends[i+1])
#Define the well
wells = numpy.empty(Nw, dtype=object)
wells[0]=Well(Dx/2,Dy/2,0.025,Qwell_1)
#for i in range(N):
# print("Panel Coordinate (%s,%s) sina,cosa (%s,%s) " % (panels[i].xc,panels[i].yc,panels[i].sinalpha,panels[i].cosalpha))
#print("Well Location (%s,%s) radius: %s Flow rate:%s " % (wells[0].xw,wells[0].yw,wells[0].rw,wells[0].Q))
Explanation: Discretize boundary element along the boundary
Here we implement BEM in a square grid
End of explanation
#Plot the panel
%matplotlib inline
val_x, val_y = 0.3, 0.3
x_min, x_max = min(panel.xa for panel in panels), max(panel.xa for panel in panels)
y_min, y_max = min(panel.ya for panel in panels), max(panel.ya for panel in panels)
x_start, x_end = x_min-val_x*(x_max-x_min), x_max+val_x*(x_max-x_min)
y_start, y_end = y_min-val_y*(y_max-y_min), y_max+val_y*(y_max-y_min)
size = 5
pyplot.figure(figsize=(size, (y_end-y_start)/(x_end-x_start)*size))
pyplot.grid(True)
pyplot.xlabel('x', fontsize=16)
pyplot.ylabel('y', fontsize=16)
pyplot.xlim(x_start, x_end)
pyplot.ylim(y_start, y_end)
pyplot.plot(numpy.append([panel.xa for panel in panels], panels[0].xa),
numpy.append([panel.ya for panel in panels], panels[0].ya),
linestyle='-', linewidth=1, marker='o', markersize=6, color='#CD2305');
pyplot.scatter(wells[0].xw,wells[0].yw,s=100,alpha=0.5)
pyplot.legend(['panels', 'Wells'],
loc=1, prop={'size':12})
Explanation: Plot boundary elements and wells
End of explanation
#Panel infuence factor Bij
def InflueceP(x, y, panel):
Evaluates the contribution of a panel at one point.
Arguments
---------
x, y -- Cartesian coordinates of the point.
panel -- panel which contribution is evaluated.
Returns
-------
Integral over the panel of the influence at one point.
#Transfer global coordinate point(x,y) to local coordinate
x=x-panel.xa
y=y-panel.ya
L1=panel.length
#Calculate the pressure and velocity influence factor
a=panel.cosalpha**2+kr*panel.sinalpha**2
b=x*panel.cosalpha+kr*panel.sinalpha*y
c=y*panel.cosalpha-x*panel.sinalpha
dp=70.6*miu/h/math.sqrt(kx*ky)
Cp = dp/a*(
(
b*math.log(x**2-2*b*L1+a*L1**2+kr*y**2)
-L1*a*math.log((x-L1*panel.cosalpha)**2+kr*(y-L1*panel.sinalpha)**2)
+2*math.sqrt(kr)*c*math.atan((b-a*L1)/math.sqrt(kr)/c)
)
-
(
b*math.log(x**2+kr*y**2)
+2*math.sqrt(kr)*c*math.atan((b)/math.sqrt(kr)/c)
)
)
#debug
#print("a: %s b:%s c:%s " % (a,b,c))
#angle=math.atan((b-a*L1)/math.sqrt(kr)/c)*180/numpy.pi
#print("Magic angle:%s"% angle)
return Cp
def InflueceU(x, y, panel):
Evaluates the contribution of a panel at one point.
Arguments
---------
x, y -- Cartesian coordinates of the point.
panel -- panel which contribution is evaluated.
Returns
-------
Integral over the panel of the influence at one point.
#Transfer global coordinate point(x,y) to local coordinate
x=x-panel.xa
y=y-panel.ya
L1=panel.length
#Calculate the pressure and velocity influence factor
a=panel.cosalpha**2+kr*panel.sinalpha**2
b=x*panel.cosalpha+kr*panel.sinalpha*y
c=y*panel.cosalpha-x*panel.sinalpha
dv=-0.4468/h/phi*math.sqrt(kx/ky)
Cu = dv/a*(
(
panel.cosalpha*math.log(x**2-2*b*L1+a*L1**2+kr*y**2)+ 2*math.sqrt(kr)*panel.sinalpha*math.atan((a*L1-b)/math.sqrt(kr)/c)
)
-
(
panel.cosalpha*math.log(x**2+kr*y**2)+2*math.sqrt(kr)*panel.sinalpha*math.atan((-b)/math.sqrt(kr)/c)
)
)
#print("a: %s b:%s c:%s " % (a,b,c))
#angle=math.atan((b-a*L1)/math.sqrt(kr)/c)*180/numpy.pi
#print("Magic angle:%s"% angle)
return Cu
def InflueceV(x, y, panel):
Evaluates the contribution of a panel at one point.
Arguments
---------
x, y -- Cartesian coordinates of the point.
panel -- panel which contribution is evaluated.
Returns
-------
Integral over the panel of the influence at one point.
#Transfer global coordinate point(x,y) to local coordinate
x=x-panel.xa
y=y-panel.ya
L1=panel.length
#Calculate the pressure and velocity influence factor
a=panel.cosalpha**2+kr*panel.sinalpha**2
b=x*panel.cosalpha+kr*panel.sinalpha*y
c=y*panel.cosalpha-x*panel.sinalpha
dv=-0.4468/h/phi*math.sqrt(kx/ky)
Cv = dv/a*(
(
panel.sinalpha*math.log(x**2-2*b*L1+a*L1**2+kr*y**2)+ 2*math.sqrt(1/kr)*panel.cosalpha*math.atan((b-a*L1)/math.sqrt(kr)/c)
)
-
(
panel.sinalpha*math.log(x**2+kr*y**2)+2*math.sqrt(1/kr)*panel.cosalpha*math.atan((b)/math.sqrt(kr)/c)
)
)
#print("a: %s b:%s c:%s " % (a,b,c))
#angle=math.atan((b-a*L1)/math.sqrt(kr)/c)*180/numpy.pi
#print("Magic angle:%s"% angle)
return Cv
Explanation: Boundary element implementation
<img src="./resources/BEMscheme2.png" width="400">
<center>Figure 2. Representation of a local gridblock with boundary elements</center>
Generally, the influence of all the j panels on the i BE node can be expressed as follows:
\begin{matrix}
c_{ij}p_{i}+p_{i}\int_{s_{j}}{H_{ij}\,ds_{j}}=(v_{i}\cdot \mathbf{n})\int_{s_{j}}{G_{ij}}\,ds_{j}
\end{matrix}
Where,
${{c}_{ij}}$ is the free term, caused by the source position.
<center>${{c}_{ij}}=\left{ \begin{matrix}
\begin{matrix}
1 & \text{source j on the internal domain} \
\end{matrix} \
\begin{matrix}
0.5 & \text{source j on the boundary} \
\end{matrix} \
\begin{matrix}
0 & \text{source j on the external domain} \
\end{matrix} \
\end{matrix} \right.$</center>
$\int_{s_{j}}{H_{ij}\,ds_{j}}$ is the integrated effect of the boundary element source i on the resulting normal flux at BE node j.
$\int_{s_{j}}{G_{ij}}\,ds_{j}$ is the integrated effect of the boundary element source i on the resulting pressure at BE node j
Line segment source solution for pressure and velocity (Derived recently)
The integrated effect can be formulated using the line segment source solution, which gives:
\begin{equation}
\int_{{{s}{j}}}{{{G}{ij}}}d{{s}{j}}=B{{Q}{w}}=P({{{x}'}{i}},{{{y}'}{i}})=-\frac{70.60\mu }{h\sqrt{{{k}{x}}{{k}{y}}}}\int_{t=0}^{t={{l}{j}}}{\ln \left{ {{({x}'-t\cos {{\alpha }{j}})}^{2}}+\frac{{{k}{x}}}{{{k}{y}}}{{({y}'-t\sin {{\alpha }{j}})}^{2}} \right}dt}\cdot {{Q}{w}}
\end{equation}
\begin{equation}
\int_{{{s}{j}}}{{{H}{ij}}d{{s}{j}}\text{ }}={{v}{i}}(s)\cdot {{\mathbf{n}}{i}}=-{{u}{i}}\sin {{\alpha }{i}}+{{v}{i}}\cos {{\alpha }_{i}}
\end{equation}
Where,
\begin{equation}
u\left( x'_{i},y'_{i} \right)=A_{u}Q_{j}=\frac{0.8936}{h\phi }\sqrt{\frac{k_{x}}{k_{y}}}\int_{t=0}^{t=l_{j}}{\frac{x'_{i}-t\cos \alpha_{j}}{\left( x'_{i}-t\cos \alpha_{j} \right)^{2}+\frac{k_{x}}{k_{y}}(y'_{i}-t\sin \alpha_{j})^{2}}dt}\cdot Q_{j}
\end{equation}
\begin{equation}
v\left( x'_{i},y'_{i} \right)=A_{v}Q_{j}=\frac{0.8936}{h\phi }\sqrt{\frac{k_{x}}{k_{y}}}\int_{t=0}^{t=l_{j}}{\frac{y'_{i}-t\sin \alpha_{j}}{\left( x'_{i}-t\cos \alpha_{j} \right)^{2}+\frac{k_{x}}{k_{y}}(y'_{i}-t\sin \alpha_{j})^{2}}dt}\cdot Q_{j}
\end{equation}
Line segment source Integration function (Bij and Aij)
End of explanation
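# Small numerical check (added): the per-panel influence factors defined above can be
# evaluated at any interior point and accumulated over all panels.
cp_total = sum(InflueceP(0.25, 0.25, p) for p in panels)
print(cp_total)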
#Well influence factor
def InflueceP_W(x, y, well):
"""
Evaluates the pressure contribution of a well at one point.
Arguments
---------
x, y -- Cartesian coordinates of the point.
well -- well whose contribution is evaluated.
Returns
-------
Pressure influence coefficient of the well at the point.
"""
dp=-70.6*miu/h/math.sqrt(kx*ky)
Cp=dp*math.log((x-well.xw)**2+kr*(y-well.yw)**2)
return Cp
def InflueceU_W(x, y, well):
"""
Evaluates the x-component velocity contribution of a well at one point.
Arguments
---------
x, y -- Cartesian coordinates of the point.
well -- well whose contribution is evaluated.
Returns
-------
Velocity influence coefficient (x-component) of the well at the point.
"""
dv=0.8936/h/phi*math.sqrt(kx/ky)
Cu=dv*(x-well.xw)/((x-well.xw)**2+kr*(y-well.yw)**2)
return Cu
def InflueceV_W(x, y, well):
"""
Evaluates the y-component velocity contribution of a well at one point.
Arguments
---------
x, y -- Cartesian coordinates of the point.
well -- well whose contribution is evaluated.
Returns
-------
Velocity influence coefficient (y-component) of the well at the point.
"""
dv=0.8936/h/phi*math.sqrt(kx/ky)
Cv=dv*(y-well.yw)/((x-well.xw)**2+kr*(y-well.yw)**2)
return Cv
#InflueceV(0.5,1,panels[3])
#InflueceP(0,0.5,panels[0])
#InflueceU(0,0.5,panels[0])
Explanation: Well source function
Line source solution for pressure and velocity (Datta-Gupta, 2007)
\begin{equation}
P(x,y)=B{{Q}_{w}}=-\frac{70.60\mu }{h\sqrt{{{k}_{x}}{{k}_{y}}}}\ln \left\{ {{(x-{{x}_{w}})}^{2}}+\frac{{{k}_{x}}}{{{k}_{y}}}{{(y-{{y}_{w}})}^{2}} \right\}{{Q}_{w}}+{{P}_{avg}}
\end{equation}
\begin{equation}
\frac{\partial P}{\partial x}=u=\frac{0.8936}{h\phi }\sqrt{\frac{{{k}_{x}}}{{{k}_{y}}}}\sum\limits_{k=1}^{{{N}_{w}}}{{{Q}_{k}}}\frac{x-{{x}_{k}}}{{{\left( x-{{x}_{k}} \right)}^{2}}+\frac{{{k}_{x}}}{{{k}_{y}}}{{(y-{{y}_{k}})}^{2}}}
\end{equation}
\begin{equation}
\frac{\partial P}{\partial y}=v=\frac{0.8936}{h\phi }\sqrt{\frac{{{k}_{x}}}{{{k}_{y}}}}\sum\limits_{k=1}^{{{N}_{w}}}{{{Q}_{k}}}\frac{y-{{y}_{k}}}{{{\left( x-{{x}_{k}} \right)}^{2}}+\frac{{{k}_{x}}}{{{k}_{y}}}{{(y-{{y}_{k}})}^{2}}}
\end{equation}
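A simple consistency check on the velocity kernels above (a sketch added here, not part of the original notebook): the net Darcy flux through any circle centred on the well should be independent of the circle radius, since all produced fluid has to cross it. The snippet assumes the InflueceU_W/InflueceV_W functions, the wells list and Qwell_1 defined in this notebook; the helper name is hypothetical.
# Hypothetical radius-independence (mass-conservation) check of the well velocity field.
import math

def well_flux_through_circle(well, Q, R, n=720):
    total = 0.0
    dtheta = 2.0*math.pi/n
    for k in range(n):
        theta = (k + 0.5)*dtheta
        x = well.xw + R*math.cos(theta)
        y = well.yw + R*math.sin(theta)
        u = Q*InflueceU_W(x, y, well)
        v = Q*InflueceV_W(x, y, well)
        # outward unit normal on the circle is (cos(theta), sin(theta))
        total += (u*math.cos(theta) + v*math.sin(theta))*R*dtheta
    return total

# The two values below should agree closely if the kernels are consistent:
# print(well_flux_through_circle(wells[0], Qwell_1, R=0.1))
# print(well_flux_through_circle(wells[0], Qwell_1, R=0.3))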
End of explanation
def build_matrix(panels):
"""
Builds the source (influence) matrix.
Arguments
---------
panels -- array of panels.
Returns
-------
A -- NxN matrix (N is the number of panels).
"""
N = len(panels)
A = numpy.empty((N, N), dtype=float)
#numpy.fill_diagonal(A, 0.5)
for i, p_i in enumerate(panels): #target nodes
for j, p_j in enumerate(panels): #BE source
#if i != j: ###Matrix construction
if i>=0 and i<Nbd or i>=3*Nbd and i<4*Nbd:
A[i,j] = -p_j.sinalpha*InflueceU(p_i.xc, p_i.yc, p_j)+p_j.cosalpha*InflueceV(p_i.xc, p_i.yc, p_j)
#A[i,j] = InflueceP(p_i.xc, p_i.yc, p_j)
if i>=Nbd and i<2*Nbd or i>=2*Nbd and i<3*Nbd:
A[i,j] = -p_j.sinalpha*InflueceU(p_i.xc, p_i.yc, p_j)+p_j.cosalpha*InflueceV(p_i.xc, p_i.yc, p_j)
#A[i,j] = InflueceP(p_i.xc, p_i.yc, p_j)
return A
def build_rhs(panels):
"""
Builds the RHS of the linear system.
Arguments
---------
panels -- array of panels.
Returns
-------
b -- 1D array of length N (N is the number of panels).
"""
b = numpy.empty(len(panels), dtype=float)
for i, panel in enumerate(panels):
V_well=( -panel.sinalpha*Qwell_1*InflueceU_W(panel.xc, panel.yc, wells[0])+panel.cosalpha*Qwell_1*InflueceV_W(panel.xc, panel.yc, wells[0]) )
if i>=0 and i<Nbd:
b[i]=0+V_well
#b[i]=4000
#b[i]=84
if i>=Nbd and i<2*Nbd:
b[i]=-V_well
#b[i]=-42
if i>=2*Nbd and i<3*Nbd:
b[i]=-V_well
#b[i]=-42
if i>=3*Nbd and i<4*Nbd:
b[i]=0+V_well
#b[i]=84
return b
#Qwell_1=300 #Flow rate of well 1
#Boundary_V=-227 #boundary velocity ft/day
A = build_matrix(panels) # computes the singularity matrix
b = build_rhs(panels) # computes the freestream RHS
# solves the linear system
Q = numpy.linalg.solve(A, b)
for i, panel in enumerate(panels):
panel.Q = Q[i]
Explanation: BEM function solution
Generally, the influence of all the j panels on the i BE node can be expressed as follows:
\begin{matrix}
{{c}_{ij}}{{p}_{i}}+{{p}_{i}}\int_{{{s}_{j}}}{{{H}_{ij}}d{{s}_{j}}}=({{v}_{i}}\cdot \mathbf{n})\int_{{{s}_{j}}}{{{G}_{ij}}}d{{s}_{j}}
\end{matrix}
Applying the boundary conditions along the boundary to the above equation, a linear system can be constructed as follows:
\begin{matrix}
\left[ {{{{H}'}}_{ij}} \right]\left[ {{P}_{i}} \right]=\left[ {{G}_{ij}} \right]\left[ {{v}_{i}}\cdot \mathbf{n} \right]
\end{matrix}
!!!!! MY IMPLEMENTATION MAY HAVE SOME PROBLEMS HERE !!!!!
All of the integrals can be evaluated with the expressions above except the self-influence of a panel on its own node, where:
<center>$
\left[ {{{{H}'}}_{ij}} \right]=\left\{ \begin{matrix}
\begin{matrix}
{{H}_{ij}} & i\ne j \\
\end{matrix} \\
\begin{matrix}
{{H}_{ij}}+\frac{1}{2} & i=j \\
\end{matrix} \\
\end{matrix} \right.
$</center>
<img src="./resources/BEMscheme.png" width="400">
<center>Figure 3. Representation of coordinate systems and the principle of superposition with well source and boundary element source </center>
As shown in Fig. 3, the pressure and velocity at any point i in the local gridblock can be determined using the equations below. Applying the principle of superposition at each BE node along the boundary (Fig. 3), the boundary conditions can be written as follows:
\begin{matrix}
{{P}_{i}}(s)=\sum\limits_{j=1}^{M}{{{B}_{ij}}{{Q}_{j}}} & \text{constant pressure boundary} \\
\end{matrix}
\begin{matrix}
{{v}_{i}}(s)\cdot {{\mathbf{n}}_{i}}=\sum\limits_{j=1}^{M}{{{A}_{ij}}{{Q}_{j}}} & \text{constant flux boundary} \\
\end{matrix}
The $P_i$ and ${{v}_{i}}\cdot \mathbf{n}$ are the known boundary conditions. The flow rates (strengths) of the boundary elements in $H_{ij}$ and $G_{ij}$ are the only unknown terms.
So we can rearrange the equations above as a linear system:
<center>$
{{\left[ \begin{matrix}
{{A}_{ij}} \\
{{B}_{ij}} \\
\end{matrix} \right]}_{N\times N}}{{\left[ \begin{matrix}
{{Q}_{j}} \\
{{Q}_{j}} \\
\end{matrix} \right]}_{N\times 1}}={{\left[ \begin{matrix}
-{{u}_{i}}\sin {{\alpha }_{i}}+{{v}_{i}}\cos {{\alpha }_{i}} \\
{{P}_{i}} \\
\end{matrix} \right]}_{N\times 1}}
$</center>
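Once the strengths Q have been solved for (see the code preceding this explanation), a couple of one-line diagnostics help confirm that the assembled system is well behaved. This is an optional sketch, assuming the A, b, Q and panels objects built in that code.
# Hypothetical post-solve diagnostics for the boundary-element linear system.
print("residual |AQ - b| =", numpy.linalg.norm(A.dot(Q) - b))   # should be near machine precision
print("condition number of A =", numpy.linalg.cond(A))          # very large values signal an ill-posed system
print("sum of panel strengths =", sum(panel.Q for panel in panels))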
End of explanation
#Visulize the pressure and velocity field
#Define meshgrid
Nx, Ny = 50, 50 # number of points in the x and y directions
x_start, x_end = -0.01, 1.01 # x-direction boundaries
y_start, y_end = -0.01, 1.01 # y-direction boundaries
x = numpy.linspace(x_start, x_end, Nx) # computes a 1D-array for x
y = numpy.linspace(y_start, y_end, Ny) # computes a 1D-array for y
X, Y = numpy.meshgrid(x, y) # generates a mesh grid
#Calculate the velocity and pressure field
p = numpy.empty((Nx, Ny), dtype=float)
u = numpy.empty((Nx, Ny), dtype=float)
v = numpy.empty((Nx, Ny), dtype=float)
#for i, panel in enumerate(panels):
#panel.Q = 0.
#panels[0].Q=100
#panels[5].Q=100
#Qwell_1=400
for i in range(Nx):
for j in range(Ny):
p[i,j] =sum([p.Q*InflueceP(X[i,j], Y[i,j], p) for p in panels])+Qwell_1*InflueceP_W(X[i,j], Y[i,j], wells[0])
u[i,j] =sum([p.Q*InflueceU(X[i,j], Y[i,j], p) for p in panels])+Qwell_1*InflueceU_W(X[i,j], Y[i,j], wells[0])
v[i,j] =sum([p.Q*InflueceV(X[i,j], Y[i,j], p) for p in panels])+Qwell_1*InflueceV_W(X[i,j], Y[i,j], wells[0])
#p[i,j] =sum([p.Q*InflueceP(X[i,j], Y[i,j], p) for p in panels])
#u[i,j] =sum([p.Q*InflueceU(X[i,j], Y[i,j], p) for p in panels])
#v[i,j] =sum([p.Q*InflueceV(X[i,j], Y[i,j], p) for p in panels])
#p[i,j] =Qwell_1*InflueceP_W(X[i,j], Y[i,j], wells[0])
#u[i,j] =Qwell_1*InflueceU_W(X[i,j], Y[i,j], wells[0])
#v[i,j] =Qwell_1*InflueceV_W(X[i,j], Y[i,j], wells[0])
# plots the streamlines
%matplotlib inline
size = 6
pyplot.figure(figsize=(size, size))
pyplot.grid(True)
pyplot.title('Streamline field')
pyplot.xlabel('x', fontsize=16)
pyplot.ylabel('y', fontsize=16)
pyplot.xlim(-0.2, 1.2)
pyplot.ylim(-0.2, 1.2)
pyplot.plot(numpy.append([panel.xa for panel in panels], panels[0].xa),
numpy.append([panel.ya for panel in panels], panels[0].ya),
linestyle='-', linewidth=1, marker='o', markersize=6, color='#CD2305');
stream =pyplot.streamplot(X, Y, u, v,density=2, linewidth=1, arrowsize=1, arrowstyle='->') #streamline
#cbar=pyplot.colorbar(orientation='vertical')
#equipotential=pyplot.contourf(X, Y, p1, extend='both')
size = 7
pyplot.figure(figsize=(size, size-1))
pyplot.title('Pressure field')
pyplot.xlabel('x', fontsize=16)
pyplot.ylabel('y', fontsize=16)
pyplot.xlim(0, 1)
pyplot.ylim(0, 1)
pyplot.contour(X, Y, p, 15, linewidths=0.5, colors='k')
pyplot.contourf(X, Y, p, 15, cmap='rainbow',
vmax=abs(p).max(), vmin=-abs(p).max())
pyplot.colorbar() # draw colorbar
size = 7
pyplot.figure(figsize=(size, size-1))
pyplot.title('Total Velocity field')
pyplot.xlabel('x', fontsize=16)
pyplot.ylabel('y', fontsize=16)
pyplot.xlim(0, 1)
pyplot.ylim(0, 1)
Vtotal= numpy.sqrt(u**2+v**2)
#Vtotal= numpy.abs(v)
pyplot.contour(X, Y, Vtotal, 15, linewidths=0.5, colors='k')
pyplot.contourf(X, Y, Vtotal, 15, cmap='rainbow')
#vmax=50, vmin=0)
pyplot.colorbar() # draw colorbar
pyplot.title('Darcy velocity on the outflow boundary, x component (ft/day)')
pyplot.xlabel('x', fontsize=16)
pyplot.ylabel('y', fontsize=16)
pyplot.plot(y, u[49,:], '--', linewidth=2)
pyplot.plot(9.8425+y, u[:,49], '--', linewidth=2)
u[:,49]
pyplot.title('Darcy velocity on the outflow boundary, y component (ft/day)')
pyplot.plot(y, v[:,49], '--', linewidth=2)
pyplot.plot(9.8425+y, v[49,:], '--', linewidth=2)
v[49,:]
Explanation: Plot results
End of explanation |
751 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 2 (DUE
Step1: Question 2
For each of the following first-difference processes, compute the values of $y$ from $t=0$ through $t = 12$. For each, assume that $y_0 = 0$.
$y_t = 1 + 0.5y_{t-1}$
$y_t = 0.5y_{t-1}$
$y_t = -1 + 0.5y_{t-1}$
Plot the simulated values for each process on the same axes and be sure to include a legend. Set the $y$-axis limits to $[-3,3]$.
Step2: Question 3
Download a file called Econ129_US_Production_A_Data.csv from the link "Production data for the US" under the "Data" section on the course website. The file contains annual production data for the US economy including output, consumption, investment, and labor hours, among others. The capital stock of the US is only given for 1948. Import the data into a Pandas DataFrame and do the following
Step3: Question 4
Step4: Question 5
Recall the Solow growth model with exogenous growth in labor and TFP | Python Code:
# Question 1
T = 20
w = np.zeros(T)
w[0] = 1
def firstDiff(mu,rho,w,y0,T):
y = np.zeros(T+1)
y[0] = y0
for t in range(T):
y[t+1] = (1-rho)*mu + rho*y[t] + w[t]
return y
y1 = firstDiff(mu=0,rho=0.99,w=w,y0=0,T=T)
y2 = firstDiff(mu=0,rho=1,w=w,y0=0,T=T)
y3 = firstDiff(mu=0,rho=1.01,w=w,y0=0,T=T)
plt.plot(y1,lw=3,alpha = 0.65,label = '$\\rho = 0.99$')
plt.plot(y2,lw=3,alpha = 0.65,label = '$\\rho = 1$')
plt.plot(y3,lw=3,alpha = 0.65,label = '$\\rho = 1.01$')
plt.legend(loc='lower right')
plt.grid()
plt.title('Three first difference processes')
Explanation: Homework 2 (DUE: Thursday February 16)
Instructions: Complete the instructions in this notebook. You may work together with other students in the class and you may take full advantage of any internet resources available. You must provide thorough comments in your code so that it's clear that you understand what your code is doing and so that your code is readable.
Submit the assignment by saving your notebook as an html file (File -> Download as -> HTML) and uploading it to the appropriate Dropbox folder on EEE.
Question 1
For each of the following first-difference processes, compute the values of $y$ from $t=0$ through $t = 20$. For each, assume that $y_0 = 0$, $w_1 = 1$, and $w_2 = w_3 = \cdots w_T = 0$.
$y_t = 0.99y_{t-1} + w_t$
$y_t = y_{t-1} + w_t$
$y_t = 1.01y_{t-1} + w_t$
Plot the simulated values for each process on the same axes and be sure to include a legend.
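One optional way to check the simulation (not required by the assignment): with $y_0 = 0$, $w_1 = 1$ and all later shocks equal to zero, the process $y_t = \rho y_{t-1} + w_t$ has the closed-form solution $y_t = \rho^{t-1}$ for $t \geq 1$. A minimal sketch, assuming numpy is imported as np and that y1 still holds the $\rho = 0.99$ series from the cell above (it is reassigned for Question 2 later):
# Optional closed-form check of the rho = 0.99 simulation above.
t = np.arange(1, len(y1))
print(np.allclose(y1[1:], 0.99**(t - 1)))   # should print True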
End of explanation
# Question 2
T = 12
w = np.zeros(T)
y1 = firstDiff(mu=1/(1-0.5),rho=0.5,w=w,y0=0,T=T)
y2 = firstDiff(mu=0,rho=0.5,w=w,y0=0,T=T)
y3 = firstDiff(mu=-1/(1-0.5),rho=0.5,w=w,y0=0,T=T)
plt.plot(y1,lw=3,alpha = 0.65,label = '$y_t = 1 + 0.5y_{t-1}$')   # mu = 2, so (1-rho)*mu = 1
plt.plot(y2,lw=3,alpha = 0.65,label = '$y_t = 0.5y_{t-1}$')
plt.plot(y3,lw=3,alpha = 0.65,label = '$y_t = -1 + 0.5y_{t-1}$')  # mu = -2, so (1-rho)*mu = -1
plt.legend(ncol=3)
plt.grid()
plt.ylim([-3,3])
plt.title('Three first difference processes')
Explanation: Question 2
For each of the following first-difference processes, compute the values of $y$ from $t=0$ through $t = 12$. For each, assume that $y_0 = 0$.
$y_t = 1 + 0.5y_{t-1}$
$y_t = 0.5y_{t-1}$
$y_t = -1 + 0.5y_{t-1}$
Plot the simulated values for each process on the same axes and be sure to include a legend. Set the $y$-axis limits to $[-3,3]$.
End of explanation
# Question 3.1
df = pd.read_csv('Econ129_US_Production_A_Data.csv',index_col=0)
delta = 0.0375
T = len(df.index)
capital = df['Capital [Bil. of 2009 Dollars]'].values
investment = df['Investment [Bil. of 2009 Dollars]'].values
for t in range(T-1):
capital[t+1] = investment[t] + (1-delta)*capital[t]
df['Capital [Bil. of 2009 Dollars]'] = capital
df['Capital [Bil. of 2009 Dollars]'].plot(lw=3,alpha=0.65,grid=True)
plt.title('US capital stock')
plt.ylabel('[Bil. of 2009 Dollars]')
# Question 3.2
df['Output per worker'] = df['Output [Bil. of 2009 Dollars]']/df['Labor [Mil. of Hours]']
df['Capital per worker'] = df['Capital [Bil. of 2009 Dollars]']/df['Labor [Mil. of Hours]']
print(df.head())
# Question 3.3
T = len(df.index) - 1   # number of periods between the first and last observation
gy = (df['Output per worker'].iloc[-1]/df['Output per worker'].iloc[0])**(1/T)-1
gk = (df['Capital per worker'].iloc[-1]/df['Capital per worker'].iloc[0])**(1/T)-1
print('Average growth rate of output per worker: ',round(gy,5))
print('Average growth rate of capital per worker:',round(gk,5))
Explanation: Question 3
Download a file called Econ129_US_Production_A_Data.csv from the link "Production data for the US" under the "Data" section on the course website. The file contains annual production data for the US economy including output, consumption, investment, and labor hours, among others. The capital stock of the US is only given for 1948. Import the data into a Pandas DataFrame and do the following:
Suppose that the depreciation rate for the US is $\delta = 0.0375$. Use the capital accumulation equation $K_{t+1} = I_t + (1-\delta)K_t$ to fill in the missing values for the capital column. Construct a plot of the computed capital stock.
Add columns to your DataFrame equal to capital per worker and output per worker by dividing the capital and output columns by the labor column. Print the first five rows of the DataFrame.
Print the average annual growth rates of capital per worker and output per worker for the US.
Recall that the average annual growth rate of a quantity $y$ from date $0$ to date $T$ is:
\begin{align}
g & = \left(\frac{y_T}{y_0}\right)^{\frac{1}{T}}-1
\end{align}
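As a worked illustration of this formula (hypothetical numbers, not taken from the data set): if a quantity doubles over $T = 35$ years, the average annual growth rate is $g = 2^{1/35} - 1 \approx 0.020$, i.e. roughly 2 percent per year.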
End of explanation
# Initialize parameters for the simulation (A, s, T, delta, alpha, g, n, K0, A0, L0)
K0 = 2
A0 = 1
L0 = 1
T= 100
A= 10
alpha = 0.35
delta = 0.1
s = 0.15
g = 0.015
n = 0.01
# Initialize a variable called tfp as a (T+1)x1 array of zeros and set first value to A0
tfp = np.zeros(T+1)
tfp[0] = A0
# Compute all subsequent tfp values by iterating over t from 0 through T
for t in np.arange(T):
tfp[t+1] = (1+g)*tfp[t]
# Plot the simulated tfp series
plt.plot(tfp,lw=3)
plt.grid()
plt.title('TFP')
# Initialize a variable called labor as a (T+1)x1 array of zeros and set first value to L0
labor = np.zeros(T+1)
labor[0] = L0
# Compute all subsequent labor values by iterating over t from 0 through T
for t in np.arange(T):
labor[t+1] = (1+n)*labor[t]
# Plot the simulated labor series
plt.plot(labor,lw=3)
plt.grid()
plt.title('Labor')
# Initialize a variable called capital as a (T+1)x1 array of zeros and set first value to K0
capital = np.zeros(T+1)
capital[0] = K0
# Compute all subsequent capital values by iterating over t from 0 through T
for t in np.arange(T):
capital[t+1] = s*tfp[t]*capital[t]**alpha*labor[t]**(1-alpha) + (1-delta)*capital[t]
# Plot the simulated capital series
plt.plot(capital,lw=3)
plt.grid()
plt.title('Capital')
# Store the simulated capital, labor, and tfp data in a pandas DataFrame called data
data = pd.DataFrame({'capital':capital,'labor':labor,'TFP':tfp})
# Print the first 5 rows of the DataFrame
print(data.head())
# Create columns in the DataFrame to store computed values of the other endogenous variables: Y, C, and I
data['output'] = data['TFP']*data['capital']**alpha*data['labor']**(1-alpha)
data['consumption'] = (1-s)*data['output']
data['investment'] = data['output'] - data['consumption']
# Print the first five rows of the DataFrame
print(data.head())
# Create columns in the DataFrame to store capital per worker, output per worker, consumption per worker, and investment per worker
data['capital_pw'] = data['capital']/data['labor']
data['output_pw'] = data['output']/data['labor']
data['consumption_pw'] = data['consumption']/data['labor']
data['investment_pw'] = data['investment']/data['labor']
# Print the first five rows of the DataFrame
print(data.head())
# Create a 2x2 grid of plots of capital, output, consumption, and investment
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(2,2,1)
ax.plot(data['capital'],lw=3)
ax.grid()
ax.set_title('Capital')
ax = fig.add_subplot(2,2,2)
ax.plot(data['output'],lw=3)
ax.grid()
ax.set_title('Output')
ax = fig.add_subplot(2,2,3)
ax.plot(data['consumption'],lw=3)
ax.grid()
ax.set_title('Consumption')
ax = fig.add_subplot(2,2,4)
ax.plot(data['investment'],lw=3)
ax.grid()
ax.set_title('Investment')
# Create a 2x2 grid of plots of capital per worker, output per worker, consumption per worker, and investment per worker
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(2,2,1)
ax.plot(data['capital_pw'],lw=3)
ax.grid()
ax.set_title('Capital per worker')
ax = fig.add_subplot(2,2,2)
ax.plot(data['output_pw'],lw=3)
ax.grid()
ax.set_title('Output per worker')
ax = fig.add_subplot(2,2,3)
ax.plot(data['consumption_pw'],lw=3)
ax.grid()
ax.set_title('Consumption per worker')
ax = fig.add_subplot(2,2,4)
ax.plot(data['investment_pw'],lw=3)
ax.grid()
ax.set_title('Investment per worker')
Explanation: Question 4: The Solow model with exogenous population and TFP growth
Suppose that the aggregate production function is given by:
\begin{align}
Y_t & = A_tK_t^{\alpha} L_t^{1-\alpha}, \tag{1}
\end{align}
where $Y_t$ denotes output, $K_t$ denotes the capital stock, $L_t$ denotes the labor supply, and $A_t$ denotes total factor productivity $TFP$. $\alpha$ is a constant.
The supply of labor grows at an exogenously determined rate $n$ and so its value is determined recursively by a first-order difference equation:
\begin{align}
L_{t+1} & = (1+n) L_t. \tag{2}
\end{align}
Likewise, TFP grows at an exogenously determined rate $g$:
\begin{align}
A_{t+1} & = (1+g) A_t. \tag{3}
\end{align}
The rest of the economy is characterized by the same equations as before:
\begin{align}
C_t & = (1-s)Y_t \tag{4}\\
Y_t & = C_t + I_t \tag{5}\\
K_{t+1} & = I_t + ( 1- \delta)K_t. \tag{6}
\end{align}
Equation (4) is the consumption function where $s$ denotes the exogenously given saving rate. Equation (5) is the aggregate market clearing condition. Finally, Equation (6) is the capital evolution equation specifying that capital in year $t+1$ is the sum of newly created capital $I_t$ and the capital stock from year $t$ that has not depreciated $(1-\delta)K_t$.
Combine Equations (1) and (4) through (6) to eliminate $C_t$, $I_t$, and $Y_t$ and obtain a recurrence relation specifying $K_{t+1}$ as a function of $K_t$, $A_t$, and $L_t$:
\begin{align}
K_{t+1} & = sA_tK_t^{\alpha}L_t^{1-\alpha} + ( 1- \delta)K_t \tag{7}
\end{align}
Given initial values for capital and labor, Equations (2), (3), and (7) can be iterated on to compute the values of the capital stock and labor supply at some future date $T$. Furthermore, the values of consumption, output, and investment at date $T$ can also be computed using Equations (1), (4), (5), and (6).
Simulation
Simulate the Solow growth model with exogenous labor growth for $t=0\ldots 100$. For the simulation, assume the following values of the parameters:
\begin{align}
A & = 10\\
\alpha & = 0.35\\
s & = 0.15\\
\delta & = 0.1\\
g & = 0.015 \\
n & = 0.01
\end{align}
Furthermore, suppose that the initial values of capital and labor are:
\begin{align}
K_0 & = 2\\
A_0 & = 1\\
L_0 & = 1
\end{align}
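An optional check on the simulation (a sketch, not part of the assignment): along the balanced growth path of this model, output per worker grows at roughly $(1+g)^{1/(1-\alpha)} - 1$ per period, so the realized growth rate over the last few periods of the simulated data DataFrame built above should be close to that value.
# Optional balanced-growth-path check, assuming the DataFrame `data` built above.
bgp_growth = (1 + 0.015)**(1/(1 - 0.35)) - 1
realized = (data['output_pw'].iloc[-1]/data['output_pw'].iloc[-11])**(1/10) - 1
print('theoretical BGP growth of output per worker:', round(bgp_growth, 4))
print('realized growth over the last 10 periods:  ', round(realized, 4))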
End of explanation
# Question 5.1
def solow_sim(alpha,delta,s,g,n,A0,K0,L0,T):
'''Returns DataFrame with simulated values for a Solow model with labor growth and TFP growth'''
# Initialize a variable called tfp as a (T+1)x1 array of zeros and set first value to A0
tfp = np.zeros(T+1)
tfp[0] = A0
# Initialize a variable called capital as a (T+1)x1 array of zeros and set first value to k0
capital = np.zeros(T+1)
capital[0] = K0
# Initialize a variable called labor as a (T+1)x1 array of zeros and set first value to l0
labor = np.zeros(T+1)
labor[0] = L0
# Compute all capital and labor values by iterating over t from 0 through T
for t in np.arange(T):
labor[t+1] = (1+n)*labor[t]
tfp[t+1] = (1+g)*tfp[t]
capital[t+1] = s*tfp[t]*capital[t]**alpha*labor[t]**(1-alpha) + (1-delta)*capital[t]
# Store the simulated capital df in a pandas DataFrame called data
df = pd.DataFrame({'capital':capital,'labor':labor,'tfp':tfp})
# Create columns in the DataFrame to store computed values of the other endogenous variables
df['output'] = df['tfp']*df['capital']**alpha*df['labor']**(1-alpha)
df['consumption'] = (1-s)*df['output']
df['investment'] = df['output'] - df['consumption']
# Create columns in the DataFrame to store capital per worker, output per worker, consumption per worker, and investment per worker
df['capital_pw'] = df['capital']/df['labor']
df['output_pw'] = df['output']/df['labor']
df['consumption_pw'] = df['consumption']/df['labor']
df['investment_pw'] = df['investment']/df['labor']
return df
westeros = solow_sim(alpha=0.35,delta=0.1,s=0.15,g=0.03,n=0.01,A0=10,K0=20,L0=1,T=100)
essos = solow_sim(alpha=0.35,delta=0.1,s=0.15,g=0.01,n=0.01,A0=10,K0=20,L0=1,T=100)
for t in range(200):
if westeros['output_pw'].iloc[t]>=2*essos['output_pw'].iloc[t]:
print(t)
break
print(westeros[westeros['output_pw']>=2*essos['output_pw']].index[0])
# Question 5.2
westeros['output_pw'].plot(lw=3,alpha = 0.65,label='Westeros')
essos['output_pw'].plot(lw=3,alpha = 0.65,label='Essos')
plt.grid()
plt.title('Output per worker in Westeros and Essos')
plt.legend(loc='upper left')
Explanation: Question 5
Recall the Solow growth model with exogenous growth in labor and TFP:
\begin{align}
Y_t & = A_tK_t^{\alpha} L_t^{1-\alpha}, \tag{1}\\
C_t & = (1-s)Y_t \tag{2}\\
Y_t & = C_t + I_t \tag{3}\\
K_{t+1} & = I_t + ( 1- \delta)K_t \tag{4}\\
L_{t+1} & = (1+n) L_t \tag{5} \\
A_{t+1} & = (1+g) A_t. \tag{6}
\end{align}
Suppose that two countries called Westeros and Essos are identical except that TFP in Westeros grows faster than in Essos. Specifically:
\begin{align}
g_{Westeros} & = 0.03\\
g_{Essos} & = 0.01
\end{align}
Otherwise, the parameters for each economy are the same including the initial values of capital, labor, and TFP:
\begin{align}
\alpha & = 0.35\\
s & = 0.15\\
\delta & = 0.1\\
n & = 0.01\\
K_0 & = 20\\
A_0 & = 10\\
L_0 & = 1
\end{align}
Do the following:
Find the date (value for $t$) at which output per worker in Westeros becomes at least twice as large as output per worker in Essos. Print the value for t and the values of output per worker for each country.
On a single set of axes, plot simulated values of output per worker for each country for t = $1, 2, \ldots 100$.
Hint: Copy into this notebook the function that simulates the Solow model with exogenous labor growth from the end of the Notebook from Class 9. Modify the function to fit this problem.
End of explanation |
752 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In the tutorial, you learned about human-centered design (HCD) and became familiar with six general steps to apply it to AI systems. In this exercise, you will identify and address design issues in six interesting AI use cases.
Introduction
Begin by running the next code cell.
- Click inside the code cell.
- Click on the triangle (in the shape of a "Play button") that appears to the left of the code cell.
- If your code is run successfully, you will see Setup Complete as output below the cell.
Step1: 1) Reducing plastic waste
A Cambodian organization wants to help reduce the significant amounts of plastic waste that pollute the Mekong River System. Which of the following would be an appropriate way to start? (Your answer might use more than one option.)
Watch the people currently addressing the problem as they navigate existing tools and processes.
Conduct individual interviews with the people currently addressing the problem.
Assemble focus groups that consist of people currently addressing the problem.
After you have answered the question, view the official solution by running the code cell below.
Step2: 2) Detecting breast cancer
Pathologists try to detect breast cancer by examining cells on tissue slides under microscopes. This tiring and repetitive work requires an expert eye. Your team wants to create a technology solution that helps pathologists with this task in real-time, using a camera. However, due to the complexity of the work, your team has not found rule-based systems to be capable of adding value to the review of images.
Would AI add value to a potential solution? Why or why not?
Step3: 3) Flagging suspicious activity
A bank is using AI to flag suspicious international money transfers for potential money laundering, anti-terrorist financing or sanctions concerns. Though the system has proven more effective than the bank’s current processes, it still frequently flags legitimate transactions for review.
What are some potential harms that the system could cause, and how can the bank reduce the impacts of these potential harms?
Step4: 4) Prototyping a chatbot
During an ongoing pandemic outbreak, a country’s public health agency is facing a large volume of phone calls and e-mails from people looking for health information. The agency has determined that an AI-powered interactive chatbot that answers pandemic-related questions would help people get the specific information they want quickly, while reducing the burden on the agency’s employees. How should the agency start prototyping the chatbot?
- Build out the AI solution to the best of its ability before testing it with a diverse group of potential users.
- Build a non-AI prototype quickly and start testing it with a diverse group of potential users.
Step5: 5) Detecting misinformation
A social media platform is planning to deploy a new AI system to flag and remove social media messages containing misinformation. Though the system has proven effective in tests, it sometimes flags non-objectionable content as misinformation.
What are some ways in which the social media platform could allow someone whose message has been flagged to contest the misinformation designation?
Step6: 6) Improving autonomous vehicles
What are some of the ways to improve the safety of autonomous vehicles? (You might pick more than one option.)
- Incorporate the safety features of regular vehicles.
- Test the system in a variety of environments.
- Hire an internal ‘red team’ to play the role of bad actors seeking to manipulate the autonomous driving system. Strengthen the system against the team’s attacks on an ongoing basis. | Python Code:
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.ethics.ex2 import *
print("Setup Complete")
Explanation: In the tutorial, you learned about human-centered design (HCD) and became familiar with six general steps to apply it to AI systems. In this exercise, you will identify and address design issues in six interesting AI use cases.
Introduction
Begin by running the next code cell.
- Click inside the code cell.
- Click on the triangle (in the shape of a "Play button") that appears to the left of the code cell.
- If your code is run successfully, you will see Setup Complete as output below the cell.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_1.check()
Explanation: 1) Reducing plastic waste
A Cambodian organization wants to help reduce the significant amounts of plastic waste that pollute the Mekong River System. Which of the following would be an appropriate way to start? (Your answer might use more than one option.)
Watch the people currently addressing the problem as they navigate existing tools and processes.
Conduct individual interviews with the people currently addressing the problem.
Assemble focus groups that consist of people currently addressing the problem.
After you have answered the question, view the official solution by running the code cell below.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_2.check()
Explanation: 2) Detecting breast cancer
Pathologists try to detect breast cancer by examining cells on tissue slides under microscopes. This tiring and repetitive work requires an expert eye. Your team wants to create a technology solution that helps pathologists with this task in real-time, using a camera. However, due to the complexity of the work, your team has not found rule-based systems to be capable of adding value to the review of images.
Would AI add value to a potential solution? Why or why not?
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_3.check()
Explanation: 3) Flagging suspicious activity
A bank is using AI to flag suspicious international money transfers for potential money laundering, anti-terrorist financing or sanctions concerns. Though the system has proven more effective than the bank’s current processes, it still frequently flags legitimate transactions for review.
What are some potential harms that the system could cause, and how can the bank reduce the impacts of these potential harms?
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_4.check()
Explanation: 4) Prototyping a chatbot
During an ongoing pandemic outbreak, a country’s public health agency is facing a large volume of phone calls and e-mails from people looking for health information. The agency has determined that an AI-powered interactive chatbot that answers pandemic-related questions would help people get the specific information they want quickly, while reducing the burden on the agency’s employees. How should the agency start prototyping the chatbot?
- Build out the AI solution to the best of its ability before testing it with a diverse group of potential users.
- Build a non-AI prototype quickly and start testing it with a diverse group of potential users.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_5.check()
Explanation: 5) Detecting misinformation
A social media platform is planning to deploy a new AI system to flag and remove social media messages containing misinformation. Though the system has proven effective in tests, it sometimes flags non-objectionable content as misinformation.
What are some ways in which the social media platform could allow someone whose message has been flagged to contest the misinformation designation?
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_6.check()
Explanation: 6) Improving autonomous vehicles
What are some of the ways to improve the safety of autonomous vehicles? (You might pick more than one option.)
- Incorporate the safety features of regular vehicles.
- Test the system in a variety of environments.
- Hire an internal ‘red team’ to play the role of bad actors seeking to manipulate the autonomous driving system. Strengthen the system against the team’s attacks on an ongoing basis.
End of explanation |
753 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Tutorial On Simple Linear Model
Introduction
This tutorial demonstrates the basic workflow of using TensorFlow with a simple linear model. After loading the MNIST data-set with images of hand-written digits, we define and optimize a simple mathematical model in TensorFlow.
Imports
Step1: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version
Step2: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
Step3: The MNIST data-set has now been loaded and consists of 70.000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
Step4: One-Hot Encoding
The data-set has been loaded as so-called One-Hot encoding. This means the labels have been converted from a single number to a vector whose length equals the number of possible classes. All elements of the vector are zero except for the $i$'th element which is one and means the class is $i$. For example, the One-Hot encoded labels for the first 5 images in the test-set are
Step5: We also need the classes as single numbers for various comparisons and performance measures, so we convert the One-Hot encoded vectors to a single number by taking the index of the highest element. Note that the word 'class' is a keyword used in Python so we need to use the name 'cls' instead.
Step6: We can now see the class for the first five images in the test-set. Compare these to the One-Hot encoded vectors above. For example, the class for the first image is 7, which corresponds to a One-Hot encoded vector where all elements are zero except for the element with index 7.
Step7: Data dimensions
The data dimensions are used in several places in the source-code below. In computer programming it is generally best to use variables and constants rather than having to hard-code specific numbers every time that number is used. This means the numbers only have to be changed in one single place. Ideally these would be inferred from the data that has been read, but here we just write the numbers.
Step8: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
Step9: Plot a few images to see if data is correct
Step10: TensorFlow Graph
The entire purpose of TensorFlow is to have a computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below
Step11: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step12: Finally we have the placeholder variable for the true class of each image in the placeholder variable x. These are integers and the dimensionality of this placeholder variable is set to [None] which means the placeholder variable is a one-dimensional vector of arbitrary length.
Step13: Variables to be optimized
Apart from the placeholder variables that were defined above and which serve as feeding input data into the model, there are also some model variables that must be changed by TensorFlow so as to make the model perform better on the training data.
The first variable that must be optimized is called weights and is defined here as a TensorFlow variable that must be initialized with zeros and whose shape is [img_size_flat, num_classes], so it is a 2-dimensional tensor (or matrix) with img_size_flat rows and num_classes columns.
Step14: The second variable that must be optimized is called biases and is defined as a 1-dimensional tensor (or vector) of length num_classes.
Step15: Model
This simple mathematical model multiplies the images in the placeholder variable x with the weights and then adds the biases.
The result is a matrix of shape [num_images, num_classes] because x has shape [num_images, img_size_flat] and weights has shape [img_size_flat, num_classes], so the multiplication of those two matrices is a matrix with shape [num_images, num_classes] and then the biases vector is added to each row of that matrix.
Note that the name logits is typical TensorFlow terminology, but other people may call the variable something else.
Step16: Now logits is a matrix with num_images rows and num_classes columns, where the element of the $i$'th row and $j$'th column is an estimate of how likely the $i$'th input image is to be of the $j$'th class.
However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each row of the logits matrix sums to one, and each element is limited between zero and one. This is calculated using the so-called softmax function and the result is stored in y_pred.
Step17: The predicted class can be calculated from the y_pred matrix by taking the index of the largest element in each row.
Step18: Cost-function to be optimized
To make the model better at classifying the input images, we must somehow change the variables for weights and biases. To do this we first need to know how well the model currently performs by comparing the predicted output of the model y_pred to the desired output y_true.
The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the weights and biases of the model.
TensorFlow has a built-in function for calculating the cross-entropy. Note that it uses the values of the logits because it also calculates the softmax internally.
Step19: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
Step20: Optimization method
Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the basic form of Gradient Descent where the step-size is set to 0.5.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
Step21: Performance measures
We need a few more performance measures to display the progress to the user.
This is a vector of booleans whether the predicted class equals the true class of each image.
Step22: This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
Step23: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
Step24: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them.
Step25: Helper-function to perform optimization iterations
There are 50.000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore use Stochastic Gradient Descent which only uses a small batch of images in each iteration of the optimizer.
Step26: Function for performing a number of optimization iterations so as to gradually improve the weights and biases of the model. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples.
Step27: Helper-functions to show performance
Dict with the test-set data to be used as input to the TensorFlow graph. Note that we must use the correct names for the placeholder variables in the TensorFlow graph.
Step28: Function for printing the classification accuracy on the test-set.
Step29: Function for printing and plotting the confusion matrix using scikit-learn.
Step30: Function for plotting examples of images from the test-set that have been mis-classified.
Step31: Helper-function to plot the model weights
Function for plotting the weights of the model. 10 images are plotted, one for each digit that the model is trained to recognize.
Step32: Performance before any optimization
The accuracy on the test-set is 9.8%. This is because the model has only been initialized and not optimized at all, so it always predicts that the image shows a zero digit, as demonstrated in the plot below, and it turns out that 9.8% of the images in the test-set happens to be zero digits.
Step33: Performance after 1 optimization iteration
Already after a single optimization iteration, the model has increased its accuracy on the test-set to 40.7% up from 9.8%. This means that it mis-classifies the images about 6 out of 10 times, as demonstrated on a few examples below.
Step34: The weights can also be plotted as shown below. Positive weights are red and negative weights are blue. These weights can be intuitively understood as image-filters.
For example, the weights used to determine if an image shows a zero-digit have a positive reaction (red) to an image of a circle, and have a negative reaction (blue) to images with content in the centre of the circle.
Similarly, the weights used to determine if an image shows a one-digit react positively (red) to a vertical line in the centre of the image, and react negatively (blue) to images with content surrounding that line.
Note that the weights mostly look like the digits they're supposed to recognize. This is because only one optimization iteration has been performed so the weights are only trained on 100 images. After training on several thousand images, the weights become more difficult to interpret because they have to recognize many variations of how digits can be written.
Step35: Performance after 10 optimization iterations
Step36: Performance after 1000 optimization iterations
After 1000 optimization iterations, the model only mis-classifies about one in ten images. As demonstrated below, some of the mis-classifications are justified because the images are very hard to determine with certainty even for humans, while others are quite obvious and should have been classified correctly by a good model. But this simple model cannot reach much better performance and more complex models are therefore needed.
Step37: The model has now been trained for 1000 optimization iterations, with each iteration using 100 images from the training-set. Because of the great variety of the images, the weights have now become difficult to interpret and we may doubt whether the model truly understands how digits are composed from lines, or whether the model has just memorized many different variations of pixels.
Step38: We can also print and plot the so-called confusion matrix which lets us see more details about the mis-classifications. For example, it shows that images actually depicting a 5 have sometimes been mis-classified as all other possible digits, but mostly either 3, 6 or 8.
Step39: We are now done using TensorFlow, so we close the session to release its resources. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
Explanation: TensorFlow Tutorial On Simple Linear Model
Introduction
This tutorial demonstrates the basic workflow of using TensorFlow with a simple linear model. After loading the MNIST data-set with images of hand-written digits, we define and optimize a simple mathematical model in TensorFlow.
Imports
End of explanation
tf.__version__
Explanation: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets("data/MNIST/", one_hot=True)
Explanation: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
End of explanation
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
Explanation: The MNIST data-set has now been loaded and consists of 70.000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
End of explanation
data.test.labels[0:5, :]
Explanation: One-Hot Encoding
The data-set has been loaded as so-called One-Hot encoding. This means the labels have been converted from a single number to a vector whose length equals the number of possible classes. All elements of the vector are zero except for the $i$'th element which is one and means the class is $i$. For example, the One-Hot encoded labels for the first 5 images in the test-set are:
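If you ever need to build such an encoding yourself, one common NumPy idiom (shown here as an aside with illustrative labels, not something the tutorial relies on) is to index into an identity matrix:
# Aside: manual One-Hot encoding of integer class labels with NumPy.
classes = np.array([7, 2, 1])            # illustrative labels
one_hot = np.eye(10)[classes]            # each row is the One-Hot vector for one label
print(one_hot)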
End of explanation
data.test.cls = np.array([label.argmax() for label in data.test.labels])
Explanation: We also need the classes as single numbers for various comparisons and performance measures, so we convert the One-Hot encoded vectors to a single number by taking the index of the highest element. Note that the word 'class' is a keyword used in Python so we need to use the name 'cls' instead.
End of explanation
data.test.cls[0:5]
Explanation: We can now see the class for the first five images in the test-set. Compare these to the One-Hot encoded vectors above. For example, the class for the first image is 7, which corresponds to a One-Hot encoded vector where all elements are zero except for the element with index 7.
End of explanation
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of classes, one class for each of 10 digits.
num_classes = 10
Explanation: Data dimensions
The data dimensions are used in several places in the source-code below. In computer programming it is generally best to use variables and constants rather than having to hard-code specific numbers every time that number is used. This means the numbers only have to be changed in one single place. Ideally these would be inferred from the data that has been read, but here we just write the numbers.
End of explanation
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
Explanation: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
End of explanation
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
Explanation: Plot a few images to see if data is correct
End of explanation
x = tf.placeholder(tf.float32, [None, img_size_flat])
Explanation: TensorFlow Graph
The entire purpose of TensorFlow is to have a computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below:
Placeholder variables used to change the input to the graph.
Model variables that are going to be optimized so as to make the model perform better.
The model which is essentially just a mathematical function that calculates some output given the input in the placeholder variables and using the model variables.
A cost measure that can be used to guide the optimization of the variables.
An optimization method which updates the variables of the model.
In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard. We will cover this in upcoming tutorials.
Placeholder variables
Placeholder variables serve as the input to the graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a tensor, which just means that it is a multi-dimensional vector or matrix. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.
End of explanation
y_true = tf.placeholder(tf.float32, [None, num_classes])
Explanation: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
End of explanation
y_true_cls = tf.placeholder(tf.int64, [None])
Explanation: Finally we have the placeholder variable for the true class of each image in the placeholder variable x. These are integers and the dimensionality of this placeholder variable is set to [None] which means the placeholder variable is a one-dimensional vector of arbitrary length.
End of explanation
weights = tf.Variable(tf.zeros([img_size_flat, num_classes]))
Explanation: Variables to be optimized
Apart from the placeholder variables that were defined above and which serve as feeding input data into the model, there are also some model variables that must be changed by TensorFlow so as to make the model perform better on the training data.
The first variable that must be optimized is called weights and is defined here as a TensorFlow variable that must be initialized with zeros and whose shape is [img_size_flat, num_classes], so it is a 2-dimensional tensor (or matrix) with img_size_flat rows and num_classes columns.
End of explanation
biases = tf.Variable(tf.zeros([num_classes]))
Explanation: The second variable that must be optimized is called biases and is defined as a 1-dimensional tensor (or vector) of length num_classes.
End of explanation
logits = tf.matmul(x, weights) + biases
Explanation: Model
This simple mathematical model multiplies the images in the placeholder variable x with the weights and then adds the biases.
The result is a matrix of shape [num_images, num_classes] because x has shape [num_images, img_size_flat] and weights has shape [img_size_flat, num_classes], so the multiplication of those two matrices is a matrix with shape [num_images, num_classes] and then the biases vector is added to each row of that matrix.
Note that the name logits is typical TensorFlow terminology, but other people may call the variable something else.
End of explanation
y_pred = tf.nn.softmax(logits)
Explanation: Now logits is a matrix with num_images rows and num_classes columns, where the element of the $i$'th row and $j$'th column is an estimate of how likely the $i$'th input image is to be of the $j$'th class.
However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each row of the logits matrix sums to one, and each element is limited between zero and one. This is calculated using the so-called softmax function and the result is stored in y_pred.
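For reference (a standard definition, added here), the softmax maps a row of logits $z$ to probabilities via
\begin{equation}
\text{softmax}(z)_j = \frac{e^{z_j}}{\sum_{k=1}^{10} e^{z_k}}
\end{equation}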
End of explanation
y_pred_cls = tf.argmax(y_pred, dimension=1)
Explanation: The predicted class can be calculated from the y_pred matrix by taking the index of the largest element in each row.
End of explanation
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits,
labels=y_true)
Explanation: Cost-function to be optimized
To make the model better at classifying the input images, we must somehow change the variables for weights and biases. To do this we first need to know how well the model currently performs by comparing the predicted output of the model y_pred to the desired output y_true.
The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the weights and biases of the model.
TensorFlow has a built-in function for calculating the cross-entropy. Note that it uses the values of the logits because it also calculates the softmax internally.
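For a single image with One-Hot true label $y$ and predicted probabilities $\hat{y}$, the quantity computed here is the standard cross-entropy (formula added for reference):
\begin{equation}
H(y, \hat{y}) = -\sum_{c=1}^{10} y_c \log \hat{y}_c = -\log \hat{y}_{\text{true class}}
\end{equation}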
End of explanation
cost = tf.reduce_mean(cross_entropy)
Explanation: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
End of explanation
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(cost)
Explanation: Optimization method
Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the basic form of Gradient Descent where the step-size is set to 0.5.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
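Concretely, each later execution of this optimizer will update the model variables by a plain gradient step with the step-size 0.5 chosen above (standard update rule, added for reference):
\begin{align}
\text{weights} & \leftarrow \text{weights} - 0.5 \, \frac{\partial \, \text{cost}}{\partial \, \text{weights}} \\
\text{biases} & \leftarrow \text{biases} - 0.5 \, \frac{\partial \, \text{cost}}{\partial \, \text{biases}}
\end{align}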
End of explanation
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
Explanation: Performance measures
We need a few more performance measures to display the progress to the user.
This is a vector of booleans whether the predicted class equals the true class of each image.
End of explanation
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
End of explanation
session = tf.Session()
Explanation: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
End of explanation
session.run(tf.global_variables_initializer())
Explanation: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them.
End of explanation
batch_size = 100
Explanation: Helper-function to perform optimization iterations
There are 50.000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore use Stochastic Gradient Descent which only uses a small batch of images in each iteration of the optimizer.
End of explanation
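Conceptually, data.train.next_batch(batch_size) used below just hands back a random subset of the training images and their labels. A rough NumPy sketch of that idea (not the actual implementation; the function name is made up):
def sample_batch(images, labels, batch_size):
    # Pick batch_size random row indices and return those rows.
    idx = np.random.choice(len(images), size=batch_size, replace=False)
    return images[idx], labels[idx]
# e.g. x_batch, y_batch = sample_batch(data.train.images, data.train.labels, batch_size)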
def optimize(num_iterations):
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
# Note that the placeholder for y_true_cls is not set
# because it is not used during training.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
Explanation: Function for performing a number of optimization iterations so as to gradually improve the weights and biases of the model. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples.
End of explanation
feed_dict_test = {x: data.test.images,
y_true: data.test.labels,
y_true_cls: data.test.cls}
Explanation: Helper-functions to show performance
Dict with the test-set data to be used as input to the TensorFlow graph. Note that we must use the correct names for the placeholder variables in the TensorFlow graph.
End of explanation
def print_accuracy():
# Use TensorFlow to compute the accuracy.
acc = session.run(accuracy, feed_dict=feed_dict_test)
# Print the accuracy.
print("Accuracy on test-set: {0:.1%}".format(acc))
Explanation: Function for printing the classification accuracy on the test-set.
End of explanation
def print_confusion_matrix():
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the predicted classifications for the test-set.
cls_pred = session.run(y_pred_cls, feed_dict=feed_dict_test)
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
# Make various adjustments to the plot.
plt.tight_layout()
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
Explanation: Function for printing and plotting the confusion matrix using scikit-learn.
End of explanation
def plot_example_errors():
# Use TensorFlow to get a list of boolean values
# whether each test-image has been correctly classified,
# and a list for the predicted class of each image.
correct, cls_pred = session.run([correct_prediction, y_pred_cls],
feed_dict=feed_dict_test)
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
Explanation: Function for plotting examples of images from the test-set that have been mis-classified.
End of explanation
def plot_weights():
# Get the values for the weights from the TensorFlow variable.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Create figure with 3x4 sub-plots,
# where the last 2 sub-plots are unused.
fig, axes = plt.subplots(3, 4)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Only use the weights for the first 10 sub-plots.
if i<10:
# Get the weights for the i'th digit and reshape it.
# Note that w.shape == (img_size_flat, 10)
image = w[:, i].reshape(img_shape)
# Set the label for the sub-plot.
ax.set_xlabel("Weights: {0}".format(i))
# Plot the image.
ax.imshow(image, vmin=w_min, vmax=w_max, cmap='seismic')
# Remove ticks from each sub-plot.
ax.set_xticks([])
ax.set_yticks([])
Explanation: Helper-function to plot the model weights
Function for plotting the weights of the model. 10 images are plotted, one for each digit that the model is trained to recognize.
End of explanation
print_accuracy()
plot_example_errors()
Explanation: Performance before any optimization
The accuracy on the test-set is 9.8%. This is because the model has only been initialized and not optimized at all, so it always predicts that the image shows a zero digit, as demonstrated in the plot below, and it turns out that 9.8% of the images in the test-set happen to be zero digits.
End of explanation
optimize(num_iterations=1)
print_accuracy()
plot_example_errors()
Explanation: Performance after 1 optimization iteration
Already after a single optimization iteration, the model has increased its accuracy on the test-set to 40.7% up from 9.8%. This means that it mis-classifies the images about 6 out of 10 times, as demonstrated on a few examples below.
End of explanation
plot_weights()
Explanation: The weights can also be plotted as shown below. Positive weights are red and negative weights are blue. These weights can be intuitively understood as image-filters.
For example, the weights used to determine if an image shows a zero-digit have a positive reaction (red) to an image of a circle, and have a negative reaction (blue) to images with content in the centre of the circle.
Similarly, the weights used to determine if an image shows a one-digit react positively (red) to a vertical line in the centre of the image, and react negatively (blue) to images with content surrounding that line.
Note that the weights mostly look like the digits they're supposed to recognize. This is because only one optimization iteration has been performed so the weights are only trained on 100 images. After training on several thousand images, the weights become more difficult to interpret because they have to recognize many variations of how digits can be written.
End of explanation
# We have already performed 1 iteration.
optimize(num_iterations=9)
print_accuracy()
plot_example_errors()
plot_weights()
Explanation: Performance after 10 optimization iterations
End of explanation
# We have already performed 10 iterations.
optimize(num_iterations=990)
print_accuracy()
plot_example_errors()
Explanation: Performance after 1000 optimization iterations
After 1000 optimization iterations, the model only mis-classifies about one in ten images. As demonstrated below, some of the mis-classifications are justified because the images are very hard to determine with certainty even for humans, while others are quite obvious and should have been classified correctly by a good model. But this simple model cannot reach much better performance and more complex models are therefore needed.
End of explanation
plot_weights()
Explanation: The model has now been trained for 1000 optimization iterations, with each iteration using 100 images from the training-set. Because of the great variety of the images, the weights have now become difficult to interpret and we may doubt whether the model truly understands how digits are composed from lines, or whether the model has just memorized many different variations of pixels.
End of explanation
print_confusion_matrix()
Explanation: We can also print and plot the so-called confusion matrix which lets us see more details about the mis-classifications. For example, it shows that images actually depicting a 5 have sometimes been mis-classified as all other possible digits, but mostly either 3, 6 or 8.
End of explanation
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
Explanation: We are now done using TensorFlow, so we close the session to release its resources.
End of explanation |
754 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
Instructions
Step1: 2 - Overview of the Problem set
Problem Statement
Step2: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images.
Step3: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Exercise
Step4: Expected Output for m_train, m_test and num_px
Step5: Expected Output
Step7: <font color='blue'>
What you need to remember
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Expected Output
Step16: Expected Output
Step17: Run the following cell to train your model.
Step18: Expected Output
Step19: Let's also plot the cost function and the gradients.
Step20: Interpretation
Step21: Interpretation | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
Explanation: Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
Instructions:
- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.
You will learn to:
- Build the general architecture of a learning algorithm, including:
- Initializing parameters
- Calculating the cost function and its gradient
- Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.
1 - Packages
First, let's run the cell below to import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- h5py is a common package to interact with a dataset that is stored on an H5 file.
- matplotlib is a famous library to plot graphs in Python.
- PIL and scipy are used here to test your model with your own picture at the end.
End of explanation
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
Explanation: 2 - Overview of the Problem set
Problem Statement: You are given a dataset ("data.h5") containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).
You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.
Let's get more familiar with the dataset. Load the data by running the following code.
End of explanation
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
Explanation: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images.
End of explanation
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_y.shape[1]
m_test = test_set_y.shape[1]
num_px = train_set_x_orig[0].shape[0]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
Explanation: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Exercise: Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)
Remember that train_set_x_orig is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access m_train by writing train_set_x_orig.shape[0].
End of explanation
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0],-1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
Explanation: Expected Output for m_train, m_test and num_px:
<table style="width:15%">
<tr>
<td>**m_train**</td>
<td> 209 </td>
</tr>
<tr>
<td>**m_test**</td>
<td> 50 </td>
</tr>
<tr>
<td>**num_px**</td>
<td> 64 </td>
</tr>
</table>
For convenience, you should now reshape images of shape (num_px, num_px, 3) into a numpy-array of shape (num_px $\times$ num_px $\times$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.
Exercise: Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px $\times$ num_px $\times$ 3, 1).
A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b $\times$ c $\times$ d, a) is to use:
python
X_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X
End of explanation
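A tiny check of the reshape trick on a random array, just to see the resulting shapes (illustration only; the names are made up):
X_demo = np.random.rand(5, 4, 4, 3)              # pretend: 5 images of 4x4 pixels with 3 channels
X_demo_flatten = X_demo.reshape(X_demo.shape[0], -1).T
print(X_demo_flatten.shape)                      # (48, 5): each column is one flattened image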
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
Explanation: Expected Output:
<table style="width:35%">
<tr>
<td>**train_set_x_flatten shape**</td>
<td> (12288, 209)</td>
</tr>
<tr>
<td>**train_set_y shape**</td>
<td>(1, 209)</td>
</tr>
<tr>
<td>**test_set_x_flatten shape**</td>
<td>(12288, 50)</td>
</tr>
<tr>
<td>**test_set_y shape**</td>
<td>(1, 50)</td>
</tr>
<tr>
<td>**sanity check after reshaping**</td>
<td>[17 31 56 22 33]</td>
</tr>
</table>
To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.
One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel).
<!-- During the training of your model, you're going to multiply weights and add biases to some initial inputs in order to observe neuron activations. Then you backpropogate with the gradients to train the model. But, it is extremely important for each feature to have a similar range such that our gradients don't explode. You will see that more in detail later in the lectures. !-->
Let's standardize our dataset.
End of explanation
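If you wanted the more general mean/standard-deviation standardization mentioned above instead of dividing by 255, a sketch could look like the commented lines below (not used in the rest of this assignment; the statistics are taken from the training set only and the names are made up):
# train_mean = np.mean(train_set_x_flatten)
# train_std = np.std(train_set_x_flatten)
# train_set_x_std = (train_set_x_flatten - train_mean) / train_std
# test_set_x_std = (test_set_x_flatten - train_mean) / train_std   # reuse the training statistics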
# GRADED FUNCTION: sigmoid
def sigmoid(z):
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
Explanation: <font color='blue'>
What you need to remember:
Common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
- Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1)
- "Standardize" the data
3 - General Architecture of the learning algorithm
It's time to design a simple algorithm to distinguish cat images from non-cat images.
You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why Logistic Regression is actually a very simple Neural Network!
<img src="images/LogReg_kiank.png" style="width:650px;height:400px;">
Mathematical expression of the algorithm:
For one example $x^{(i)}$:
$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$
$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$
$$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$
The cost is then computed by summing over all training examples:
$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$
Key steps:
In this exercise, you will carry out the following steps:
- Initialize the parameters of the model
- Learn the parameters for the model by minimizing the cost
- Use the learned parameters to make predictions (on the test set)
- Analyse the results and conclude
4 - Building the parts of our algorithm ##
The main steps for building a Neural Network are:
1. Define the model structure (such as number of input features)
2. Initialize the model's parameters
3. Loop:
- Calculate current loss (forward propagation)
- Calculate current gradient (backward propagation)
- Update parameters (gradient descent)
You often build 1-3 separately and integrate them into one function we call model().
4.1 - Helper functions
Exercise: Using your code from "Python Basics", implement sigmoid(). As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
End of explanation
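To make the loss formula above concrete, here is a quick numerical check for a single example (illustration only; it uses the sigmoid defined above and assumes numpy as np, and the numbers are made up):
z_example = 1.5
a_example = sigmoid(z_example)              # about 0.82
print(-np.log(a_example))                   # loss if y = 1: about 0.20 (prediction agrees with the label)
print(-np.log(1 - a_example))               # loss if y = 0: about 1.70 (prediction disagrees with the label)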
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros((dim,1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
Explanation: Expected Output:
<table>
<tr>
<td>**sigmoid([0, 2])**</td>
<td> [ 0.5 0.88079708]</td>
</tr>
</table>
4.2 - Initializing parameters
Exercise: Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
End of explanation
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot(w.T,X)+b) # compute activation
cost = -1/m*(np.dot(Y, np.log(A.T))+np.dot(1-Y, np.log((1-A.T)))) # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = 1/m*np.dot(X, (A-Y).T)
db = 1/m*np.sum(A-Y)
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
Explanation: Expected Output:
<table style="width:15%">
<tr>
<td> ** w ** </td>
<td> [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td> ** b ** </td>
<td> 0 </td>
</tr>
</table>
For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1).
4.3 - Forward and Backward propagation
Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.
Exercise: Implement a function propagate() that computes the cost function and its gradient.
Hints:
Forward Propagation:
- You get X
- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m-1)}, a^{(m)})$
- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})\right]$
Here are the two formulas you will be using:
$$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$
$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
End of explanation
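As an optional sanity check (not part of the graded exercise), you can compare the analytic db returned by propagate() with a finite-difference estimate, here reusing the same small w, b, X, Y example and grads from above:
eps = 1e-7
_, cost_plus = propagate(w, b + eps, X, Y)
_, cost_minus = propagate(w, b - eps, X, Y)
print((cost_plus - cost_minus) / (2 * eps), grads["db"])   # the two values should be very close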
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate*dw
b = b - learning_rate*db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 training iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
Explanation: Expected Output:
<table style="width:50%">
<tr>
<td> ** dw ** </td>
<td> [[ 0.99845601]
[ 2.39507239]]</td>
</tr>
<tr>
<td> ** db ** </td>
<td> 0.00145557813678 </td>
</tr>
<tr>
<td> ** cost ** </td>
<td> 5.801545319394553 </td>
</tr>
</table>
d) Optimization
You have initialized your parameters.
You are also able to compute a cost function and its gradient.
Now, you want to update the parameters using gradient descent.
Exercise: Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
End of explanation
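The update rule itself is easy to see on a one-dimensional toy problem; the sketch below (purely illustrative and unrelated to the image data) minimizes f(theta) = (theta - 3)^2:
theta = 0.0
alpha = 0.1
for _ in range(50):
    dtheta = 2 * (theta - 3)          # derivative of (theta - 3)^2
    theta = theta - alpha * dtheta
print(theta)                          # close to 3, the minimizer of f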
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T,X)+b)
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
if A[0,i]>=0.5:
Y_prediction[0,i]=1
else:
Y_prediction[0,i]=0
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
w = np.array([[0.1124579],[0.23106775]])
b = -0.3
X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])
print ("predictions = " + str(predict(w, b, X)))
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td> **w** </td>
<td>[[ 0.19033591]
[ 0.12259159]] </td>
</tr>
<tr>
<td> **b** </td>
<td> 1.92535983008 </td>
</tr>
<tr>
<td> **dw** </td>
<td> [[ 0.67752042]
[ 1.41625495]] </td>
</tr>
<tr>
<td> **db** </td>
<td> 0.219194504541 </td>
</tr>
</table>
Exercise: The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the predict() function. There are two steps to computing predictions:
Calculate $\hat{Y} = A = \sigma(w^T X + b)$
Convert the entries of A into 0 (if activation <= 0.5) or 1 (if activation > 0.5), storing the predictions in a vector Y_prediction. If you wish, you can use an if/else statement in a for loop (though there is also a way to vectorize this).
End of explanation
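For reference, the thresholding loop above can also be vectorized in a single line; one possible (equivalent) version, shown commented out because it is not required for the exercise:
# Y_prediction = (A >= 0.5).astype(float)   # 1.0 where the activation is at least 0.5, else 0.0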
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = initialize_with_zeros(X_train.shape[0])
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w, b, X_test)
Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
Explanation: Expected Output:
<table style="width:30%">
<tr>
<td>
**predictions**
</td>
<td>
[[ 1. 1. 0.]]
</td>
</tr>
</table>
<font color='blue'>
What to remember:
You've implemented several functions that:
- Initialize (w,b)
- Optimize the loss iteratively to learn parameters (w,b):
- computing the cost and its gradient
- updating the parameters using gradient descent
- Use the learned (w,b) to predict the labels for a given set of examples
5 - Merge all functions into a model
You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.
Exercise: Implement the model function. Use the following notation:
- Y_prediction for your predictions on the test set
- Y_prediction_train for your predictions on the train set
- w, costs, grads for the outputs of optimize()
End of explanation
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
Explanation: Run the following cell to train your model.
End of explanation
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.")
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td> **Cost after iteration 0 ** </td>
<td> 0.693147 </td>
</tr>
<tr>
<td> <center> $\vdots$ </center> </td>
<td> <center> $\vdots$ </center> </td>
</tr>
<tr>
<td> **Train Accuracy** </td>
<td> 99.04306220095694 % </td>
</tr>
<tr>
<td>**Test Accuracy** </td>
<td> 70.0 % </td>
</tr>
</table>
Comment: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!
Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the index variable) you can look at predictions on pictures of the test set.
End of explanation
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
Explanation: Let's also plot the cost function and the gradients.
End of explanation
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
Explanation: Interpretation:
You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting.
6 - Further analysis (optional/ungraded exercise)
Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$.
Choice of learning rate
Reminder:
In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.
Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the learning_rates variable to contain, and see what happens.
End of explanation
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "my_image.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
Explanation: Interpretation:
- Different learning rates give different costs and thus different predictions results.
- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost).
- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.
- In deep learning, we usually recommend that you:
- Choose the learning rate that better minimizes the cost function.
- If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.)
7 - Test with your own image (optional/ungraded exercise)
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
End of explanation |
755 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Build Experiment from keras model
Embeds a 3 layer FCN model to predict MNIST handwritten digits in a Tensorflow Experiment. The Estimator here is a Keras model.
DOES NOT WORK CURRENTLY with Tensorflow 1.2
Step1: Prepare Data
Step2: The train_input_fn and test_input_fn below are equivalent to feeding the full dataset as a single batch. There is some information on building batch-oriented input functions, but I was unable to make them work. The commented-out block is adapted from a Keras data generator, but that does not work either.
Step3: Define Estimator
Apparently a Keras model is an Estimator (or there is some functional equivalence).
Step4: Train Estimator
Using the parameters x, y and batch_size is deprecated, and the warnings say to use input_fn instead. However, using that results in very slow fit and evaluate calls. The solution is to use batch-oriented input_fns. The commented portions will be enabled once I figure out how to make the batch-oriented input_fns work.
Step5: Evaluate Estimator
Step6: alternatively...
Define Experiment
A model is wrapped in an Estimator, which is then wrapped in an Experiment. Once you have an Experiment, you can run this in a distributed manner on CPU or GPU.
Step7: Run Experiment | Python Code:
from __future__ import division, print_function
from tensorflow.contrib.learn.python.learn.estimators import model_fn as model_fn_lib
from tensorflow.contrib import keras
import matplotlib.pyplot as plt
import numpy as np
import os
import shutil
import tensorflow as tf
DATA_DIR = "../../data"
TRAIN_FILE = os.path.join(DATA_DIR, "mnist_train.csv")
TEST_FILE = os.path.join(DATA_DIR, "mnist_test.csv")
MODEL_DIR = os.path.join(DATA_DIR, "expt-keras-model")
NUM_FEATURES = 784
NUM_CLASSES = 10
NUM_STEPS = 10
LEARNING_RATE = 1e-3
BATCH_SIZE = 128
tf.logging.set_verbosity(tf.logging.INFO)
Explanation: Build Experiment from keras model
Embeds a 3 layer FCN model to predict MNIST handwritten digits in a Tensorflow Experiment. The Estimator here is a Keras model.
DOES NOT WORK CURRENTLY with Tensorflow 1.2
End of explanation
def parse_file(filename):
xdata, ydata = [], []
fin = open(filename, "rb")
i = 0
for line in fin:
if i % 10000 == 0:
print("{:s}: {:d} lines read".format(
os.path.basename(filename), i))
cols = line.strip().split(",")
onehot_label = np.zeros((NUM_CLASSES))
onehot_label[int(cols[0])] = 1
ydata.append(onehot_label)
xdata.append([float(x) / 255. for x in cols[1:]])
i += 1
fin.close()
print("{:s}: {:d} lines read".format(os.path.basename(filename), i))
y = np.array(ydata, dtype=np.float32)
X = np.array(xdata, dtype=np.float32)
return X, y
Xtrain, ytrain = parse_file(TRAIN_FILE)
Xtest, ytest = parse_file(TEST_FILE)
print(Xtrain.shape, ytrain.shape, Xtest.shape, ytest.shape)
Explanation: Prepare Data
End of explanation
def train_input_fn():
return tf.constant(Xtrain), tf.constant(ytrain)
def test_input_fn():
return tf.constant(Xtest), tf.constant(ytest)
# def batch_input_fn(X, y, batch_size=BATCH_SIZE,
# num_epochs=NUM_STEPS):
# for e in range(num_epochs):
# num_recs = X.shape[0]
# sids = np.random.permutation(np.arange(num_recs))
# num_batches = num_recs // batch_size
# for bid in range(num_batches):
# sids_b = sids[bid * batch_size : (bid + 1) * batch_size]
# X_b = np.zeros((batch_size, NUM_FEATURES))
# y_b = np.zeros((batch_size,))
# for i in range(batch_size):
# X_b[i] = X[sids_b[i]]
# y_b[i] = y[sids_b[i]]
# yield tf.constant(X_b, dtype=tf.float32), \
# tf.constant(y_b, dtype=tf.float32)
# def train_input_fn():
# return batch_input_fn(Xtrain, ytrain, BATCH_SIZE).next()
# def test_input_fn():
# return batch_input_fn(Xtest, ytest, BATCH_SIZE).next()
Explanation: The train_input_fn and test_input_fn below are equivalent to feeding the full dataset as a single batch. There is some information on building batch-oriented input functions, but I was unable to make them work. The commented-out block is adapted from a Keras data generator, but that does not work either.
End of explanation
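One possible way to get batch-oriented input functions, assuming a TensorFlow version where the tf.data API is available (roughly 1.4 and later), is sketched below; this is untested here, the helper name is made up, and it is not what the rest of the notebook uses:
def make_batch_input_fn(X, y, batch_size=BATCH_SIZE, shuffle=True):
    def _input_fn():
        ds = tf.data.Dataset.from_tensor_slices((X, y))
        if shuffle:
            ds = ds.shuffle(buffer_size=len(X))
        ds = ds.repeat().batch(batch_size)
        return ds.make_one_shot_iterator().get_next()
    return _input_fn
# train_input_fn = make_batch_input_fn(Xtrain, ytrain)
# test_input_fn = make_batch_input_fn(Xtest, ytest, shuffle=False)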
model = keras.models.Sequential()
model.add(keras.layers.Dense(512, activation="relu",
input_shape=(NUM_FEATURES,)))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(256, activation="relu"))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(NUM_CLASSES, activation="softmax"))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),
metrics=["accuracy"])
Explanation: Define Estimator
Apparently Keras model is an Estimator (or there is some functional equivalence).
End of explanation
model.fit(x=Xtrain, y=ytrain,
batch_size=BATCH_SIZE,
epochs=NUM_STEPS)
# estimator.fit(input_fn=train_input_fn, steps=NUM_STEPS)
Explanation: Train Estimator
Using the parameters x, y and batch_size is deprecated, and the warnings say to use input_fn instead. However, using that results in very slow fit and evaluate calls. The solution is to use batch-oriented input_fns. The commented portions will be enabled once I figure out how to make the batch-oriented input_fns work.
End of explanation
results = model.evaluate(x=Xtest, y=ytest)
# results = estimator.evaluate(input_fn=test_input_fn)
print(results)
Explanation: Evaluate Estimator
End of explanation
NUM_STEPS = 20
def experiment_fn(run_config, params):
# define and compile model
model = keras.models.Sequential()
model.add(keras.layers.Dense(512, activation="relu",
input_shape=(NUM_FEATURES,)))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(256, activation="relu"))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(NUM_CLASSES, activation="softmax"))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),
metrics=["accuracy"])
estimator = model.get_estimator(config={})
return tf.contrib.learn.Experiment(
estimator=estimator,
train_input_fn=train_input_fn,
train_steps=NUM_STEPS,
eval_input_fn=test_input_fn)
Explanation: alternatively...
Define Experiment
A model is wrapped in an Estimator, which is then wrapped in an Experiment. Once you have an Experiment, you can run this in a distributed manner on CPU or GPU.
End of explanation
shutil.rmtree(MODEL_DIR, ignore_errors=True)
tf.contrib.learn.learn_runner.run(experiment_fn,
run_config=tf.contrib.learn.RunConfig(
model_dir=MODEL_DIR))
Explanation: Run Experiment
End of explanation |
756 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Predict Shakespeare with Cloud TPUs and TPUEstimator
Overview
This example uses TPUEstimator to build a language model and train it on a Cloud TPU. This language model predicts the next character of text given the text so far. The trained model can generate new snippets of text that read in a similar style to the text training data.
The model trains for 2000 steps and completes in approximately 5 minutes.
This notebook is hosted on GitHub. To view it in its original repository, after opening the notebook, select File > View on GitHub.
Learning objectives
In this Colab, you will learn how to
Step3: Training data
You can use a tf.data pipeline to feed input data to your Estimator. The goal for this exercise is to have the model predict the next character, so you need to feed sequences from a supplied dataset where the source is offset from the target by one character.
Note that the model uses tf.contrib.data.enumerate_dataset() and tf.contrib.stateless.stateless_random_uniform to generate deterministic uniform samples. This, combined with the setting of RunConfig.tf_random_seed guarantees that every run of the model will have the same behavior.
Step4: Build the model
Now that you have some data, you can define your model. This example uses a simple 3 layer, forward Long Short-Term Memory (LSTM) language model.
The difference between this model and a CPU/GPU model is that you must specify a static shape for the model's input. This allows TensorFlow to infer the shape of the model and to satisfy the XLA compiler's static shape requirement.
Step5: Train the model
Since this example uses TPUEstimator, you must provide a model function to train the model. The model function specifies how to train, evaluate and run inference (predictions) on your model.
Each part of the model function is covered in turn below. The first part is the training step.
Feed your source tensor to your LSTM model.
Compute the cross entropy loss to train it to better predict the target tensor.
Use the RMSPropOptimizer to optimize your network.
Wrap it with the CrossShardOptimizer which lets you use multiple TPU cores to train.
Finally, return a TPUEstimatorSpec to indicate how TPUEstimator should train your model.
Step6: Evaluate the model
The evaluation step is simpler
Step8: Compute predictions
The following step is not TPU-specific. It uses the input tensor as a seed for the model, then uses a TensorFlow loop to sample characters from the model and return a result.
Step9: Build the model function
To build the model function that TPUEstimator expects, combine the helper functions as follows
Step10: Run the model
Use the following boilerplate to specify a TPU worker, then you are ready to train your model.
Step11: Run predictions with the model
Now that your model is trained, you can run predictions through it to generate faux-Shakespeare. Use the seed sentence to get your model started, then sample 500 characters from it. | Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: <a href="https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/shakespeare_with_tpuestimator.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
# !rm /content/adc.json
import json
import os
import pprint
import re
import time
import tensorflow as tf
use_tpu = True #@param {type:"boolean"}
bucket = '' #@param {type:"string"}
assert bucket, 'Must specify an existing GCS bucket name'
print('Using bucket: {}'.format(bucket))
if use_tpu:
assert 'COLAB_TPU_ADDR' in os.environ, 'Missing TPU; did you request a TPU in Notebook Settings?'
MODEL_DIR = 'gs://{}/{}'.format(bucket, time.strftime('tpuestimator-lstm/%Y-%m-%d-%H-%M-%S'))
print('Using model dir: {}'.format(MODEL_DIR))
from google.colab import auth
auth.authenticate_user()
if 'COLAB_TPU_ADDR' in os.environ:
TF_MASTER = 'grpc://{}'.format(os.environ['COLAB_TPU_ADDR'])
# Upload credentials to TPU.
with tf.Session(TF_MASTER) as sess:
with open('/content/adc.json', 'r') as f:
auth_info = json.load(f)
tf.contrib.cloud.configure_gcs(sess, credentials=auth_info)
# Now credentials are set for all future sessions on this TPU.
else:
TF_MASTER=''
with tf.Session(TF_MASTER) as session:
pprint.pprint(session.list_devices())
Explanation: Predict Shakespeare with Cloud TPUs and TPUEstimator
Overview
This example uses TPUEstimator to build a language model and train it on a Cloud TPU. This language model predicts the next character of text given the text so far. The trained model can generate new snippets of text that read in a similar style to the text training data.
The model trains for 2000 steps and completes in approximately 5 minutes.
This notebook is hosted on GitHub. To view it in its original repository, after opening the notebook, select File > View on GitHub.
Learning objectives
In this Colab, you will learn how to:
* Build a simple 3 layer, forward Long Short-Term Memory (LSTM) language model.
* Provide a model function to train the model for TPUEstimator.
* Run the model forward and see how well it predicts the next character.
Instructions
<h3> Train on TPU <a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a></h3>
Create a Cloud Storage bucket for your TensorBoard logs at http://console.cloud.google.com/storage.
On the main menu, click Runtime and select Change runtime type. Set "TPU" as the hardware accelerator.
Click Runtime again and select Runtime > Run All (Watch out: the initial authentication step for this notebook requires that you click on use_tpu and supply a bucket name as input). You can also run the cells manually with Shift-ENTER.
TPUs are located in Google Cloud; for optimal performance, they read data directly from Google Cloud Storage (GCS)
Data, model, and training
For this exercise, you train the network using the combined works of Shakespeare to create a play-generating robot.
The network outputs something Shakespeare-esque:
<blockquote>
Loves that led me no dumbs lack her Berjoy's face with her to-day.
The spirits roar'd; which shames which within his powers
Which tied up remedies lending with occasion,
A loud and Lancaster, stabb'd in me
Upon my sword for ever: 'Agripo'er, his days let me free.
Stop it of that word, be so: at Lear,
When I did profess the hour-stranger for my life,
When I did sink to be cried how for aught;
Some beds which seeks chaste senses prove burning;
But he perforces seen in her eyes so fast;
And _
</blockquote>
To generate your own faux-Shakespeare, you begin with a data generator. The training data for the model is snippets from a text file; the target snippet is offset by one character.
Authentication
End of explanation
import numpy as np
!wget --show-progress --continue -O /content/shakespeare.txt http://www.gutenberg.org/files/100/100-0.txt
SHAKESPEARE_TXT = '/content/shakespeare.txt'
RANDOM_SEED = 42 # An arbitrary choice.
def transform(txt):
return np.asarray([ord(c) for c in txt], dtype=np.int32)
def input_fn(params):
Return a dataset of source and target sequences for training.
batch_size = params['batch_size']
print('Batch size: {}'.format(batch_size))
seq_len = params['seq_len']
with tf.gfile.GFile(params['source_file'], 'r') as f:
txt = f.read()
txt = ''.join([x for x in txt if ord(x) < 128])
tf.logging.info('Sample text: %s', txt[10000:10100])
source = tf.constant(transform(txt), dtype=tf.int32)
ds = tf.data.Dataset.from_tensors(source)
ds = ds.repeat()
ds = ds.apply(tf.contrib.data.enumerate_dataset())
def _select_seq(offset, src):
idx = tf.contrib.stateless.stateless_random_uniform(
[1], seed=[RANDOM_SEED, offset], dtype=tf.float32)[0]
max_start_offset = len(txt) - seq_len
idx = tf.cast(idx * max_start_offset, tf.int32)
print(idx)
return {
'source': tf.reshape(src[idx:idx + seq_len], [seq_len]),
'target': tf.reshape(src[idx + 1:idx + seq_len + 1], [seq_len])
}
ds = ds.map(_select_seq)
ds = ds.batch(batch_size, drop_remainder=True)
ds = ds.prefetch(2)
return ds
tf.reset_default_graph()
tf.set_random_seed(0)
with tf.Session() as session:
ds = input_fn({'batch_size': 1, 'seq_len': 10, 'source_file': SHAKESPEARE_TXT})
features = session.run(ds.make_one_shot_iterator().get_next())
print(features['source'])
print(features['target'])
Explanation: Training data
You can use a tf.data pipeline to feed input data to your Estimator. The goal for this exercise is to have the model predict the next character, so you need to feed sequences from a supplied dataset where the source is offset from the target by one character.
Note that the model uses tf.contrib.data.enumerate_dataset() and tf.contrib.stateless.stateless_random_uniform to generate deterministic uniform samples. This, combined with the setting of RunConfig.tf_random_seed guarantees that every run of the model will have the same behavior.
End of explanation
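The source/target offset itself is easy to see on a plain Python string; a tiny illustration, independent of the tf.data pipeline above (the names are made up):
text_example = "Shakespeare"
seq_len_example = 5
source_example = text_example[0:seq_len_example]        # 'Shake'
target_example = text_example[1:seq_len_example + 1]    # 'hakes'
print(source_example, target_example)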
EMBEDDING_DIM = 1024
# Construct a 2-layer LSTM
def _lstm(inputs, batch_size, initial_state=None):
def _make_cell(layer_idx):
with tf.variable_scope('lstm/%d' % layer_idx,):
return tf.nn.rnn_cell.LSTMCell(
num_units=EMBEDDING_DIM,
state_is_tuple=True,
reuse=tf.AUTO_REUSE,
)
cell = tf.nn.rnn_cell.MultiRNNCell([
_make_cell(0),
_make_cell(1),
])
if initial_state is None:
initial_state = cell.zero_state(batch_size, tf.float32)
outputs, final_state = tf.contrib.recurrent.functional_rnn(
cell, inputs, initial_state=initial_state, use_tpu=use_tpu)
return outputs, final_state
def lstm_model(seq, initial_state=None):
with tf.variable_scope('lstm',
initializer=tf.orthogonal_initializer,
reuse=tf.AUTO_REUSE):
batch_size = seq.shape[0]
seq_len = seq.shape[1]
embedding_params = tf.get_variable(
'char_embedding',
initializer=tf.orthogonal_initializer(seed=0),
shape=(256, EMBEDDING_DIM), dtype=tf.float32)
embedding = tf.nn.embedding_lookup(embedding_params, seq)
lstm_output, lstm_state = _lstm(
embedding, batch_size, initial_state=initial_state)
# Apply a single dense layer to the output of our LSTM to predict
# our final characters. This looks awkward as we have to flatten
# our input to 2 dimensions before applying the dense layer.
flattened = tf.reshape(lstm_output, [-1, EMBEDDING_DIM])
logits = tf.layers.dense(flattened, 256, name='logits',)
logits = tf.reshape(logits, [-1, seq_len, 256])
return logits, lstm_state
Explanation: Build the model
Now that you have some data, you can define your model. This example uses a simple 3 layer, forward Long Short-Term Memory (LSTM) language model.
The difference between this model and a CPU/GPU model is that you must specify a static shape for the model's input. This allows TensorFlow to infer the shape of the model and to satisfy the XLA compiler's static shape requirement.
End of explanation
def train_fn(source, target):
logits, lstm_state = lstm_model(source)
batch_size = source.shape[0]
loss = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=target, logits=logits))
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
if TF_MASTER:
optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)
train_op = optimizer.minimize(loss, tf.train.get_global_step())
return tf.contrib.tpu.TPUEstimatorSpec(
mode=tf.estimator.ModeKeys.TRAIN,
loss=loss,
train_op=train_op,
)
Explanation: Train the model
Since this example uses TPUEstimator, you must provide a model function to train the model. The model function specifies how to train, evaluate and run inference (predictions) on your model.
Each part of the model function is covered in turn below. The first part is the training step.
Feed your source tensor to your LSTM model.
Compute the cross entropy loss to train it to better predict the target tensor.
Use the RMSPropOptimizer to optimize your network.
Wrap it with the CrossShardOptimizer which lets you use multiple TPU cores to train.
Finally, return a TPUEstimatorSpec to indicate how TPUEstimator should train your model.
End of explanation
def eval_fn(source, target):
logits, _ = lstm_model(source)
# Compute the same cross-entropy loss as in train_fn so that the
# `loss` passed to TPUEstimatorSpec below is defined.
loss = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=target, logits=logits))
def metric_fn(labels, logits):
labels = tf.cast(labels, tf.int64)
return {
'recall@1': tf.metrics.recall_at_k(labels, logits, 1),
'recall@5': tf.metrics.recall_at_k(labels, logits, 5)
}
eval_metrics = (metric_fn, [target, logits])
return tf.contrib.tpu.TPUEstimatorSpec(
mode=tf.estimator.ModeKeys.EVAL,
loss=loss,
eval_metrics=eval_metrics)
Explanation: Evaluate the model
The evaluation step is simpler: you run the model forward and check how well it predicts the next character. Returning a TPUEstimatorSpec in this section tells TPUEstimator how to evaluate the model.
End of explanation
def predict_fn(source):
# Seed the model with our initial array
batch_size = source.shape[0]
logits, lstm_state = lstm_model(source)
def _body(i, state, preds):
Body of our prediction loop: predict the next character.
cur_preds = preds.read(i)
next_logits, next_state = lstm_model(
tf.cast(tf.expand_dims(cur_preds, -1), tf.int32), state)
# pull out the last (and only) prediction.
next_logits = next_logits[:, -1]
next_pred = tf.multinomial(
next_logits, num_samples=1, output_dtype=tf.int32)[:, 0]
preds = preds.write(i + 1, next_pred)
return (i + 1, next_state, preds)
def _cond(i, state, preds):
del state
del preds
# Loop until `predict_len - 1`: preds[0] is the initial state and we
# write to `i + 1` on each iteration.
return tf.less(i, predict_len - 1)
next_pred = tf.multinomial(
logits[:, -1], num_samples=1, output_dtype=tf.int32)[:, 0]
i = tf.constant(0, dtype=tf.int32)
predict_len = 500
# compute predictions as [seq_len, batch_size] to simplify indexing/updates
pred_var = tf.TensorArray(
dtype=tf.int32,
size=predict_len,
dynamic_size=False,
clear_after_read=False,
element_shape=(batch_size,),
name='prediction_accumulator',
)
pred_var = pred_var.write(0, next_pred)
_, _, final_predictions = tf.while_loop(_cond, _body,
[i, lstm_state, pred_var])
# reshape back to [batch_size, predict_len] and cast to int32
final_predictions = final_predictions.stack()
final_predictions = tf.transpose(final_predictions, [1, 0])
final_predictions = tf.reshape(final_predictions, (batch_size, predict_len))
return tf.contrib.tpu.TPUEstimatorSpec(
mode=tf.estimator.ModeKeys.PREDICT,
predictions={'predictions': final_predictions})
Explanation: Compute predictions
The following step is not TPU-specific. It uses the input tensor as a seed for the model, then uses a TensorFlow loop to sample characters from the model and return a result.
End of explanation
def model_fn(features, labels, mode, params):
if mode == tf.estimator.ModeKeys.TRAIN:
return train_fn(features['source'], features['target'])
if mode == tf.estimator.ModeKeys.EVAL:
return eval_fn(features['source'], features['target'])
if mode == tf.estimator.ModeKeys.PREDICT:
return predict_fn(features['source'])
Explanation: Build the model function
To build the model function that TPUEstimator expects, combine the helper functions as follows:
End of explanation
def _make_estimator(num_shards, use_tpu=True):
config = tf.contrib.tpu.RunConfig(
tf_random_seed=RANDOM_SEED,
master=TF_MASTER,
model_dir=MODEL_DIR,
save_checkpoints_steps=5000,
tpu_config=tf.contrib.tpu.TPUConfig(
num_shards=num_shards, iterations_per_loop=100))
estimator = tf.contrib.tpu.TPUEstimator(
use_tpu=use_tpu,
model_fn=model_fn, config=config,
train_batch_size=1024,
eval_batch_size=1024,
predict_batch_size=128,
params={'seq_len': 100, 'source_file': SHAKESPEARE_TXT},
)
return estimator
# Use all 8 cores for training
estimator = _make_estimator(num_shards=8, use_tpu=use_tpu)
estimator.train(
input_fn=input_fn,
max_steps=2000,
)
Explanation: Run the model
Use the following boilerplate to specify a TPU worker, then you are ready to train your model.
End of explanation
def _seed_input_fn(params):
del params
seed_txt = 'Looks it not like the king?'
seed = transform(seed_txt)
seed = tf.constant(seed.reshape([1, -1]), dtype=tf.int32)
# Predict must return a Dataset, not a Tensor.
return tf.data.Dataset.from_tensors({'source': seed})
# Use 1 core for prediction since we're only generating a single element batch
estimator = _make_estimator(num_shards=1, use_tpu=False)
idx = next(estimator.predict(input_fn=_seed_input_fn))['predictions']
print(''.join([chr(i) for i in idx]))
Explanation: Run predictions with the model
Now that your model is trained, you can run predictions through it to generate faux-Shakespeare. Use the seed sentence to get your model started, then sample 500 characters from it.
End of explanation |
757 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. What country are most billionaires from? For the top ones, how many billionaires per billion people?
Step1: 2. What's the average wealth of a billionaire? Male? Female?
Step2: 3. Most common source of wealth? Male vs. female?
Step3: 4. List top ten billionaires
Step4: 5.Given the richest person in a country, what % of the GDP is their wealth?
Step5: 6.What are the most common industries for billionaires to come from? What's the total amount of billionaire money from each industry?
Step6: 7.How many self made billionaires vs. others?
Step7: 8.How old are billionaires? How old are billionaires self made vs. non self made? or different industries?
Step8: 9.Add up the wealth of all of the billionaires in a given country (or a few countries) and then compare it to the GDP of the country, so like pit the US vs India
Step9: Compare the total wealth of billionaires in the US to the GDP of the country, so like pit the US vs India
Step10: 10. List top 10 poorest billionaires. Who is the poorest billionaire?
Step11: 11. List ten youngest billionaires, list ten oldest billionaires, and plot an age distribution graph
Step12: 11b. Plot an age distribution graph
Step13: 12. What is relationship to company? And what are the most common relationships?
Step14: 13.Maybe just made a graph about how wealthy they are in general?
Step15: 14.Maybe plot their net worth vs age (scatterplot)
Step16: 15.Make a bar graph of the top 10 or 20 richest | Python Code:
print("Most billionaires are from the following countries in descending order:")
df['countrycode'].value_counts().head(5)
us = 903 / 1000000000
ger = 160 / 1000000000
china = 153 / 1000000000
russia = 119 / 1000000000
japan = 96 / 1000000000
print("per billion for us is", us, "for germany is", ger, "for china is", china, "for russia is", russia, "for japan is", japan)
Explanation: 1. What country are most billionaires from? For the top ones, how many billionaires per billion people?
End of explanation
df['networthusbillion'].describe()
print("Average wealth of a billionaire is 3.531943")
male = df[df['gender'] == "male"]
male.head()
male['networthusbillion'].describe()
print("The average wealth of male billionaires is 3.516881")
female = df[df['gender'] == "female"]
female['networthusbillion'].describe()
print("The average wealth of female billionaires is 3.819277")
Explanation: 2. What's the average wealth of a billionaire? Male? Female?
End of explanation
print("Most common source of wealth are:")
df['sourceofwealth'].value_counts().head()
print("Most common source of wealth for male billionaires are:")
male['sourceofwealth'].value_counts().head()
print("Most common source of wealth for female billionaires are:")
female['sourceofwealth'].value_counts().head()
Explanation: 3. Most common source of wealth? Male vs. female?
End of explanation
bill = df.sort_values('networthusbillion', ascending=False).head(10)
df.sort_values('networthusbillion', ascending=False).head(10)
print("A precise list of billionaires, wealth and rank is given below:")
columns_want = bill[['name', 'rank', 'networthusbillion']]
columns_want
Explanation: 4. List top ten billionaires
End of explanation
us_gdp = 7419
wealth_rich = 76
percent = round((wealth_rich * 100) / us_gdp)
print(percent, "% of the US GDP is their wealth")
Explanation: 5.Given the richest person in a country, what % of the GDP is their wealth?
End of explanation
print("the most common industries for billionaires to come from are:")
df['industry'].value_counts()
columns_we_want = df[['name', 'networthusbillion', 'industry']]
columns_we_want
print("the total amount of billionaire money from each industry are given below:")
columns_we_want.groupby('industry').describe()
Explanation: 6.What are the most common industries for billionaires to come from? What's the total amount of billionaire money from each industry?
End of explanation
#columnswant = df[['name', 'networthusbillion', 'selfmade']]
#columnswant
print("The number of selfmade billionaires are:")
df['selfmade'].value_counts()
Explanation: 7.How many self made billionaires vs. others?
End of explanation
columns_want = df[['name', 'age', 'selfmade']]
columns_want.head(10)
columns_want = df[['name', 'age', 'industry']]
columns_want.head(10)
columns_want.sort_values('age', ascending=False)
Explanation: 8.How old are billionaires? How old are billionaires self made vs. non self made? or different industries?
End of explanation
is_in_us = df[df['countrycode'] == "USA"]
is_in_us['networthusbillion'].describe()
print("The total wealth of billionaires in US is 903")
Explanation: 9.Add up the wealth of all of the billionaires in a given country (or a few countries) and then compare it to the GDP of the country, so like pit the US vs India
End of explanation
gdp_india = 2066.90
us_bill_wealth = 903
percent = round((us_bill_wealth * 100) / gdp_india)
print(percent, "% of the India GDP is the wealth of US billionaires")
Explanation: Compare the total wealth of billionaires in the US to the GDP of the country, so like pit the US vs India
End of explanation
df.sort_values('networthusbillion').head(10)
print("The poorest billionaire is")
df.sort_values('networthusbillion').head(1)
Explanation: 10. List top 10 poorest billionaires. Who is the poorest billionaire?
End of explanation
print("The ten youngest billionaires are: ")
df.sort_values('age').head(10)
print("The ten oldest billionaires are: ")
df.sort_values('age', ascending=False).head(10)
columns_want = df[['name', 'age', 'industry']]
columns_want.sort_values('age', ascending=False).head(10)
Explanation: 11. List ten youngest billionaires, list ten oldest billionaires, and plot an age distribution graph
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
df.plot(kind='scatter', x='age', y='networthusbillion')
Explanation: 11b. Plot an age distribution graph
End of explanation
print("The most common relationships are:")
df['relationshiptocompany'].value_counts().head()
print("Relationship to a company is describes the billionaire's relationship to the company primarily responsible for their wealth, such as founder, executive, relation, or shareholder")
Explanation: 12. What is relationship to company? And what are the most common relationships?
End of explanation
sort_df = df.sort_values('networthusbillion')
sort_df.plot(kind='line', x='rank', y='networthusbillion')
df.plot(kind='bar', x='name', y='networthusbillion')
Explanation: 13.Maybe just made a graph about how wealthy they are in general?
End of explanation
df.plot(kind='scatter', x='age', y='networthusbillion')
Explanation: 14.Maybe plot their net worth vs age (scatterplot)
End of explanation
df['networthusbillion'].head(10).plot(kind='bar', x='name', y='networthusbillion')
Explanation: 15.Make a bar graph of the top 10 or 20 richest
End of explanation |
758 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lending Club Default Rate Analysis
Step1: Columns Interested
loan_status -- Current status of the loan<br/>
loan_amnt -- The listed amount of the loan applied for by the borrower. If at some point in time, the credit department reduces the loan amount, then it will be reflected in this value.<br/>
int_rate -- interest rate of the loan <br/>
sub_grade -- LC assigned sub loan grade -- dummie (grade -- LC assigned loan grade<br/>-- dummie)<br/>
purpose -- A category provided by the borrower for the loan request. -- dummie<br/>
annual_inc -- The self-reported annual income provided by the borrower during registration.<br/>
emp_length -- Employment length in years. Possible values are between 0 and 10 where 0 means less than one year and 10 means ten or more years. -- dummie<br/>
fico_range_low<br/>
fico_range_high
home_ownership -- The home ownership status provided by the borrower during registration or obtained from the credit report. Our values are
Step2: 1. Data Understanding -- Selected Decriptive Analysis
Step3: 2. Data Munging
Functions that performs data mining tasks
1a. Create column “default” using “loan_status”
Valentin (edited by Kay)
Step4: 2a. Convert data type on certain columns and create dummies
Nehal
Step5: 3a. Check and remove outliers (methods
Step6: 4a. Remove or replace missing values of certain columns
Step7: 6. Save the cleaned data | Python Code:
import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import RFE
from sklearn.svm import SVR
from sklearn.svm import LinearSVC
from sklearn.svm import LinearSVR
import seaborn as sns
import matplotlib.pylab as pl
%matplotlib inline
Explanation: Lending Club Default Rate Analysis
End of explanation
df_app_2015 = pd.read_csv('LoanStats3d_securev1.csv.zip', compression='zip',header=1, skiprows=[-2,-1],low_memory=False)
df_app_2015.head(3)
# Pre-select columns
df = df_app_2015.ix[:, ['loan_status','loan_amnt', 'int_rate', 'sub_grade',\
'purpose',\
'annual_inc', 'emp_length', 'home_ownership',\
'fico_range_low','fico_range_high',\
'num_actv_bc_tl', 'tot_cur_bal', 'mort_acc','num_actv_rev_tl',\
'pub_rec_bankruptcies','dti' ]]
Explanation: Columns Interested
loan_status -- Current status of the loan<br/>
loan_amnt -- The listed amount of the loan applied for by the borrower. If at some point in time, the credit department reduces the loan amount, then it will be reflected in this value.<br/>
int_rate -- interest rate of the loan <br/>
sub_grade -- LC assigned sub loan grade -- dummie (grade -- LC assigned loan grade<br/>-- dummie)<br/>
purpose -- A category provided by the borrower for the loan request. -- dummie<br/>
annual_inc -- The self-reported annual income provided by the borrower during registration.<br/>
emp_length -- Employment length in years. Possible values are between 0 and 10 where 0 means less than one year and 10 means ten or more years. -- dummie<br/>
fico_range_low<br/>
fico_range_high
home_ownership -- The home ownership status provided by the borrower during registration or obtained from the credit report. Our values are: RENT, OWN, MORTGAGE, OTHER -- dummie<br/>
tot_cur_bal -- Total current balance of all accounts
num_actv_bc_tl -- number of active bank accounts (avg_cur_bal -- average current balance of all accounts )<br/>
mort_acc -- number of mortgage accounts<br/>
num_actv_rev_tl -- Number of currently active revolving trades<br/>
dti -- A ratio calculated using the borrower’s total monthly debt payments on the total debt obligations, excluding mortgage and the requested LC loan, divided by the borrower’s self-reported monthly income.
pub_rec_bankruptcies - Number of public record bankruptcies<br/>
2015 Lending Club Data
1. Approved Loans
End of explanation
## in Nehal and Kay's notebooks
Explanation: 1. Data Understanding -- Selected Descriptive Analysis
End of explanation
df_app_2015.tail(3)
df.head(3)
df.loan_status.unique()
df = df.dropna()
len(df)
#df.loan_status.fillna('none', inplace=True) ## there is no nan
df.loan_status.unique()
defaulters=['Default','Charged Off', 'Late (31-120 days)']
non_defaulters=['Fully Paid']
uncertain = ['Current','Late (16-30 days)','In Grace Period', 'none']
len(df[df.loan_status.isin(uncertain)].loan_status)
df.info()
## select instances of defaulters and non_defulters
df2 = df.copy()
df2['Target']= 2 ## uncertain
df2.loc[df2.loan_status.isin(defaulters),'Target'] = 0 ## defaulters
df2.loc[df2.loan_status.isin(non_defaulters),'Target'] = 1 ## paid -- (and to whom to issue the loan)
print('Value in Target value for non defaulters')
print(df2.loc[df2.loan_status.isin(non_defaulters)].Target.unique())
print(len(df2[df2['Target'] == 1]))
print('Value in Target value for defaulters')
print(df2.loc[df2.loan_status.isin(defaulters)].Target.unique())
print(len(df2[df2['Target'] == 0]))
print('Value in Target value for uncertained-- unlabeled ones to predict')
print(df2.loc[df2.loan_status.isin(uncertain)].Target.unique())
print(len(df2[df2['Target'] == 2]))
42302/94968
Explanation: 2. Data Munging
Functions that perform data mining tasks
1a. Create column “default” using “loan_status”
Valentin (edited by Kay)
End of explanation
# function to create dummies
def create_dummies(column_name,df):
temp=pd.get_dummies(df[column_name],prefix=column_name)
df=pd.concat([df,temp],axis=1)
return df
dummy_list=['emp_length','home_ownership','purpose','sub_grade']
for col in dummy_list:
df2=create_dummies(col,df2)
for col in dummy_list:
df2=df2.drop(col,1)
temp=df2['int_rate'].astype(str).str.replace('%', '').str.replace(' ', '').astype(float)
df2=df2.drop('int_rate',1)
df2=pd.concat([df2,temp],axis=1)
df2=df2.drop('loan_status',1)
for col in df2.columns:
print((df2[col].dtype))
Explanation: 2a. Convert data type on certain columns and create dummies
Nehal
End of explanation
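As a quick illustration of what create_dummies produces, here is a minimal sketch on a hypothetical toy frame (the column values are made up for the example):
toy = pd.DataFrame({'home_ownership': ['RENT', 'OWN', 'MORTGAGE']})
# adds one indicator column per category, e.g. home_ownership_RENT, home_ownership_OWN, home_ownership_MORTGAGE
print(create_dummies('home_ownership', toy).head())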
df2.shape
df2['loan_amnt'][sorted(np.random.randint(0, high=10, size=5))]
# Reference:
# http://stackoverflow.com/questions/22354094/pythonic-way-of-detecting-outliers-in-one-dimensional-observation-data
def main(df, col, thres):
outliers_all = []
ind = sorted(np.random.randint(0, high=len(df), size=5000)) # randomly pick instances from the dataframe
#select data from our dataframe
x = df[col][ind]
num = len(ind)
outliers = plot(x, col, num, thres) # append all the outliers in the list
pl.show()
return outliers
def mad_based_outlier(points, thresh):
if len(points.shape) == 1:
points = points[:,None]
median = np.median(points, axis=0)
diff = np.sum((points - median)**2, axis=-1)
diff = np.sqrt(diff)
med_abs_deviation = np.median(diff)
modified_z_score = 0.6745 * diff / med_abs_deviation
return modified_z_score > thresh
def plot(x, col, num, thres):
fig, ax = pl.subplots(nrows=1, figsize=(10, 3))
sns.distplot(x, ax=ax, rug=True, hist=False)
outliers = np.asarray(x[mad_based_outlier(x, thres)])
ax.plot(outliers, np.zeros_like(outliers), 'ro', clip_on=False)
fig.suptitle('MAD-based Outlier Tests with selected {} values'.format(col), size=20)
return outliers
### Find outliers
##
boundries = []
outliers_loan = main(df2, 'loan_amnt', thres=2.2)
boundries.append(outliers_loan.min())
## annual income
outliers_inc = main(df2, 'annual_inc', 8)
boundries.append(outliers_inc.min())
## For total current balance of bank accounts
outliers_bal = main(df2, 'tot_cur_bal', 8)
boundries.append(outliers_bal.min())
columns = ['loan_amnt', 'annual_inc', 'tot_cur_bal']
df2_r = df2.copy()
for col, bound in zip(columns, boundries):
print ('Lower bound of detected Outliers for {}: {}'.format(col, bound))
# Use the outlier boundary to "regularize" the dataframe, filtering cumulatively so all three bounds apply
df2_r = df2_r[df2_r[col] <= bound]
Explanation: 3a. Check and remove outliers (methods: MAD)
End of explanation
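As a quick sanity check of the MAD-based detector defined above, a minimal sketch with made-up values:
toy_points = np.array([1.0, 2.0, 2.0, 3.0, 2.5, 100.0])
# the extreme value 100.0 should be flagged True, the rest False
print(mad_based_outlier(toy_points, thresh=3.5))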
# df2_r.info()
df2_r.shape
#### Fill NaN with "none"??? ####
#df_filled = df2.fillna(value='none')
#df_filled.head(3)
df2_r = df2_r.dropna()
print (len(df2_r))
Explanation: 4a. Remove or replace missing values of certain columns
End of explanation
# df2_r.to_csv('approved_loan_2015_clean.csv')
Explanation: 6. Save the cleaned data
End of explanation |
759 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem 3.7
The stress tensor at point P is defined as
Step1: Solution
Step2: The normal vector is computed according to
Step3: with which the traction vector can be computed using the Cauchy formula
Step4: The normal and tangential components are now computed according to
import numpy as np
from numpy import array, cross, dot , sqrt
from sympy import *
from IPython.display import Image,Latex
Image(filename='FIGURES/Ejer3_7.png',width=250)
Explanation: Problem 3.7
The stress tensor at point P is defined as:
$$\left[ {\begin{array}{*{20}{c}}
8&{ - 4}&1\\
{ - 4}&3&{1/2}\\
1&{1/2}&2
\end{array}} \right]$$
Compute the traction vector at point P for the normal to the plane ABC (see figure) and decompose it into its normal and tangential components.
End of explanation
sigma = array([[8.0, -4.0 , 1.0] , [-4.0 , 3.0, 0.5] , [1.0, 0.5, 2.0]])
print sigma
Explanation: Solution:
End of explanation
ra = array([3,0,0])
rb = array([0,2,0])
rc = array([0,0,5])
rac =rc-ra
rab =rb-ra
N = cross(rab,rac)
mag = sqrt(N.dot(N))
n = array([N[0]/mag, N[1]/mag, N[2]/mag])
print rab , rac , n
Explanation: The normal vector is computed according to:
${{\vec r}_{AC}} = {{\vec r}_C} - {{\vec r}_A}$
${{\vec r}_{AB}} = {{\vec r}_B} - {{\vec r}_A}$
$$\hat n = \frac{{{{\vec r}_{AB}} \times {{\vec r}_{AC}}}}{{\left| {{{\vec r}_{AB}} \times {{\vec r}_{AC}}} \right|}}$$
End of explanation
t = dot(sigma,n)
print t
Explanation: with which the traction vector can be computed using the Cauchy formula:
$$\vec t = \left[ \sigma \right] \cdot \hat n$$
End of explanation
magt = dot(t,t)
signn = dot(t,n)
taus =sqrt(magt-signn*signn)
print signn , sqrt(magt) , taus
from IPython.core.display import HTML
def css_styling():
styles = open('./custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
Explanation: The normal and tangential components are now computed according to:
$${\sigma} = \vec t \cdot \hat n$$
$${\tau ^2} = \vec t \cdot \vec t - {\sigma ^2}$$
End of explanation |
760 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Marmousi model
The Marmousi model developed by Versteeg (1994) is perhaps the best-known benchmark model used for seismic imaging and inversion. Let's first download it from a public repo.
Step1: For inversion, we often want a coarser grid. We must also pad the model for the absorbing boundary and create the vs and rho parameters.
Step2: We create an instance of SeisCL and set up the geometry.
Step3: The model is now ready, with sources just outside the CPML region.
Step4: Let's compute the seismogram for source 50.
Step5: The figure showing the recorded data is finally created. | Python Code:
import os
from urllib.request import urlretrieve
import numpy as np
import matplotlib.pyplot as plt
from SeisCL import SeisCL
url = "http://sw3d.cz/software/marmousi/little.bin/velocity.h@"
if not os.path.isfile("velocity.h@"):
urlretrieve(url, filename="velocity.h@")
vel = np.fromfile("velocity.h@", dtype=np.float32)
vp = np.transpose(np.reshape(np.array(vel), [2301, 751]))
Explanation: The Marmousi model
The Marmousi model developed by Versteeg (1994) is perhaps the best-known benchmark model used for seismic imaging and inversion. Let's first download it from a public repo.
End of explanation
seis = SeisCL()
vp = vp[::4, ::4]
vp = np.pad(vp, ((seis.nab, seis.nab), (seis.nab, seis.nab)), mode="edge")
rho = vp * 0 + 2000
vs = vp * 0
model = {'vp':vp, 'vs':vs, 'rho':rho}
Explanation: For inversion, we often want a coarser grid. We must also pad the model for the absorbing boundary and create the vs and rho parameters.
End of explanation
seis.N = vp.shape
seis.ND = 2
seis.dh = 16
seis.dt = dt = 6 * seis.dh / (7 * np.sqrt(2) * np.max(vp)) * 0.85
seis.NT = int(3 / seis.dt)
seis.surface_acquisition_2d()
print(seis.N, vp.shape)
Explanation: We create an instance of SeisCL and set up the geometry.
End of explanation
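The time step chosen above appears to follow a CFL-type stability bound for a 2-D fourth-order staggered-grid finite-difference scheme, with an extra 0.85 safety factor (an interpretation of the formula, not stated by SeisCL itself):
$$\Delta t \leq \frac{6\,\Delta h}{7\sqrt{2}\, v_{max}}$$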
_, ax = plt.subplots(1, 1, figsize = (18, 6))
seis.DrawDomain2D(vp, ax = ax, showabs=True, showsrcrec=True)
Explanation: The model is now ready, with sources just outside the CPML region.
End of explanation
seis.set_forward([50], model, withgrad=False)
seis.execute()
data = seis.read_data()
Explanation: Let's compute the seismogram for source 50.
End of explanation
p = data[0]
xmin = np.min(seis.rec_pos_all[0, :])
xmax = np.max(seis.rec_pos_all[0, :])
clip=0.01;
vmin=np.min(p)*clip;
vmax=np.max(p)*clip;
fig, ax = plt.subplots()
im = ax.imshow(p,
interpolation='bilinear',
vmin=vmin,
vmax=vmax,
cmap=plt.get_cmap('Greys'),
aspect='auto',
origin='upper',
extent=[xmin,xmax, p.shape[0]*seis.dt*20,0]
)
fig.suptitle('Pressure', fontsize=20)
plt.xlabel('x (km)', fontsize=16)
plt.ylabel('Time (ms)', fontsize=14)
plt.show()
Explanation: The figure showing the recorded data is finally created.
End of explanation |
761 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
Step1: Exercise 1
Step2: Exercise 2
Step3: b. Checking for Normality
Use the jarque_bera function to conduct a Jarque-Bera test on $X$, $Y$, and $Z$ to determine whether their distributions are normal.
Step4: c. Instability of Estimates
Create a histogram of the sample distributions of $X$, $Y$, and $Z$ along with the best estimate/mean based on the sample.
Step5: Exercise 3
Step6: b. Out-of-Sample Instability
Plot the running sharpe ratio of all three window lengths, as well as their in-sample mean and standard deviation bars.
Step7: Exercise 4
Step8: b. Temperature in Palo Alto
Find the mean and standard deviation of Palo Alto weekly average temperature data for the year of 2015 stored in p15_df.
Step9: c. Predicting 2016 Temperatures
Use the means you found in parts a and b to attempt to predict 2016 temperature data for both cities. Do this by creating two histograms for the 2016 temperature data in b16_df and p16_df with a vertical line where the 2015 means were to represent your prediction. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from statsmodels.stats.stattools import jarque_bera
# Set a seed so we can play with the data without generating new random numbers every time
np.random.seed(321)
Explanation: Exercises: Instability of Parameter Estimates - Answer Key
Lecture Link
This exercise notebook refers to this lecture. Please use the lecture for explanations and sample code.
https://www.quantopian.com/lectures#Instability-of-Estimates
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
End of explanation
POPULATION_MU = 100
POPULATION_SIGMA = 25
sample_sizes = [5, 25, 100, 500]
#Your code goes here
for i in range(len(sample_sizes)):
sample = np.random.normal(POPULATION_MU, POPULATION_SIGMA, sample_sizes[i])
row = 'Mean',(i+1),':', np.mean(sample),'Std',(i+1),':',np.std(sample)
print ("{} {}{} {:<10f} {} {}{} {}").format(*row)
print "\nAs sample size increases, the mean and standard deviation approach those of the population. However, even at the 500 sample level the sample mean is not the same as the population mean."
Explanation: Exercise 1: Sample Size vs. Standard Deviation
Using the below normal distribution with mean 100 and standard deviation 25, find the means and standard deviations of samples of size 5, 25, 100, and 500.
End of explanation
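A minimal sketch of why larger samples give steadier estimates, assuming the same population parameters as above: the spread of the sample mean shrinks roughly like sigma/sqrt(n).
for n in sample_sizes:
    sample_means = [np.mean(np.random.normal(POPULATION_MU, POPULATION_SIGMA, n)) for _ in range(1000)]
    # compare the empirical spread of the sample mean with the theoretical standard error
    print 'n = %4d: std of sample mean %.3f vs sigma/sqrt(n) %.3f' % (n, np.std(sample_means), POPULATION_SIGMA / np.sqrt(n))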
X = [ 31., 6., 21., 32., 41., 4., 48., 38., 43., 36., 50., 20., 46., 33., 8., 27., 17., 44., 16., 39., 3., 37.,
35., 13., 49., 2., 18., 42., 22., 25., 15., 24., 11., 19., 5., 40., 12., 10., 1., 45., 26., 29., 7., 30.,
14., 23., 28., 0., 34., 9., 47.]
Y = [ 15., 41., 33., 29., 3., 28., 28., 8., 15., 22., 39., 38., 22., 10., 39., 40., 24., 15., 21., 25., 17., 33.,
40., 32., 42., 5., 39., 8., 15., 25., 37., 33., 14., 25., 1., 31., 45., 5., 6., 19., 13., 39., 18., 49.,
13., 38., 8., 25., 32., 40., 17.]
Z = [ 38., 23., 16., 35., 48., 18., 48., 38., 24., 27., 24., 35., 37., 28., 11., 12., 31., -1., 9., 19., 20., 0.,
23., 33., 34., 24., 14., 28., 12., 25., 53., 19., 42., 21., 15., 36., 47., 20., 26., 41., 33., 50., 26., 22.,
-1., 35., 10., 25., 23., 24., 6.]
#Your code goes here
print "Mean X: %.2f"% np.mean(X)
print "Mean Y: %.2f"% np.mean(Y)
print "Mean Z: %.2f"% np.mean(Z)
Explanation: Exercise 2: Instability of Predictions on Mean Alone
a. Finding Means
Find the means of the following three data sets $X$, $Y$, and $Z$.
End of explanation
#Your code goes here
Xp = jarque_bera(X)[1]
Yp = jarque_bera(Y)[1]
Zp = jarque_bera(Z)[1]
print Xp, Yp, Zp
if Xp < 0.05:
print 'The distribution of X is likely not normal.'
else:
print 'The distribution of X is likely normal.'
if Yp < 0.05:
print 'The distribution of Y is likely not normal.'
else:
print 'The distribution of Y is likely normal.'
if Zp < 0.05:
print 'The distribution of Z is likely not normal.'
else:
print 'The distribution of Z is likely normal.'
Explanation: b. Checking for Normality
Use the jarque_bera function to conduct a Jarque-Bera test on $X$, $Y$, and $Z$ to determine whether their distributions are normal.
End of explanation
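A reminder on reading the output, with a minimal sketch on synthetic data: the Jarque-Bera null hypothesis is normality, so a small p-value (below 0.05) is evidence against a normal distribution.
_, p_norm, _, _ = jarque_bera(np.random.normal(0, 1, 1000))
_, p_unif, _, _ = jarque_bera(np.random.uniform(0, 1, 1000))
# the normal sample should typically give a large p-value, the uniform sample a small one
print 'normal sample p-value:', p_norm, '| uniform sample p-value:', p_unif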
#Your code goes here
plt.hist([X, Y, Z], normed=1, histtype='bar', stacked=False, alpha = 0.7);
plt.axvline(np.mean(X));
plt.axvline(np.mean(Y), c='r');
plt.axvline(np.mean(Z), c='g');
print "All three datasets have a similar mean, but have very different distributions. Mean alone is very non-informative about what is going on in data, and should not be used alone as an estimator."
Explanation: c. Instability of Estimates
Create a histogram of the sample distributions of $X$, $Y$, and $Z$ along with the best estimate/mean based on the sample.
End of explanation
def sharpe_ratio(asset, riskfree):
return np.mean(asset - riskfree)/np.std(asset - riskfree)
start = '2010-01-01'
end = '2015-01-01'
treasury_ret = get_pricing('BIL', fields='price', start_date=start, end_date=end).pct_change()[1:]
pricing = get_pricing('THO', fields='price', start_date=start, end_date=end)
returns = pricing.pct_change()[1:]
#Your code goes here
for window in [50, 150, 300]:
running_sharpe = [sharpe_ratio(returns[i-window+10:i], treasury_ret[i-window+10:i]) for i in range(window-10, len(returns))]
mean_rs = np.mean(running_sharpe[:-200])
std_rs = np.std(running_sharpe[:-200])
row = 'Sharpe Mean',(window),':', mean_rs,'Std', window,':',std_rs
print ("{} {:>3}{} {:<11f} {:>5} {:>3}{} {}").format(*row)
print "As we increase the length of the window, the variability of the running sharpe ratio decreases."
Explanation: Exercise 3: Sharpe Ratio Window Adjustment
a. Effect on Variability
Just as in the lecture, find the mean and standard deviation of the running sharpe ratio for THO, this time testing for multiple window lengths: 300, 150, and 50. Restrict your mean and standard deviation calculation to pricing data up to 200 days away from the end.
End of explanation
#Your code goes here
for window in [50, 150, 300]:
running_sharpe = [sharpe_ratio(returns[i-window+10:i], treasury_ret[i-window+10:i]) for i in range(window-10, len(returns))]
mean_rs = np.mean(running_sharpe[:-200])
std_rs = np.std(running_sharpe[:-200])
_, ax2 = plt.subplots()
ax2.plot(range(window-10, len(returns)), running_sharpe)
ticks = ax2.get_xticks()
ax2.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]])
ax2.axhline(mean_rs)
ax2.axhline(mean_rs + std_rs, linestyle='--')
ax2.axhline(mean_rs - std_rs, linestyle='--')
ax2.axvline(len(returns) - 200, color='pink');
plt.title(window, fontsize = 20)
plt.xlabel('Date')
plt.ylabel('Sharpe Ratio')
plt.legend(['Sharpe Ratio', 'Mean', '+/- 1 Standard Deviation'])
print "Despite the longer window Sharpe ratios having less variability, they are still unpredictable with repect to just the mean. But within the context of the standard deviation the mean has more predictive value, as we see that even in the out-of-sample periods the ratios of all window lengths stay mainly within 1 standard deviation of the mean."
Explanation: b. Out-of-Sample Instability
Plot the running sharpe ratio of all three window lengths, as well as their in-sample mean and standard deviation bars.
End of explanation
b15_df = pd.DataFrame([ 29., 22., 19., 17., 19., 19., 15., 16., 18., 25., 21.,
25., 29., 27., 36., 38., 40., 44., 49., 50., 58., 61.,
67., 69., 74., 72., 76., 81., 81., 80., 83., 82., 80.,
79., 79., 80., 74., 72., 68., 68., 65., 61., 57., 50.,
46., 42., 41., 35., 30., 27., 28., 28.],
columns = ['Weekly Avg Temp'],
index = pd.date_range('1/1/2012', periods=52, freq='W') )
#Your code goes here
b15_mean = np.mean(b15_df['Weekly Avg Temp'])
b15_std = np.std(b15_df['Weekly Avg Temp'])
print "Boston Weekly Temp Mean: ", b15_mean
print "Boston Weekly Temp Std: ", b15_std
Explanation: Exercise 4: Weather
a. Temperature in Boston
Find the mean and standard deviation of Boston weekly average temperature data for the year of 2015 stored in b15_df.
End of explanation
p15_df = pd.DataFrame([ 49., 53., 51., 47., 50., 46., 49., 51., 49., 45., 52.,
54., 54., 55., 55., 57., 56., 56., 57., 63., 63., 65.,
65., 69., 67., 70., 67., 67., 68., 68., 70., 72., 72.,
70., 72., 70., 66., 66., 68., 68., 65., 66., 62., 61.,
63., 57., 55., 55., 55., 55., 55., 48.],
columns = ['Weekly Avg Temp'],
index = pd.date_range('1/1/2012', periods=52, freq='W'))
#Your code goes here
p15_mean = np.mean(p15_df['Weekly Avg Temp'])
p15_std = np.std(p15_df['Weekly Avg Temp'])
print "Palo Alto Weekly Temp Mean: ", p15_mean
print "Palo Alto Weekly Temp Std: ", p15_std
Explanation: b. Temperature in Palo Alto
Find the mean and standard deviation of Palo Alto weekly average temperature data for the year of 2015 stored in p15_df.
End of explanation
b16_df = pd.DataFrame([ 26., 22., 20., 19., 18., 19., 17., 17., 19., 20., 23., 22., 28., 28., 35., 38., 42., 47., 49., 56., 59., 61.,
61., 70., 73., 73., 73., 77., 78., 82., 80., 80., 81., 78., 82., 78., 76., 71., 69., 66., 60., 63., 56., 50.,
44., 43., 34., 33., 31., 28., 27., 20.],
columns = ['Weekly Avg Temp'],
index = pd.date_range('1/1/2012', periods=52, freq='W'))
p16_df = pd.DataFrame([ 50., 50., 51., 48., 48., 49., 50., 45., 52., 50., 51., 52., 50., 56., 58., 55., 61., 56., 61., 62., 62., 64.,
64., 69., 71., 66., 69., 70., 68., 71., 70., 69., 72., 71., 66., 69., 70., 70., 66., 67., 64., 64., 65., 61.,
61., 59., 56., 53., 55., 52., 52., 51.],
columns = ['Weekly Avg Temp'],
index = pd.date_range('1/1/2012', periods=52, freq='W'))
#Your code goes here
b16_df.plot.hist(title = "Boston 2016 Temperature vs. Prediction");
plt.axvline(b15_mean);
p16_df.plot.hist(title = "Palo Alto 2016 Temperature vs. Prediction");
plt.axvline(p15_mean);
b_avg_error = np.mean(abs(b16_df['Weekly Avg Temp'] - b15_mean))
p_avg_error = np.mean(abs(p16_df['Weekly Avg Temp'] - p15_mean))
print "Avg of Absolute Value of Prediction Error in Boston:", b_avg_error
print "Avg of Absolute Value of Prediction Error in Palo Alto:", p_avg_error
print "\nWe know from parts a and b that the weather in Boston is much more variable than that of Palo Alto. As a result, we can predict that an estimate based on a sample mean in Boston will be less accurate than an estimate based on a sample from Palo Alto, which is confirmed by this test. The Palo Alto predictions had a much lower error than those of Boston. With mean alone we would not have been able to make any conclusions about the accuracy of our predictions."
Explanation: c. Predicting 2016 Temperatures
Use the means you found in parts a and b to attempt to predict 2016 temperature data for both cities. Do this by creating two histograms for the 2016 temperature data in b16_df and p16_df with a vertical line where the 2015 means were to represent your prediction.
End of explanation |
762 | Given the following text description, write Python code to implement the functionality described.
Description:
Reduce the array to a single element with the given operation
Function to return the final element ; Driver code
| Python Code:
def getFinalElement(n ) :
finalNum = 2
while finalNum * 2 <= n :
finalNum *= 2
return finalNum
if __name__ == "__main__":
N = 12
print(getFinalElement(N ) )
|
763 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to NLP with NLTK
Natural Language Processing (NLP) is often taught at the academic level from the perspective of computational linguists. However, as data scientists, we have a richer view of the natural language world - unstructured data that by its very nature has latent information that is important to humans. NLP practitioners have benefited from machine learning techniques to unlock meaning from large corpora, and in this class we’ll explore how to do that particularly with Python and with the Natural Language Toolkit (NLTK).
NLTK is an excellent library for machine-learning based NLP, written in Python by experts from both academia and industry. Python allows you to create rich data applications rapidly, iterating on hypotheses. The combination of Python + NLTK means that you can easily add language-aware data products to your larger analytical workflows and applications.
Quick Overview of NLTK
NLTK stands for the Natural Language Toolkit and is written by two eminent computational linguists, Steven Bird (Senior Research Associate of the LDC and professor at the University of Melbourne) and Ewan Klein (Professor of Linguistics at Edinburgh University). NLTK provides a combination of natural language corpora, lexical resources, and example grammars with language processing algorithms, methodologies and demonstrations for a very pythonic "batteries included" view of Natural Language Processing.
As such, NLTK is perfect for research-driven (hypothesis-driven) workflows for agile data science. Its suite of libraries includes
Step1: This will open up a window with which you can download the various corpora and models to a specified location. For now, go ahead and download it all as we will be exploring as much of NLTK as we can. Also take note of the download_directory - you're going to want to know where that is so you can get a detailed look at the corpora that's included. I usually export an environment variable to track this
Step2: The nltk.text.Text class is a wrapper around a sequence of simple (string) tokens - intended only for the initial exploration of text usually via the Python REPL. It has the following methods
Step3: Given some context surrounding a word, we can discover similar words, e.g. words that occur frequently in the same context and with a similar distribution
Step4: As you can see, this takes a bit of time to build the index in memory, one of the reasons it's not suggested to use this class in production code. Now that we can do searching and similarity, find the common contexts of a set of words
Step5: your turn, go ahead and explore similar words and contexts - what does the common context mean?
NLTK also uses matplotlib and pylab to display graphs and charts that can show dispersions and frequency. This is especially interesting for the corpus of inaugural addresses given by U.S. presidents.
Step6: To explore much of the built in corpus, use the following methods
Step7: These corpora export several vital methods
Step8: Your turn! Explore some of the text in the available corpora
Frequency Analyses
In statistical machine learning approaches to NLP, the very first thing we need to do is count things - especially the unigrams that appear in the text and their relationships to each other. NLTK provides two very excellent classes to enable these frequency analyses
Step9: Your turn
Step10: Preprocessing Text
NLTK is great at the preprocessing of raw text - it provides the following tools for dividing text into its constituent parts
Step11: All of these taggers work pretty well - but you can (and should) train them on your own corpora.
Stemming and Lemmatization
We have an immense number of word forms as you can see from our various counts in the FreqDist above - it is helpful for many applications to normalize these word forms (especially applications like search) into some canonical word for further exploration. In English (and many other languages) - morphological context indicates gender, tense, quantity, etc., but these subtleties might not be necessary
Step12: Note that the lemmatizer has to load the WordNet corpus which takes a bit.
Typical normalization of text for use as features in machine learning models looks something like this
Step13: Named Entity Recognition
NLTK has an excellent MaxEnt backed Named Entity Recognizer that is trained on the Penn Treebank. You can also retrain the chunker if you'd like - the code is very readable to extend it with a Gazette or otherwise.
Step14: You can also wrap the Stanford NER system, which many of you are also probably used to using. | Python Code:
import nltk
nltk.download()
Explanation: Introduction to NLP with NLTK
Natural Language Processing (NLP) is often taught at the academic level from the perspective of computational linguists. However, as data scientists, we have a richer view of the natural language world - unstructured data that by its very nature has latent information that is important to humans. NLP practitioners have benefited from machine learning techniques to unlock meaning from large corpora, and in this class we’ll explore how to do that particularly with Python and with the Natural Language Toolkit (NLTK).
NLTK is an excellent library for machine-learning based NLP, written in Python by experts from both academia and industry. Python allows you to create rich data applications rapidly, iterating on hypotheses. The combination of Python + NLTK means that you can easily add language-aware data products to your larger analytical workflows and applications.
Quick Overview of NLTK
NLTK stands for the Natural Language Toolkit and is written by two eminent computational linguists, Steven Bird (Senior Research Associate of the LDC and professor at the University of Melbourne) and Ewan Klein (Professor of Linguistics at Edinburgh University). NLTK provides a combination of natural language corpora, lexical resources, and example grammars with language processing algorithms, methodologies and demonstrations for a very pythonic "batteries included" view of Natural Language Processing.
As such, NLTK is perfect for research-driven (hypothesis-driven) workflows for agile data science. Its suite of libraries includes:
tokenization, stemming, and tagging
chunking and parsing
language modeling
classification and clustering
logical semantics
NLTK is a useful pedagogical resource for learning NLP with Python and serves as a starting place for producing production grade code that requires natural language analysis. It is also important to understand what NLTK is not:
Production ready out of the box
Lightweight
Generally applicable
Magic
NLTK provides a variety of tools that can be used to explore the linguistic domain but is not a lightweight dependency that can be easily included in other workflows, especially those that require unit and integration testing or other build processes. This stems from the fact that NLTK includes a lot of added code but also a rich and complete library of corpora that power the built-in algorithms.
The Good parts of NLTK
Preprocessing
segmentation
tokenization
PoS tagging
Word level processing
WordNet
Lemmatization
Stemming
NGrams
Utilities
Tree
FreqDist
ConditionalFreqDist
Streaming CorpusReaders
Classification
Maximum Entropy
Naive Bayes
Decision Tree
Chunking
Named Entity Recognition
Parsers Galore!
The Bad parts of NLTK
Syntactic Parsing
No included grammar (not a black box)
No Feature/Dependency Parsing
No included feature grammar
The sem package
Toy only (lambda-calculus & first order logic)
Lots of extra stuff (heavyweight dependency)
papers, chat programs, alignments, etc.
Knowing the good and the bad parts will help you explore NLTK further - looking into the source code to extract the material you need, then moving that code to production. We will explore NLTK in more detail in the rest of this notebook.
Installing NLTK
This notebook has a few dependencies, most of which can be installed via the python package manger - pip.
Python 2.7 or later (anaconda is ok)
NLTK
The NLTK corpora
The BeautifulSoup library
The gensim libary
Once you have Python and pip installed you can install NLTK as follows:
~$ pip install nltk
~$ pip install matplotlib
~$ pip install beautifulsoup4
~$ pip install gensim
Note that these will also install Numpy and Scipy if they aren't already installed.
To download the corpora, open a python interperter:
End of explanation
moby = nltk.text.Text(nltk.corpus.gutenberg.words('melville-moby_dick.txt'))
Explanation: This will open up a window with which you can download the various corpora and models to a specified location. For now, go ahead and download it all as we will be exploring as much of NLTK as we can. Also take note of the download_directory - you're going to want to know where that is so you can get a detailed look at the corpora that's included. I usually export an environment variable to track this:
~$ export NLTK_DATA=/path/to/nltk_data
Take a moment to explore what is in this directory
Working with Example Corpora
NLTK ships with a variety of corpora, let's use a few of them to do some work. Get access to the text from Moby Dick as follows:
End of explanation
moby.concordance("monstrous", 55, lines=10)
Explanation: The nltk.text.Text class is a wrapper around a sequence of simple (string) tokens - intended only for the initial exploration of text usually via the Python REPL. It has the following methods:
common_contexts
concordance
collocations
count
plot
findall
index
You shouldn't use this class in production level systems, but it is useful to explore (small) snippets of text in a meaningful fashion.
The concordance function performs a search for the given token and then also provides the surrounding context:
End of explanation
print moby.similar("ahab")
austen = nltk.text.Text(nltk.corpus.gutenberg.words('austen-sense.txt'))
print
print austen.similar("monstrous")
Explanation: Given some context surrounding a word, we can discover similar words, e.g. words that occur frequently in the same context and with a similar distribution: Distributional similarity:
End of explanation
moby.common_contexts(["ahab", "starbuck"])
Explanation: As you can see, this takes a bit of time to build the index in memory, one of the reasons it's not suggested to use this class in production code. Now that we can do searching and similarity, find the common contexts of a set of words:
End of explanation
inaugural = nltk.text.Text(nltk.corpus.inaugural.words())
inaugural.dispersion_plot(["citizens", "democracy", "freedom", "duties", "America"])
Explanation: your turn, go ahead and explore similar words and contexts - what does the common context mean?
NLTK also uses matplotlib and pylab to display graphs and charts that can show dispersions and frequency. This is especially interesting for the corpus of inaugural addresses given by U.S. presidents.
End of explanation
# Lists the various corpora and CorpusReader classes in the nltk.corpus module
for name in dir(nltk.corpus):
if name.islower() and not name.startswith('_'): print name
# For a specific corpus, list the fileids that are available:
print nltk.corpus.shakespeare.fileids()
print nltk.corpus.gutenberg.fileids()
print nltk.corpus.stopwords.fileids()
nltk.corpus.stopwords.words('english')
import string
print string.punctuation
Explanation: To explore much of the built in corpus, use the following methods:
End of explanation
corpus = nltk.corpus.brown
print corpus.paras()
print corpus.sents()
print corpus.words()
print corpus.raw()[:200] # Be careful!
Explanation: These corpora export several vital methods:
paras (iterate through each paragraph)
sents (iterate through each sentence)
words (iterate through each word)
raw (get access to the raw text)
End of explanation
reuters = nltk.corpus.reuters # Corpus of news articles
counts = nltk.FreqDist(reuters.words())
vocab = len(counts.keys())
words = sum(counts.values())
lexdiv = float(words) / float(vocab)
print "Corpus has %i types and %i tokens for a lexical diversity of %0.3f" % (vocab, words, lexdiv)
counts.B()
print counts.most_common(40) # The n most common tokens in the corpus
print counts.max() # The most frequent token in the corpus
print counts.hapaxes()[0:10] # A list of all hapax legomena
counts.freq('stipulate') * 100 # percentage of the corpus for this token
counts.plot(200, cumulative=False)
from itertools import chain
brown = nltk.corpus.brown
categories = brown.categories()
counts = nltk.ConditionalFreqDist(chain(*[[(cat, word) for word in brown.words(categories=cat)] for cat in categories]))
for category, dist in counts.items():
vocab = len(dist.keys())
tokens = sum(dist.values())
lexdiv = float(tokens) / float(vocab)
print "%s: %i types with %i tokens and lexical diveristy of %0.3f" % (category, vocab, tokens, lexdiv)
Explanation: Your turn! Explore some of the text in the available corpora
Frequency Analyses
In statistical machine learning approaches to NLP, the very first thing we need to do is count things - especially the unigrams that appear in the text and their relationships to each other. NLTK provides two very excellent classes to enable these frequency analyses:
FreqDist
ConditionalFreqDist
And these two classes serve as the foundation for most of the probability and statistical analyses that we will conduct.
First we will compute the following:
The count of words
The vocabulary (unique words)
The lexical diversity (the ratio of word count to vocabulary)
End of explanation
for ngram in nltk.ngrams(["The", "bear", "walked", "in", "the", "woods", "at", "midnight"], 5):
print ngram
Explanation: Your turn: compute the conditional frequency distribution of bigrams in a corpus
Hint:
End of explanation
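One possible way to tackle this exercise, a minimal sketch using the Reuters corpus loaded earlier:
bigram_cfd = nltk.ConditionalFreqDist(nltk.bigrams(nltk.corpus.reuters.words()))
# words that most frequently follow "the" in the Reuters corpus
print bigram_cfd['the'].most_common(10)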
text = u"Medical personnel returning to New York and New Jersey from the Ebola-riddled countries in West Africa will be automatically quarantined if they had direct contact with an infected person, officials announced Friday. New York Gov. Andrew Cuomo (D) and New Jersey Gov. Chris Christie (R) announced the decision at a joint news conference Friday at 7 World Trade Center. “We have to do more,” Cuomo said. “It’s too serious of a situation to leave it to the honor system of compliance.” They said that public-health officials at John F. Kennedy and Newark Liberty international airports, where enhanced screening for Ebola is taking place, would make the determination on who would be quarantined. Anyone who had direct contact with an Ebola patient in Liberia, Sierra Leone or Guinea will be quarantined. In addition, anyone who traveled there but had no such contact would be actively monitored and possibly quarantined, authorities said. This news came a day after a doctor who had treated Ebola patients in Guinea was diagnosed in Manhattan, becoming the fourth person diagnosed with the virus in the United States and the first outside of Dallas. And the decision came not long after a health-care worker who had treated Ebola patients arrived at Newark, one of five airports where people traveling from West Africa to the United States are encountering the stricter screening rules."
for sent in nltk.sent_tokenize(text):
print sent
print
for sent in nltk.sent_tokenize(text):
print list(nltk.wordpunct_tokenize(sent))
print
for sent in nltk.sent_tokenize(text):
print list(nltk.pos_tag(nltk.word_tokenize(sent)))
print
Explanation: Preprocessing Text
NLTK is great at the preprocessing of raw text - it provides the following tools for dividing text into its constituent parts:
sent_tokenize: a Punkt sentence tokenizer:
This tokenizer divides a text into a list of sentences, by using an unsupervised algorithm to build a model for abbreviation words, collocations, and words that start sentences. It must be trained on a large collection of plaintext in the target language before it can be used.
However, Punkt is designed to learn parameters (a list of abbreviations, etc.) unsupervised from a corpus similar to the target domain. The pre-packaged models may therefore be unsuitable: use PunktSentenceTokenizer(text) to learn parameters from the given text.
word_tokenize: a Treebank tokenizer
The Treebank tokenizer uses regular expressions to tokenize text as in Penn Treebank. This is the method that is invoked by word_tokenize(). It assumes that the text has already been segmented into sentences, e.g. using sent_tokenize().
pos_tag: a maximum entropy tagger trained on the Penn Treebank
There are several other taggers including (notably) the BrillTagger as well as the BrillTrainer to train your own tagger or tagset.
End of explanation
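A minimal sketch of training your own tagger - here a simple unigram tagger on the Brown news category with a default-tag backoff, rather than the Brill trainer mentioned above:
train_sents = nltk.corpus.brown.tagged_sents(categories='news')
unigram_tagger = nltk.UnigramTagger(train_sents, backoff=nltk.DefaultTagger('NN'))
print unigram_tagger.tag(nltk.word_tokenize("The bear walked in the woods at midnight"))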
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.lancaster import LancasterStemmer
from nltk.stem.porter import PorterStemmer
text = list(nltk.word_tokenize("The women running in the fog passed bunnies working as computer scientists."))
snowball = SnowballStemmer('english')
lancaster = LancasterStemmer()
porter = PorterStemmer()
for stemmer in (snowball, lancaster, porter):
stemmed_text = [stemmer.stem(t) for t in text]
print " ".join(stemmed_text)
from nltk.stem.wordnet import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
lemmas = [lemmatizer.lemmatize(t) for t in text]
print " ".join(lemmas)
Explanation: All of these taggers work pretty well - but you can (and should) train them on your own corpora.
Stemming and Lemmatization
We have an immense number of word forms as you can see from our various counts in the FreqDist above - it is helpful for many applications to normalize these word forms (especially applications like search) into some canonical word for further exploration. In English (and many other languages) - morphological context indicates gender, tense, quantity, etc., but these subtleties might not be necessary
Stemming = chop off affixes to get the root stem of the word:
running --> run
flowers --> flower
geese --> geese
Lemmatization = look up word form in a lexicon to get canonical lemma
women --> woman
foxes --> fox
sheep --> sheep
There are several stemmers available:
- Lancaster (English, newer and aggressive)
- Porter (English, original stemmer)
- Snowball (Many langauges, newest)
The Lemmatizer uses the WordNet lexicon
End of explanation
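Note that lemmatize() assumes a noun part of speech unless told otherwise, which is why "running" survived unchanged above; passing the verb tag changes the result (a minimal sketch):
print lemmatizer.lemmatize("running") # stays 'running' when treated as a noun
print lemmatizer.lemmatize("running", pos='v') # becomes 'run'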
import string
## Module constants
lemmatizer = WordNetLemmatizer()
stopwords = set(nltk.corpus.stopwords.words('english'))
punctuation = string.punctuation
def normalize(text):
for token in nltk.word_tokenize(text):
token = token.lower()
token = lemmatizer.lemmatize(token)
if token not in stopwords and token not in punctuation:
yield token
print list(normalize("The eagle flies at midnight."))
Explanation: Note that the lemmatizer has to load the WordNet corpus which takes a bit.
Typical normalization of text for use as features in machine learning models looks something like this:
End of explanation
print nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize("John Smith is from the United States of America and works at Microsoft Research Labs")))
Explanation: Named Entity Recognition
NLTK has an excellent MaxEnt backed Named Entity Recognizer that is trained on the Penn Treebank. You can also retrain the chunker if you'd like - the code is very readable to extend it with a Gazette or otherwise.
End of explanation
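A minimal sketch of pulling the labelled entities back out of the chunk tree, assuming the same sentence as above:
tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize("John Smith is from the United States of America and works at Microsoft Research Labs")))
entities = [(' '.join(word for word, tag in subtree.leaves()), subtree.label())
            for subtree in tree.subtrees() if subtree.label() != 'S']
print entities # pairs of (entity text, entity label), e.g. PERSON / GPE / ORGANIZATION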
import os
from nltk.tag import StanfordNERTagger
# change the paths below to point to wherever you unzipped the Stanford NER download file
stanford_root = '/Users/benjamin/Development/stanford-ner-2014-01-04'
stanford_data = os.path.join(stanford_root, 'classifiers/english.all.3class.distsim.crf.ser.gz')
stanford_jar = os.path.join(stanford_root, 'stanford-ner-2014-01-04.jar')
st = StanfordNERTagger(stanford_data, stanford_jar, 'utf-8')
for i in st.tag("John Bengfort is from the United States of America and works at Microsoft Research Labs".split()):
print '[' + i[1] + '] ' + i[0]
Explanation: You can also wrap the Stanford NER system, which many of you are also probably used to using.
End of explanation |
764 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> Hofstadter Butterfly </center>
We generate a fractal-like structure, called the Hofstadter Butterfly, which represents the energy
levels of an electron travelling through a periodic lattice under the influence of a
magnetic field.
The mathematical model related to the Hamiltonian of an electron in a two dimensional lattice,
subject to a perpendicular (uniform) magnetic field is the Almost Mathieu (AM) operator or Harper operator, which is
a discrete one-dimensional operator that acts on the Hilbert space, $\ell^2(\mathbb{Z})$, of square-summable sequences. It is defined by
Step1: We generate the Hofstadter butterfly associated to the Harper operator corresponding to K=2, $\theta=0$ and s=1 (these parameters were used by Hofstadter himself in the first numerical computation
of data).
Step2: At first sight, the effective computation of the spectrum for many Harper matrices seems cumbersome. But with Anaconda Python packages it is very fast, because Anaconda packaged MKL-powered binary versions of some of the most popular numerical/scientific Python libraries into MKL Optimizations that improve performance. (MKL stands for the Intel™ Math Kernel Library, a set of vectorized math routines that accelerate math functions).
Let us test get_butterfly_points_even()
Step3: Generating the butterfly data with the function get_butterfly_points_odd() and then with get_butterfly_points_even(), both corresponding to the same s, we get two indistinguishable plots.
The user can experiment with plotting the Hofstadter butterfly associated with each data set successively, or with their union.
Step4: An alternative is to send the figure to Plotly cloud
Step5: It is natural to ask how the spectrum of the associated Harper-type matrix, $H(p,q, s=-1)$, behaves.
Taking into account the interlacing property mentioned above we expect to get also a butterfly.
Let us plot it
Step6: The two butterflies look very similar. Even if we plot them in the same figure, we cannot distinguish the gold points from the silver ones,
because the distance between two consecutive eigenvalues, one in H(p,q, s=1) and another in H(p,q, s=-1) is very small.
For example
Step7: Define data for plotting both above Hofstadter butterflies on the same figure. | Python Code:
import platform
print(f'Python version: {platform.python_version()}')
import plotly
plotly.__version__
import plotly.graph_objs as go
import numpy as np
from numpy import pi
Explanation: <center> Hofstadter Butterfly </center>
We generate a fractal-like structure, called the Hofstadter Butterfly, which represents the energy
levels of an electron travelling through a periodic lattice under the influence of a
magnetic field.
The mathematical model related to the Hamiltonian of an electron in a two dimensional lattice,
subject to a perpendicular (uniform) magnetic field is the Almost Mathieu (AM) operator or Harper operator, which is
a discrete one-dimensional operator that acts on the Hilbert space, $\ell^2(\mathbb{Z})$, of the infinite sequences. It is defined by:
$$(H_{\Phi, K, \theta}u)_n=u_{n+1}+u_{n-1}+K\cos(n\Phi +\theta) u_n, \quad\Phi, K, \theta\in\mathbb{R}$$
When the magnetic flux penetrating the lattice corresponds to a rational number $p/q$, i.e. $\Phi=2\pi p/q$, with $p,q$ relatively prime integers, the spectrum of the above operator consists of $q$ bands (closed intervals) separated by gaps
(J. Avron, P. H. M. v. Mouche, B. Simon, On the Measure of the Spectrum
for the Almost Mathieu Operator, Commun Math Phys 132 (1990), 103-118).
For every irrational $\Phi$, and parameter $K>0$, the spectrum of the AM operator is a Cantor set
(A Avila, S Jitomirskaya, The Ten Martini Problem, Annals of math 170 (2009), 303-342).
For a flux $\Phi=2\pi p/q$, corresponding to a rational number, the potential $V_\theta(n)=K\cos(2\pi n p/q+\theta)$
is periodic and the eigenvalue problem:
$$(H_{\Phi, K, \theta}u)_n=E u_n$$
reduces to a matrix eigenvalue problem associated to the following periodic Jacobi matrices, called Harper matrices:
$$
Ha(p, q, K, \theta, s)=\left(\begin{array}{ccccccc}K\cos(2\pi 0 p/q+\theta)&1 &0&\ldots&0& 0& s\\
1& K\cos(2\pi p/q+\theta)&1&\ldots&0&0&0\\
\vdots&\vdots&\vdots&\ldots&\vdots&\vdots&\vdots\\
0&0&0&\ldots&1&K\cos(2\pi (q-2) p/q+\theta)&1\\
s&0&0&\ldots&0&1&K\cos(2\pi (q-1) p/q+\theta)\end{array}\right)$$
with $s=\pm 1$.
More precisely, the spectrum of the operator, $\sigma(H_{2\pi p/q, K, \theta})$ is a union of intervals (bands) whose ends are the interlacing eigenvalues of the two Harper type matrices $Ha(p,q, K, \theta, 1)$, $Ha(p,q, K, \theta, -1)$.
The eigenvalues $E_i$, respectively $E'_i$, $i=0, 1, \ldots, q-1$, of the two matrices can be ordered as follows:
$$E_{2i} < E_{2i+1}\leq E_{2i+2}, i\geq 0$$
respectively:
$$E'_{2i}\leq E'_{2i+1} < E'_{2i+2}, i\geq 0$$
and the two series are interlaced:
$$E_0 < E'_0 \leq E'_1 < E_1\leq E_2<E'_2\leq \cdots$$
The Hofstadter butterfly was defined and studied by the physicist Douglas Hofstadter in 1976. It is a graphical representation of all possible energies (eigenvalues) of the Harper matrices $H(p,q,s=1)$ corresponding to the rational values $p/q$ in [0,1).
Hence to get the Hofstadter butterfly we have to plot all points of coordinates, $(p/q, E_i)$, with $p/q\in [0,1)$, $i=0, 1, \ldots q-1$. For each $p/q$, $E_i$ runs over the q eigenvalues of the Harper matrix, $Ha(p, q, s)$.
For any $q<qmax$ we should compute the eigenvalues of all matrices $Ha(p, q, s=1)$, with $p\in \{1, 2, \ldots, q-1\}$, such that
$p, q$ are relatively prime numbers. But since $\cos$ is an even $2\pi$-periodic function, we have that
$$\cos(2\pi n p/q)=\cos(2\pi n(q-p)/q),$$ and thus
$$Ha(p, q, s)=Ha(q-p, q, s).$$
Hence only the spectra of the Harper matrices $Ha(p, q, s)$, with
$p\in\{1, 2, \ldots, q//2\}$ if $q$ is odd, respectively $p\in\{1, 2, \ldots, q/2-1\}$ if $q$ is even, are calculated.
End of explanation
def Gear(n, s=1):
# Generates a Gear-type matrix, i.e. a periodic Jacobi matrix G=(0,..0; 1,...1; +-1), with 0 on the principal diagonal
# 1 in the positions G[i][i+1], G[i-1][i], and G[0][n-1], G[n-1][0]=s with s=1 or -1
G=np.diag(np.ones(n - 1), -1) + np.diag(np.ones(n - 1), 1)
G[0][n-1]=s
G[n-1][0]=s
return G
def eigs_Harper(p, q, s, K):
d=[K*np.cos(2*np.pi*m*p/q) for m in range(q)] #define the diagonal of the Harper matrix Ha(p,q)
Hd= np.diag(d)
G = Gear(q, s)
return list(np.linalg.eigvalsh(Hd+G))#eigenvalues of the Harper matrix
def gcd(a, b): # Greatest Common Divisor
if b == 0: return a
return gcd(b, a % b)
def get_butterfly_points_even(qmax=101, s=1, K=2):# for qmax=101 value we define 1036 irreducible fractions p/q,
#and compute the eigvals for 1036/2=518 Harper matrices
phi=[]# the list of of rational magnetic flux values, p/q
E=[]# the list of energies
text=[]# the list of hover strings
#take all rational numbers p/q of even denominator, q<qmax
for q in range(4, qmax, 2):
for p in range(1, q//2, 2):
if gcd(p, q) == 1:
phi.extend([p/q]*q+ [(q-p)/q]*q) #insert q copies of p/q, respectively (q-p)/q,
                #because the corresponding Harper matrix H(p,q), resp H(q-p, q), has q eigvals
eigs_pq=eigs_Harper(p, q, s, K)
E.extend(eigs_pq*2)
p_text=[f"(p, q) = {(p,q)}"]*q+[f"(p,q) = {(q-p, q)}"]*q
text.extend([f"{t}<br>E = {round(e, 3)}" for t, e in zip(p_text, eigs_pq*2)])
return phi, E, text
def get_butterfly_points_odd(qmax=70, s=1, K=2):
phi=[]
E=[]
text=[]
#take all rational numbers p/q of odd denominator, q<qmax
for q in range(5, qmax, 1):
for p in range(1, q//2+1, 1):
if gcd(p, q) == 1:
phi.extend([p/q]*q+ [(q-p)/q]*q)
eigs_pq=eigs_Harper(p, q, s, K)
E.extend(eigs_pq*2)
p_text=[f"(p, q) = {(p,q)}"]*q+[f"(p,q) = {(q-p, q)}"]*q
text.extend([f"{t}<br>E = {round(e, 3)}" for t, e in zip(p_text, eigs_pq*2)])
return phi, E, text
def get_butterfly_trace(phi, E, text, color='rgb(255,215, 0)', marker_size=1):
return dict(type='scatter',
x=phi,
y=E,
mode='markers',
text=text,
marker=dict(color=color, size=marker_size),
hoverinfo='text')
Explanation: We generate the Hofstadter butterfly associated to the Harper operator corresponding to K=2, $\theta=0$ and s=1 (these parameters were used by Hofstadter himself in the first numerical computation
of data).
End of explanation
%time phi1, E1, text1=get_butterfly_points_even(qmax=101)
len(E1)#points are plotted
Explanation: At first sight, the effective computation of the spectrum for many Harper matrices seems cumbersome. But with Anaconda Python packages it is very fast, because Anaconda ships MKL-powered binary versions of some of the most popular numerical/scientific Python libraries as MKL Optimizations that improve performance. (MKL stands for Intel™ Math Kernel Library, a set of vectorized math routines that accelerate math functions).
Let us test get_butterfly_points_even():
End of explanation
data=[get_butterfly_trace(phi1, E1, text1)]
axis_style=dict(showline=True,
mirror=True,
zeroline=False,
showgrid=False,
ticklen=4)
layout=dict(title='Hofstadter butterfly<br> K=2, s=1',
font=dict(family='Balto'),
width=600, height=675,
autosize=False,
showlegend=False,
xaxis=dict(axis_style, **dict( title='Phi (magnetic flux)', dtick=0.25)),
yaxis=dict(axis_style, **dict( title='E (Energy)')),
hovermode='closest',
plot_bgcolor='rgb(10,10,10)')
fw=go.FigureWidget(data=data, layout=layout)
fw # running this cell the FigureWidget is plotted in the next one
Explanation: Generating the butterfly data with the function get_butterfly_points_odd() and then with get_butterfly_points_even(), both for the same s, we get two indistinguishable plots.
The user can experiment with plotting the Hofstadter butterfly associated with each dataset successively, or with their union.
End of explanation
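For instance, the union mentioned above could be drawn like this (a sketch that reuses the functions, data, and layout already defined; the qmax value is just an example):
# Sketch: overlay the odd-denominator data on the even-denominator data already computed.
phi_odd, E_odd, text_odd = get_butterfly_points_odd(qmax=70)
data_union = [get_butterfly_trace(phi1 + phi_odd, E1 + E_odd, text1 + text_odd)]
fw_union = go.FigureWidget(data=data_union, layout=layout)
fw_union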
import plotly.plotly as py
import warnings
warnings.filterwarnings("ignore")
py.sign_in('empet', '')
py.iplot(fw, filename='Hofstadter1')
Explanation: An alternative is to send the figure to Plotly cloud:
End of explanation
phi_m1, E_m1, text_m1=get_butterfly_points_even(qmax=101, s=-1)
data1=[get_butterfly_trace(phi_m1, E_m1, text_m1, color='rgb(192,192,192)')]
fw1=go.FigureWidget(data=data1, layout=layout)
fw1.layout.title='Hofstadter butterfly<br> s = -1, K=2'
fw1
py.iplot(fw1, filename='Hofstadterm1')
Explanation: It is natural to ask how the spectrum of the associated Harper-type matrix, $H(p,q, s=-1)$, behaves.
Taking into account the interlacing property mentioned above, we expect to get a butterfly as well.
Let us plot it:
End of explanation
eigs_Harper(3, 10, 1, 2)
eigs_Harper(3, 10, -1, 2)
Explanation: The two butterflies look very similar. Even if we plot them in the same figure, we cannot distinguish the gold points from the silver ones,
because the distance between two consecutive eigenvalues, one in H(p,q, s=1) and another in H(p,q, s=-1) is very small.
For example:
End of explanation
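A quick way to quantify those distances (a sketch, not part of the original notebook): compare the two sorted spectra elementwise.
# Sketch: elementwise gaps between the s=+1 and s=-1 spectra for (p, q) = (3, 10).
E_plus = np.sort(eigs_Harper(3, 10, 1, 2))
E_minus = np.sort(eigs_Harper(3, 10, -1, 2))
print(np.abs(E_plus - E_minus))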
data_global=[get_butterfly_trace(phi1, E1, text1, color='rgb(192,192,192)'),
get_butterfly_trace(phi_m1, E_m1, text_m1, color='rgb(255,215,0)')]
fw_global=go.FigureWidget(data=data_global, layout=layout)
fw_global.layout.title='Hofstadter Butterfly'
fw_global
Explanation: Define data for plotting both above Hofstadter butterflies on the same figure.
End of explanation |
765 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with IUCN data in shapefiles
just some logging/plotting magic for output in this notebook, nothing to worry about.
Step1: 1. Load a shapefile with all turtles data. At this point no data cleaning is done yet.
Step2: Show only first 5 species (meta)data, to get an idea of the data structure.
Step3: 2. Filter species by the name given above
Step4: 3. Plot geometry
Plot the shapefile data, and a convex hull. GeoPandas objects also know how to plot themselves directly.
Step5: Let's put a buffer around the data, and plot that
Step6: The currently filtered shape data can be saved. If overwrite=True, the shapefile it was loaded from, will be overwritten. Otherwise you can provide a new shape_file as an argument.
Step7: 4. Rasterize
Rasterize the data
Step8: Or at some point later, if you want to load the raster file
Step9: A simple plot of the raster data | Python Code:
import logging
root = logging.getLogger()
root.addHandler(logging.StreamHandler())
%matplotlib inline
Explanation: Working with IUCN data in shapefiles
just some logging/plotting magic for output in this notebook, nothing to worry about.
End of explanation
# download http://bit.ly/1R8pt20 (zipped Turtles shapefiles), and unzip them
from iSDM.species import IUCNSpecies
turtles = IUCNSpecies(name_species='Acanthochelys pallidipectoris')
turtles.load_shapefile('../data/FW_TURTLES/FW_TURTLES.shp')
Explanation: 1. Load a shapefile with all turtles data. At this point no data cleaning is done yet.
End of explanation
turtles.get_data().head()
turtles.get_data().columns # all the columns available per species geometry
Explanation: Show only first 5 species (meta)data, to get an idea of the data structure.
End of explanation
turtles.find_species_occurrences()
turtles.get_data() # datatype: geopandas.geodataframe.GeoDataFrame
turtles.save_data() # serialize all the current data to a pickle file, so it can be loaded later on
turtles.load_data()
turtles.ID # derived from "id_no" column. It's a sort of unique ID per species
Explanation: 2. Filter species by the name given above
End of explanation
turtles.get_data().plot()
turtles.data_full.geometry.convex_hull.plot()
Explanation: 3. Plot geometry
Plot the shapefile data, and a convex hull. GeoPandas objects also know how to plot themselves directly.
End of explanation
with_buffer = turtles.get_data().geometry.buffer(0.5)
with_buffer.plot()
Explanation: Let's put a buffer around the data, and plot that
End of explanation
turtles.save_shapefile(overwrite=True)
Explanation: The currently filtered shape data can be saved. If overwrite=True, the shapefile it was loaded from, will be overwritten. Otherwise you can provide a new shape_file as an argument.
End of explanation
turtles.rasterize_data(raster_file='./turtles.tif', pixel_size=0.5)
Explanation: 4. Rasterize
Rasterize the data: we need a target raster_file to save it to, and a resolution.
End of explanation
turtles_raster_data = turtles.load_raster_data()
turtles_raster_data.shape
type(turtles_raster_data)
Explanation: Or at some point later, if you want to load the raster file
End of explanation
import matplotlib.pyplot as plt
plt.figure(figsize=turtles_raster_data.shape) # careful with big images!
plt.imshow(turtles_raster_data, cmap="hot", interpolation="none")
type(turtles_raster_data)
from osgeo import gdal, ogr
geo = gdal.Open("./turtles.tif")
geo.GetGCPs()
drv = geo.GetDriver()
geo.RasterXSize
geo.GetGeoTransform()
Explanation: A simple plot of the raster data
End of explanation |
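A possible follow-up (a sketch, not in the original notebook): use the GeoTransform above to give the plot a geographic extent.
# Sketch: plot the first raster band with x/y extents derived from the GeoTIFF metadata.
gt = geo.GetGeoTransform()                          # (x_min, pixel_w, 0, y_max, 0, -pixel_h)
band = geo.GetRasterBand(1).ReadAsArray()
extent = (gt[0], gt[0] + gt[1] * geo.RasterXSize,   # left, right
          gt[3] + gt[5] * geo.RasterYSize, gt[3])   # bottom, top
plt.imshow(band, cmap="hot", interpolation="none", extent=extent)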
766 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
author
Step1: Remove eukaryotic sequences from Silva DB
Also change 'U' to 'T' to match DNA sequences from EMP
Step2: Search deblur against all (or at least a max of 1000 identical) hits
Commands are given in a qsub framework for submission to a Torque cluster
Step3: Counting stats for the more exhaustive search
Step4: Are matching sets nonredundant? | Python Code:
!source activate qiime
import re
import sys
Explanation: author: jonsan@gmail.com<br>
date: 9 Oct 2017<br>
language: Python 3.5<br>
license: BSD3<br>
matches_deblur_to_gg_silva.ipynb
End of explanation
def fix_silva(silva_fp, output_fp):
with open(output_fp, 'w') as f_o:
with open(silva_fp, 'r') as f_i:
is_target = False
        for l in f_i:
if l.startswith('>'):
is_target = False
if l.split(' ')[1].startswith('Bacteria'):
is_target = True
if l.split(' ')[1].startswith('Archaea'):
is_target = True
if is_target:
f_o.write(l.rstrip() + '\n')
else:
seq = l.rstrip()
if is_target:
f_o.write(seq.replace('U','T') + '\n')
return
silva_db_99_fp = 'SILVA_128_SSURef_Nr99_tax_silva.fasta'
silva_db_100_fp = 'SILVA_128_SSURef_tax_silva.fasta'
fix_silva(silva_db_99_fp, 'SILVA_128_SSURef_Nr99_tax_silva.prok.fasta')
fix_silva(silva_db_100_fp, 'SILVA_128_SSURef_tax_silva.prok.fasta')
Explanation: Remove eukaryotic sequences from Silva DB
Also change 'U' to 'T' to match DNA sequences from EMP
End of explanation
cmd = ('vsearch --usearch_global /projects/emp/03-otus/04-deblur/emp.90.min25.deblur.seq.fa '
'--id 1.0 '
'--maxaccepts 1000 '
'--maxrejects 32 '
'--db /home/jgsanders/ref_data/gg_13_8_otus/rep_set/99_otus.fasta '
'--uc ~/emp/mapping/Ghits_99_all.uc '
'--dbnotmatched /home/jgsanders/emp/mapping/dbs/gg_99_otus.unmatched.all.fasta '
'--dbmatched /home/jgsanders/emp/mapping/dbs/gg_99_otus.matched.all.fasta '
'--notmatched /home/jgsanders/emp/mapping/Ghits_99_unmatched.all.fasta '
'--matched /home/jgsanders/emp/mapping/Ghits_99_matched.all.fasta')
!echo "source activate qiime; $cmd" | qsub -k eo -N gg99 -l nodes=1:ppn=32 -l pmem=4gb -l walltime=12:00:00
cmd = ('vsearch --usearch_global /projects/emp/03-otus/04-deblur/emp.90.min25.deblur.seq.fa '
'--id 1.0 '
'--maxaccepts 1000 '
'--maxrejects 32 '
'--db /home/jgsanders/ref_data/gg_13_8_otus/gg_13_5.fasta '
'--uc ~/emp/mapping/Ghits_100_all.uc '
'--dbnotmatched /home/jgsanders/emp/mapping/dbs/gg_100_otus.unmatched.all.fasta '
'--dbmatched /home/jgsanders/emp/mapping/dbs/gg_100_otus.matched.all.fasta '
'--notmatched /home/jgsanders/emp/mapping/Ghits_100_unmatched.all.fasta '
'--matched /home/jgsanders/emp/mapping/Ghits_100_matched.all.fasta')
!echo "source activate qiime; $cmd" | qsub -k eo -N gg100 -l nodes=1:ppn=32 -l pmem=4gb -l walltime=12:00:00
cmd = ('vsearch --usearch_global /projects/emp/03-otus/04-deblur/emp.90.min25.deblur.seq.fa '
'--id 1.0 '
'--maxaccepts 1000 '
'--maxrejects 32 '
'--db /home/jgsanders/emp/mapping/dbs/SILVA_128_SSURef_Nr99_tax_silva.prok.fasta '
'--uc ~/emp/mapping/Shits_99_all.uc '
'--dbnotmatched /home/jgsanders/emp/mapping/dbs/SILVA_128_SSURef_Nr99_tax_silva.prok.unmatched.all.fasta '
'--dbmatched /home/jgsanders/emp/mapping/dbs/SILVA_128_SSURef_Nr99_tax_silva.prok.matched.all.fasta '
'--notmatched /home/jgsanders/emp/mapping/Shits_99_unmatched.all.fasta '
'--matched /home/jgsanders/emp/mapping/Shits_99_matched.all.fasta')
!echo "source activate qiime; $cmd" | qsub -k eo -N silva99 -l nodes=1:ppn=32 -l pmem=4gb -l walltime=12:00:00
cmd = ('vsearch --usearch_global /projects/emp/03-otus/04-deblur/emp.90.min25.deblur.seq.fa '
'--id 1.0 '
'--maxaccepts 1000 '
'--maxrejects 32 '
'--db /home/jgsanders/emp/mapping/dbs/SILVA_128_SSURef_tax_silva.prok.fasta '
'--uc ~/emp/mapping/Shits_100_all.uc '
'--dbnotmatched /home/jgsanders/emp/mapping/dbs/SILVA_128_SSURef_tax_silva.prok.unmatched.all.fasta '
'--dbmatched /home/jgsanders/emp/mapping/dbs/SILVA_128_SSURef_tax_silva.prok.matched.all.fasta '
       '--notmatched /home/jgsanders/emp/mapping/Shits_100_unmatched.all.fasta '
'--matched /home/jgsanders/emp/mapping/Shits_100_matched.all.fasta')
!echo "source activate qiime; $cmd" | qsub -k eo -N silva100 -l nodes=1:ppn=32 -l pmem=4gb -l walltime=12:00:00
Explanation: Search deblur against all (or at least a max of 1000 identical) hits
Commands are given in a qsub framework for submission to a Torque cluster
End of explanation
#get number of original seqs
deblur_seqs = !grep -c '>' /projects/emp/03-otus/04-deblur/emp.90.min25.deblur.seq.fa
gg_99_seqs = !grep -c '>' /home/jgsanders/ref_data/gg_13_8_otus/rep_set/99_otus.fasta
silva_99_seqs = !grep -c '>' /home/jgsanders/emp/mapping/dbs/SILVA_128_SSURef_Nr99_tax_silva.prok.fasta
gg_100_seqs = !grep -c '>' /home/jgsanders/ref_data/gg_13_8_otus/gg_13_5.fasta
silva_100_seqs = !grep -c '>' /home/jgsanders/emp/mapping/dbs/SILVA_128_SSURef_tax_silva.prok.fasta
print('Deblur seqs: {0}\nGG 99 seqs: {1}\nSILVA 99 seqs: {2}\n'
'GG 100 seqs: {3}\nSILVA 100 seqs: {4}'.format(deblur_seqs[0], gg_99_seqs[0], silva_99_seqs[0],
gg_100_seqs[0], silva_100_seqs[0]))
#get number of gg 100 seqs matched
deblur_matched_gg = !grep -c '>' /home/jgsanders/emp/mapping/Ghits_100_matched.all.fasta
gg_matched_deblur = !grep -c '>' /home/jgsanders/emp/mapping/dbs/gg_100_otus.matched.all.fasta
print('GG 100 seqs with Deblur hits: {0} ({1:03.1f}%)'.format(gg_matched_deblur[0], float(gg_matched_deblur[0])/float(gg_100_seqs[0])*100))
print('Deblur seqs matching GG 100: {0} ({1:03.1f}%)'.format(deblur_matched_gg[0], float(deblur_matched_gg[0])/float(deblur_seqs[0])*100))
#get number of gg 99 seqs matched
deblur_matched_gg = !grep -c '>' /home/jgsanders/emp/mapping/Ghits_99_matched.all.fasta
gg_matched_deblur = !grep -c '>' /home/jgsanders/emp/mapping/dbs/gg_99_otus.matched.all.fasta
print('GG 99 seqs with Deblur hits: {0} ({1:03.1f}%)'.format(gg_matched_deblur[0], float(gg_matched_deblur[0])/float(gg_99_seqs[0])*100))
print('Deblur seqs matching GG 99: {0} ({1:03.1f}%)'.format(deblur_matched_gg[0], float(deblur_matched_gg[0])/float(deblur_seqs[0])*100))
#get number of silva 100 seqs matched
deblur_matched_silva = !grep -c '>' /home/jgsanders/emp/mapping/Shits_100_matched.all.fasta
silva_matched_deblur = !grep -c '>' /home/jgsanders/emp/mapping/dbs/SILVA_128_SSURef_tax_silva.prok.matched.all.fasta
print('Silva 100 seqs with Deblur hits: {0} ({1:03.1f}%)'.format(silva_matched_deblur[0], float(silva_matched_deblur[0])/float(silva_100_seqs[0])*100))
print('Deblur seqs matching Silva 100: {0} ({1:03.1f}%)'.format(deblur_matched_silva[0], float(deblur_matched_silva[0])/float(deblur_seqs[0])*100))
#get number of silva 99 seqs matched
deblur_matched_silva = !grep -c '>' /home/jgsanders/emp/mapping/Shits_99_matched.all.fasta
silva_matched_deblur = !grep -c '>' /home/jgsanders/emp/mapping/dbs/SILVA_128_SSURef_Nr99_tax_silva.prok.matched.all.fasta
print('Silva 99 seqs with Deblur hits: {0} ({1:03.1f}%)'.format(silva_matched_deblur[0], float(silva_matched_deblur[0])/float(silva_99_seqs[0])*100))
print('Deblur seqs matching Silva 99: {0} ({1:03.1f}%)'.format(deblur_matched_silva[0], float(deblur_matched_silva[0])/float(deblur_seqs[0])*100))
Explanation: Counting stats for the more exhaustive search
End of explanation
import pandas as pd
gg100_df = pd.read_csv('../Ghits_100_all.uc',sep='\t',header=None)
gg100_hits = set(gg100_df[8])
silva100_df = pd.read_csv('../Shits_100.uc',sep='\t',header=None)
silva100_hits = set(silva100_df[8])
len(gg100_hits)
len(silva100_hits)
len(gg100_hits | silva100_hits)
len(silva100_hits - gg100_hits)
len(gg100_hits - silva100_hits)
Explanation: Are matching sets nonredundant?
End of explanation |
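One more way to summarize the overlap (a sketch added here, not from the original notebook):
# Sketch: size of the shared hit set and its Jaccard index.
shared = gg100_hits & silva100_hits
print(len(shared), round(len(shared) / len(gg100_hits | silva100_hits), 3))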
767 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting ready to implement the Schelling model
Goal for this assignment
The goal of this assignment is to finish up the two functions that you started in class on the first day of this project, to ensure that you're ready to hit the ground running when you get back together with your group.
You are welcome to work with your group on this pre-class assignment - just make sure to list who you worked with below. Also, everybody needs to turn in their own solutions!
Your name
// put your name here!
Function 1
Step1: Function 2
Step3: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment! | Python Code:
# Put your code here, using additional cells if necessary.
Explanation: Getting ready to implement the Schelling model
Goal for this assignment
The goal of this assignment is to finish up the two functions that you started in class on the first day of this project, to ensure that you're ready to hit the ground running when you get back together with your group.
You are welcome to work with your group on this pre-class assignment - just make sure to list who you worked with below. Also, everybody needs to turn in their own solutions!
Your name
// put your name here!
Function 1: Creating a game board
Function 1: Write a function that creates a one-dimensional game board composed of agents of two different types (0 and 1, X and O, stars and pluses... whatever you want), where the agents are assigned to spots randomly with a 50% chance of being either type. As arguments to the function, take in (1) the number of spots in the game board (setting the default to 32) and (2) a random seed that you will use to initialize the board (again with some default number), and return your game board. (Hint: which makes more sense to describe the game board, a list or a Numpy array? What are the tradeoffs?) Show that your function is behaving correctly by printing out the returned game board.
End of explanation
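One minimal sketch of such a function (the name, the default seed, and the choice of a NumPy array are illustrative, not prescribed by the assignment):
import numpy as np

def create_board(n_spots=32, seed=42):
    rng = np.random.RandomState(seed)        # reproducible board for a given seed
    return rng.randint(0, 2, size=n_spots)   # each spot is 0 or 1 with 50% probability

board = create_board()
print(board)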
# Put your code here, using additional cells if necessary.
Explanation: Function 2: deciding if an agent is happy
Write a function that takes the game board generated by the function you wrote above and determines whether an agent at position i in the game board of a specified type is happy for a game board of any size and a neighborhood of size N (i.e., from position i-N to i+N), and returns that information. Make sure to check that position i is actually inside the game board (i.e., make sure the request makes sense), and ensure that it behaves correctly for agents near the edges of the game board. Show that your function is behaving correctly by having it check every position in the game board you generated previously, and decide whether the agent in each spot is happy or not. Verify by eye that it's behaving correctly. (Hint: You're going to use this later, when you're trying to decide where to put an agent. Should you write the function assuming that the agent is already in the board, or that you're testing to see whether or not you'd want to put it there?)
End of explanation
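A possible sketch of the happiness check, reusing the board from the sketch above (illustrative only; in particular, the "at least half of the neighbors share my type" rule is an assumption, since the assignment leaves the exact criterion to you):
def is_happy(board, i, agent_type, N=1):
    if i < 0 or i >= len(board):                        # make sure the request makes sense
        raise IndexError("position i is outside the game board")
    lo, hi = max(0, i - N), min(len(board), i + N + 1)  # clip the neighborhood at the board edges
    neighbors = [board[j] for j in range(lo, hi) if j != i]
    same = sum(1 for n in neighbors if n == agent_type)
    return same >= len(neighbors) / 2.0                 # assumed rule: at least half share the agent's type

# check every position of the board generated above
print([is_happy(board, i, board[i]) for i in range(len(board))])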
from IPython.display import HTML
HTML("""
<iframe
    src="https://goo.gl/forms/M7YCyE1OLzyOK7gH3?embedded=true"
    width="80%"
    height="1200px"
    frameborder="0"
    marginheight="0"
    marginwidth="0">
    Loading...
</iframe>
""")
Explanation: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
End of explanation |
768 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 1
Consider the dataset of monthly precipitation (mm) over the seven years from 2009 to 2015, which has the following format
Step1: 2) Definition of the compute_mean() function
Step2: 3) Definition of the count_elements_greater_than() function
Step3: 4) Definition of the input parameters
Step4: 5) Reading the dataset into the list of its rows
Step5: 6) Extraction of the list of years
Step6: NOTE
Step7: 7) Extraction of the list of months
Step8: 8) Construction of the matrix of (integer) rainfall values
a) Obtain from the list of dataset rows the list of lists of monthly rainfall values as integer objects.
Step9: b) Convert the list into a matrix.
Step10: NOTE
Step11: NOTE
Step12: NOTE
Step13: 11) Output of the total annual precipitation
a) Computation of the list of total annual precipitation values.
Step14: b) Construction of the output list of N=7 tuples of size 2 in which each tuple contains the year as the first element and the total precipitation as the second element.
Step15: 12) Output of the yearly number of months with at least threshold mm of rain
a) Computation of the list of the yearly number of months with at least threshold mm of rain.
Step16: b) Construction of the output list of N=7 tuples of size 2 in which each tuple contains the year as the first element and the number of months with at least threshold mm of rain as the second element. | Python Code:
import numpy as np
Explanation: Exercise 1
Consider the dataset of monthly precipitation (mm) over the seven years from 2009 to 2015, which has the following format: 13 records of tab-separated fields, of which the first is the header record of the years (made up of 7 fields) and the other 12 are the monthly rainfall records (one record per month), each made up of 8 fields of which the first is the month name and the remaining 7 are the monthly rainfall values across the years.
You are asked to prepare a notebook that computes:
the average monthly precipitation
the total annual precipitation
for each of the years considered, the number of months with rainfall above the threshold S
Input parameters:
- the precipitation dataset
- the threshold S
Requirements:
- the notebook must also work for a dataset containing measurements for a number of years different from 7
define the function compute_mean() that takes a list of numbers as input and returns their mean value
define the function count_elements_greater_than() that takes a list of numbers and a threshold value as input and returns the number of values exceeding that threshold
How to produce the output?
produce the average monthly rainfall as a list of 12 tuples of size 2 in which the first element (string) is the first three letters of the month name in uppercase and the second element (float) is its average rainfall value.
produce the total annual rainfall as a list of N (N=7 for this dataset) tuples of size 2 in which the first element (string) is the year and the second element (integer) is its total rainfall value.
for each year, produce the number of months with rainfall above the threshold S as a list of N tuples of size 2 in which the first element (string) is the year and the second element (integer) is the number of months with at least S mm of rain
Solution
1) Importing the numpy module
End of explanation
def compute_mean(list_of_numbers):
return float(sum(list_of_numbers))/len(list_of_numbers)
Explanation: 2) Definition of the compute_mean() function
End of explanation
def count_elements_greater_than(list_of_numbers, threshold):
bool_list = [number >= threshold for number in list_of_numbers]
return bool_list.count(True)
Explanation: 3) Definition of the count_elements_greater_than() function
End of explanation
input_file_name = './input-precipitazioni.txt'
threshold = 100
Explanation: 4) Definition of the input parameters
End of explanation
with open(input_file_name, 'r') as input_file:
file_rows = input_file.readlines()
file_rows
Explanation: 5) Reading the dataset into the list of its rows
End of explanation
years = file_rows.pop(0).rstrip().split()
years
Explanation: 6) Extraction of the list of years
End of explanation
file_rows
Explanation: NOTE: remove the header row of years from the file_rows list.
End of explanation
months = [row.rstrip().split()[0] for row in file_rows]
months
Explanation: 7) Extraction of the list of months
End of explanation
rains_per_month = [list(map(int, row.rstrip().split()[1:])) for row in file_rows]
rains_per_month
Explanation: 8) Construction of the matrix of (integer) rainfall values
a) Obtain from the list of dataset rows the list of lists of monthly rainfall values as integer objects.
End of explanation
rains_per_month = np.array(rains_per_month)
rains_per_month
Explanation: b) Convert the list into a matrix.
End of explanation
rains_per_year = rains_per_month.transpose()
rains_per_year
Explanation: NOTE: each row of the matrix contains all the yearly rainfall values of a given month.
9) Computation of the transpose of the matrix of (integer) rainfall values
End of explanation
monthly_averages = [compute_mean(rain_list) for rain_list in rains_per_month]
monthly_averages
Explanation: NOTE: each row of the transposed matrix contains all the monthly rainfall values of a given year.
10) Output of the average monthly precipitation
a) Computation of the list of average monthly precipitation values.
End of explanation
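Since rains_per_month is now a NumPy array, the same results can also be obtained with vectorized calls (a sketch, not part of the original exercise):
rains_per_month.mean(axis=1)   # mean over the years, one value per month
rains_per_year.sum(axis=1)     # total over the months, one value per year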
months = [month[:3].upper() for month in months]
monthly_output = list(zip(months, monthly_averages))
monthly_output
Explanation: NOTE: the i-th value in the list is the mean for the i-th month.
b) Construction of the output list of 12 tuples of size 2 in which each tuple contains the first three letters of the month name in uppercase as the first element and the monthly mean as the second element.
End of explanation
yearly_total = [sum(rain_list) for rain_list in rains_per_year]
yearly_total
Explanation: 11) Output of the total annual precipitation
a) Computation of the list of total annual precipitation values.
End of explanation
yearly_output1 = list(zip(years, yearly_total))
yearly_output1
Explanation: b) Construction of the output list of N=7 tuples of size 2 in which each tuple contains the year as the first element and the total precipitation as the second element.
End of explanation
yearly_count = [count_elements_greater_than(rain_list, threshold) for rain_list in rains_per_year]
yearly_count
Explanation: 12) Output of the yearly number of months with at least threshold mm of rain
a) Computation of the list of the yearly number of months with at least threshold mm of rain.
End of explanation
yearly_output2 = list(zip(years, yearly_count))
yearly_output2
Explanation: b) Construction of the output list of N=7 tuples of size 2 in which each tuple contains the year as the first element and the number of months with at least threshold mm of rain as the second element.
End of explanation |
769 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: 1. Define the Sweep
Weights & Biases sweeps give you powerful levers to configure your sweeps exactly how you want them, with just a few lines of code. The sweeps config can be defined as a dictionary or a YAML file.
Let's walk through some of them together
Step2: 2. Initialize the Sweep
Step3: Define Your Neural Network
Before we can run the sweep, let's define a function that creates and trains our neural network.
In the function below, we define a simplified version of a VGG19 model in Keras, and add the following lines of code to log model metrics, visualize performance and output, and track our experiments easily
Step4: 3. Run the sweep agent | Python Code:
# WandB – Install the W&B library
%pip install wandb -q
import wandb
from wandb.keras import WandbCallback
!pip install wandb -qq
from keras.datasets import fashion_mnist
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Dense, Flatten
from keras.utils import np_utils
from keras.optimizers import SGD
from keras.optimizers import RMSprop, SGD, Adam, Nadam
from keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, Callback, EarlyStopping
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
import wandb
from wandb.keras import WandbCallback
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
labels=["T-shirt/top","Trouser","Pullover","Dress","Coat",
"Sandal","Shirt","Sneaker","Bag","Ankle boot"]
img_width=28
img_height=28
X_train = X_train.astype('float32') / 255.
X_test = X_test.astype('float32') / 255.
# reshape input data
X_train = X_train.reshape(X_train.shape[0], img_width, img_height, 1)
X_test = X_test.reshape(X_test.shape[0], img_width, img_height, 1)
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
Explanation: <a href="https://colab.research.google.com/github/lukas/ml-class/blob/master/examples/keras-fashion/sweeps.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Introduction to Hyperparameter Sweeps
Searching through high-dimensional hyperparameter spaces to find the most performant model can get unwieldy very fast. Hyperparameter sweeps provide an organized and efficient way to conduct a battle royale of models and pick the most accurate model. They enable this by automatically searching through combinations of hyperparameter values (e.g. learning rate, batch size, number of hidden layers, optimizer type) to find the optimal values.
In this tutorial we'll see how you can run sophisticated hyperparameter sweeps in 3 easy steps using Weights and Biases.
Sweeps: An Overview
Running a hyperparameter sweep with Weights & Biases is very easy. There are just 3 simple steps:
Define the sweep: we do this by creating a dictionary or a YAML file that specifies the parameters to search through, the search strategy, the optimization metric, and so on.
Initialize the sweep: with one line of code we initialize the sweep and pass in the dictionary of sweep configurations:
sweep_id = wandb.sweep(sweep_config)
Run the sweep agent: also accomplished with one line of code, we call wandb.agent() and pass the sweep_id to run, along with a function that defines your model architecture and trains it:
wandb.agent(sweep_id, function=train)
And voila! That's all there is to running a hyperparameter sweep! In the notebook below, we'll walk through these 3 steps in more detail.
We highly encourage you to fork this notebook, tweak the parameters, or try the model with your own dataset!
Resources
Sweeps docs →
Launching from the command line →
Setup
Start out by installing the experiment tracking library and setting up your free W&B account:
pip install wandb – Install the W&B library
import wandb – Import the wandb library
End of explanation
# Configure the sweep – specify the parameters to search through, the search strategy, the optimization metric et all.
sweep_config = {
'method': 'random', #grid, random
'metric': {
'name': 'accuracy',
'goal': 'maximize'
},
'parameters': {
'epochs': {
'values': [2, 5, 10]
},
'batch_size': {
'values': [256, 128, 64, 32]
},
'dropout': {
'values': [0.3, 0.4, 0.5]
},
'conv_layer_size': {
'values': [16, 32, 64]
},
'weight_decay': {
'values': [0.0005, 0.005, 0.05]
},
'learning_rate': {
'values': [1e-2, 1e-3, 1e-4, 3e-4, 3e-5, 1e-5]
},
'optimizer': {
'values': ['adam', 'nadam', 'sgd', 'rmsprop']
},
'activation': {
'values': ['relu', 'elu', 'selu', 'softmax']
}
}
}
Explanation: 1. Define the Sweep
Weights & Biases sweeps give you powerful levers to configure your sweeps exactly how you want them, with just a few lines of code. The sweeps config can be defined as a dictionary or a YAML file.
Let's walk through some of them together:
* Metric – This is the metric the sweeps are attempting to optimize. Metrics can take a name (this metric should be logged by your training script) and a goal (maximize or minimize).
* Search Strategy – Specified using the 'method' variable. We support several different search strategies with sweeps.
* Grid Search – Iterates over every combination of hyperparameter values.
* Random Search – Iterates over randomly chosen combinations of hyperparameter values.
* Bayesian Search – Creates a probabilistic model that maps hyperparameters to probability of a metric score, and chooses parameters with high probability of improving the metric. The objective of Bayesian optimization is to spend more time in picking the hyperparameter values, but in doing so trying out fewer hyperparameter values.
* Stopping Criteria – The strategy for determining when to kill off poorly performing runs, and try more combinations faster. We offer several custom scheduling algorithms like HyperBand and Envelope.
* Parameters – A dictionary containing the hyperparameter names, and discreet values, max and min values or distributions from which to pull their values to sweep over.
You can find a list of all configuration options here.
End of explanation
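For example, the Bayesian search and HyperBand early stopping mentioned above could be requested with a config along these lines (a sketch; the key names follow the W&B sweep configuration schema, so double-check them against the docs):
# Sketch: Bayesian search with HyperBand early termination.
bayes_sweep_config = {
    'method': 'bayes',
    'metric': {'name': 'accuracy', 'goal': 'maximize'},
    'early_terminate': {'type': 'hyperband', 'min_iter': 3},
    'parameters': {
        'learning_rate': {'min': 1e-5, 'max': 1e-2},
        'batch_size': {'values': [32, 64, 128]}
    }
}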
# Initialize a new sweep
# Arguments:
# – sweep_config: the sweep config dictionary defined above
# – entity: Set the username for the sweep
# – project: Set the project name for the sweep
sweep_id = wandb.sweep(sweep_config, entity="sweep", project="sweeps-tutorial")
Explanation: 2. Initialize the Sweep
End of explanation
# The sweep calls this function with each set of hyperparameters
def train():
# Default values for hyper-parameters we're going to sweep over
config_defaults = {
'epochs': 5,
'batch_size': 128,
'weight_decay': 0.0005,
'learning_rate': 1e-3,
'activation': 'relu',
'optimizer': 'nadam',
'hidden_layer_size': 128,
'conv_layer_size': 16,
'dropout': 0.5,
'momentum': 0.9,
'seed': 42
}
# Initialize a new wandb run
wandb.init(config=config_defaults)
# Config is a variable that holds and saves hyperparameters and inputs
config = wandb.config
# Define the model architecture - This is a simplified version of the VGG19 architecture
model = Sequential()
# Set of Conv2D, Conv2D, MaxPooling2D layers with 32 and 64 filters
model.add(Conv2D(filters = config.conv_layer_size, kernel_size = (3, 3), padding = 'same',
activation ='relu', input_shape=(img_width, img_height,1)))
model.add(Dropout(config.dropout))
model.add(Conv2D(filters = config.conv_layer_size, kernel_size = (3, 3),
padding = 'same', activation ='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(config.hidden_layer_size, activation ='relu'))
model.add(Dense(num_classes, activation = "softmax"))
# Define the optimizer
if config.optimizer=='sgd':
optimizer = SGD(lr=config.learning_rate, decay=1e-5, momentum=config.momentum, nesterov=True)
elif config.optimizer=='rmsprop':
optimizer = RMSprop(lr=config.learning_rate, decay=1e-5)
elif config.optimizer=='adam':
optimizer = Adam(lr=config.learning_rate, beta_1=0.9, beta_2=0.999, clipnorm=1.0)
elif config.optimizer=='nadam':
optimizer = Nadam(lr=config.learning_rate, beta_1=0.9, beta_2=0.999, clipnorm=1.0)
model.compile(loss = "categorical_crossentropy", optimizer = optimizer, metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=config.batch_size,
epochs=config.epochs,
validation_data=(X_test, y_test),
callbacks=[WandbCallback(data_type="image", validation_data=(X_test, y_test), labels=labels),
EarlyStopping(patience=10, restore_best_weights=True)])
Explanation: Define Your Neural Network
Before we can run the sweep, let's define a function that creates and trains our neural network.
In the function below, we define a simplified version of a VGG19 model in Keras, and add the following lines of code to log model metrics, visualize performance and output, and track our experiments easily:
* wandb.init() – Initialize a new W&B run. Each run is a single execution of the training script.
* wandb.config – Save all your hyperparameters in a config object. This lets you use our app to sort and compare your runs by hyperparameter values.
* callbacks=[WandbCallback()] – Fetch all layer dimensions, model parameters and log them automatically to your W&B dashboard.
* wandb.log() – Logs custom objects – these can be images, videos, audio files, HTML, plots, point clouds etc. Here we log images of Fashion-MNIST items overlaid with actual and predicted labels.
End of explanation
# Initialize a new sweep
# Arguments:
# – sweep_id: the sweep_id to run - this was returned above by wandb.sweep()
# – function: function that defines your model architecture and trains it
wandb.agent(sweep_id, train)
Explanation: 3. Run the sweep agent
End of explanation |
770 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing color-color tracks of the stellar templates
The goal of this notebook is to compare the stellar loci (in various color-color spaces) of the (theoretical) templates to the observed loci.
Step1: Read a random sweep, select stars, and correct the observed fluxes for reddening
Step6: Load the filter curves, the stellar templates, and get synthetic colors.
Step7: Generate color-color plots. | Python Code:
import os
import warnings
import numpy as np
import fitsio
import matplotlib.pyplot as plt
from speclite import filters
from astropy import constants
import astropy.units as u
from desisim.io import read_basis_templates
import seaborn as sns
%pylab inline
sns.set(style='white', font_scale=1.8, font='sans-serif', palette='Set2')
setcolors = sns.color_palette()
Explanation: Comparing color-color tracks of the stellar templates
The goal of this notebook is to compare the stellar loci (in various color-color spaces) of the (theoretical) templates to the observed loci.
End of explanation
def read_and_dered():
bright, faint = 18, 19.5
sweepfile = 'sweep-240p000-250p005.fits'
print('Reading {}...'.format(sweepfile))
cat = fitsio.read(sweepfile, ext=1, upper=True)
these = np.where( (np.char.strip(cat['TYPE'].astype(str)) == 'PSF') *
(cat['DECAM_FLUX'][..., 2] > 1e9 * 10**(-0.4*faint)) *
(cat['DECAM_FLUX'][..., 2] < 1e9 * 10**(-0.4*bright))
)[0]
cat = cat[these]
print('...and selected {} stars with {} < r < {}.'.format(len(cat), bright, faint))
for prefix in ('DECAM', 'WISE'):
cat['{}_FLUX'.format(prefix)] = ( cat['{}_FLUX'.format(prefix)] /
cat['{}_MW_TRANSMISSION'.format(prefix)] )
cat['{}_FLUX_IVAR'.format(prefix)] = ( cat['{}_FLUX_IVAR'.format(prefix)] *
cat['{}_MW_TRANSMISSION'.format(prefix)]**2 )
return cat
cat = read_and_dered()
Explanation: Read a random sweep, select stars, and correct the observed fluxes for reddening
End of explanation
def obsflux2colors(cat):
    """Convert observed DECam/WISE fluxes to magnitudes and colors."""
cc = dict()
with warnings.catch_warnings(): # ignore missing fluxes (e.g., for QSOs)
warnings.simplefilter('ignore')
for ii, band in zip((1, 2, 4), ('g', 'r', 'z')):
cc[band] = 22.5 - 2.5 * np.log10(cat['DECAM_FLUX'][..., ii].data)
for ii, band in zip((0, 1), ('W1', 'W2')):
cc[band] = 22.5 - 2.5 * np.log10(cat['WISE_FLUX'][..., ii].data)
cc['gr'] = cc['g'] - cc['r']
cc['gz'] = cc['g'] - cc['z']
cc['rz'] = cc['r'] - cc['z']
cc['rW1'] = cc['r'] - cc['W1']
cc['zW1'] = cc['z'] - cc['W1']
cc['W1W2'] = cc['W1'] - cc['W2']
return cc
def synthflux2colors(synthflux):
    """Convert the synthesized DECam/WISE fluxes to colors."""
cc = dict(
r = 22.5 - 2.5 * np.log10(synthflux[1, :]),
gr = -2.5 * np.log10(synthflux[0, :] / synthflux[1, :]),
rz = -2.5 * np.log10(synthflux[1, :] / synthflux[2, :]),
gz = -2.5 * np.log10(synthflux[0, :] / synthflux[2, :]),
rW1 = -2.5 * np.log10(synthflux[1, :] / synthflux[3, :]),
zW1 = -2.5 * np.log10(synthflux[2, :] / synthflux[3, :]),
)
return cc
def star_synthflux():
    """Read the DESI stellar templates and synthesize photometry."""
flux, wave, meta = read_basis_templates(objtype='STAR')
nt = len(meta)
print('Read {} DESI templates.'.format(nt))
phot = filt.get_ab_maggies(flux, wave, mask_invalid=False)
synthflux = np.vstack( [phot[ff].data for ff in filts] )
return synthflux
def pickles_synthflux():
    """Read the Pickles+98 stellar templates and synthesize photometry."""
picklefile = os.path.join(os.getenv('CATALOGS_DIR'), '98pickles', '98pickles.fits')
data = fitsio.read(picklefile, ext=1)
print('Read {} Pickles templates.'.format(len(data)))
wave = data['WAVE'][0, :]
flux = data['FLUX']
padflux, padwave = filt.pad_spectrum(flux, wave, method='edge')
phot = filt.get_ab_maggies(padflux, padwave, mask_invalid=False)
synthflux = np.vstack( [phot[ff].data for ff in filts] )
return synthflux
filts = ('decam2014-g', 'decam2014-r', 'decam2014-z', 'wise2010-W1', 'wise2010-W2')
filt = filters.load_filters(*filts)
starcol = synthflux2colors(star_synthflux())
picklecol = synthflux2colors(pickles_synthflux())
obscol = obsflux2colors(cat)
Explanation: Load the filter curves, the stellar templates, and get synthetic colors.
End of explanation
grrange = (-0.6, 2.2)
gzrange = (0.0, 4.0)
rzrange = (-0.6, 2.8)
zW1range = (-2.5, 0.0)
def grz(pngfile=None):
fig, ax = plt.subplots(figsize=(10, 6))
if False:
hb = ax.scatter(obscol['rz'], obscol['gr'], c=obscol['r'], s=1,
edgecolor='none')
else:
hb = ax.hexbin(obscol['rz'], obscol['gr'], mincnt=5,
bins='log', gridsize=150)
ax.scatter(picklecol['rz'], picklecol['gr'], marker='s',
s=40, linewidth=1, alpha=0.5, label='Pickles+98', c='r')
ax.scatter(starcol['rz'], starcol['gr'], marker='o',
s=10, linewidth=1, alpha=0.8, label='STAR Templates', c='b')
ax.set_xlabel('r - z')
ax.set_ylabel('g - r')
ax.set_xlim(rzrange)
ax.set_ylim(grrange)
lgnd = ax.legend(loc='upper left', frameon=False, fontsize=18)
lgnd.legendHandles[0]._sizes = [100]
lgnd.legendHandles[1]._sizes = [100]
cb = fig.colorbar(hb, ax=ax)
cb.set_label(r'log$_{10}$ (Number of 18<r<19.5 Stars per Bin)')
if pngfile:
fig.savefig(pngfile)
def gzW1(pngfile=None):
fig, ax = plt.subplots(figsize=(10, 6))
hb = ax.hexbin(obscol['zW1'], obscol['gz'], mincnt=10,
bins='log', gridsize=150)
ax.scatter(starcol['zW1'], starcol['gz'], marker='o',
s=10, alpha=0.5, label='STAR Templates', c='b')
ax.set_xlabel('z - W1')
ax.set_ylabel('g - z')
ax.set_ylim(gzrange)
ax.set_xlim(zW1range)
lgnd = ax.legend(loc='upper left', frameon=False, fontsize=18)
lgnd.legendHandles[0]._sizes = [100]
cb = fig.colorbar(hb, ax=ax)
cb.set_label(r'log$_{10}$ (Number of 18<r<19.5 Stars per Bin)')
if pngfile:
fig.savefig(pngfile)
gzW1()
grz()
Explanation: Generate color-color plots.
End of explanation |
771 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AFM-MINER
Tech Review
Garrett, Jessica, and Wesley
Step1: <a id='visualize'></a>
Finding objects in the data
Step2: Cons of the edge-based method and the OpenCV Canny edge detector
Step3: Correlating two 2-D arrays
Does as the name suggests
Use
Helpful if you are working with images that are generated from 2-D arrays.
Drawbacks
Not sure | Python Code:
import cv2
import numpy as np
import scipy.io
import scipy.optimize
from scipy import stats
import matplotlib.pyplot as plt
from matplotlib import gridspec
import pandas
#import magni
import math
from PIL import Image
#import seaborn as sns; sns.set()
from sklearn.cross_validation import train_test_split
from sklearn import metrics
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
%matplotlib inline
Explanation: AFM-MINER
Tech Review
Garrett, Jessica, and Wesley
End of explanation
img = cv2.imread('height.jpg',0)
#Python: cv.Canny(image, edges, threshold1, threshold2, aperture_size=3) → None
edges = cv2.Canny(img,0,20,3)
plt.subplot(121),plt.imshow(img,cmap = 'gray')
plt.title('Original Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(edges,cmap = 'gray')
plt.title('Edge Image'), plt.xticks([]), plt.yticks([])
plt.show()
Explanation: <a id='visualize'></a>
Finding objects in the data: Using the Canny edge detector in OpenCV
Edge vs region based techniques:
* Edge detection based methods of finding objects focus on the expectation of a gradient at the object's edge and the background
* Region based methods focus on the object itself
How the Canny edge detection algorithm works:
1. Use a convolution operator to remove noise. In this case we apply a 5x5 Gaussian operator/filter.
* The idea behind filter-based approaches to reducing noise is to consider a window of surrounding pixels and use a combination of their values to replace the current pixel.
2. Use a convolution method to find edge gradients and angles. That is, weight the discrete sum of the image with another discrete function, the spatial mask. In this case, the Sobel operator is the spatial mask. OpenCV rounds this angle to one of four directions; the gradient is always perpendicular to the edge direction.
3. Use non-maximum suppression to set pixel values to zero if they are not at a local maximum in their neighborhood in the direction of the gradient. An effective method of thinning the edge.
4. Perform hysteresis thresholding. Rather than setting a single cutoff for gradient values, pixels greater than a max value are binned as edges, pixels less than a min value are binned as not edges, and anything in between is classified as an edge only if it is connected to max-value-binned edges.
End of explanation
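As a small illustration of the knobs discussed above (a sketch, not part of the original analysis): smooth explicitly and enlarge the Sobel aperture before running Canny; the blur size, thresholds, and aperture below are arbitrary example values.
blurred = cv2.GaussianBlur(img, (5, 5), 0)                          # explicit 5x5 Gaussian smoothing
edges_tuned = cv2.Canny(blurred, 50, 150, apertureSize=5, L2gradient=True)
plt.imshow(edges_tuned, cmap='gray')
plt.title('Tuned Edge Image')
plt.show()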
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider, RadioButtons
amplitude_slider = FloatSlider(min=0.1, max=1.0, step=0.1, value=0.2)
color_buttons = RadioButtons(options=['blue', 'green', 'red'])
@interact(amplitude=amplitude_slider, color=color_buttons)
def plot(amplitude, color):
fig, ax = plt.subplots(figsize=(4, 3),
subplot_kw={'axisbg':'#EEEEEE',
'axisbelow':True})
ax.grid(color='w', linewidth=2, linestyle='solid')
x = np.linspace(0, 10, 1000)
ax.plot(x, amplitude * np.sin(x), color=color,
lw=5, alpha=0.4)
ax.set_xlim(0, 10)
ax.set_ylim(-1.1, 1.1)
import numpy as np
import scipy.io
import scipy.optimize
from scipy import stats
import matplotlib.pyplot as plt
from matplotlib import gridspec
from ipywidgets import interact, FloatSlider, RadioButtons, IntSlider
import pandas
import math
from sklearn.cross_validation import train_test_split
from sklearn import metrics
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
%matplotlib inline
def myround(x, base):
return (float(base) * round(float(x)/float(base)))
params = {
'lines.markersize' : 3,
'axes.labelsize': 10,
'font.size': 10,
'legend.fontsize': 10,
'xtick.labelsize': 10,
'ytick.labelsize': 10,
'text.usetex': False,
}
#plp.rcParams.update(params)
plt.rcParams.update(params)
Ht2 = np.loadtxt('./data/MABr.1.Ht.txt',skiprows=0, dtype=np.float64)
Po2 = np.loadtxt('./data/MABr.1.Po.txt',skiprows=0, dtype=np.float64)
Ph2 = np.loadtxt('./data/MABr.1.Ph.txt',skiprows=0, dtype=np.float64)
Am2 = np.loadtxt('./data/MABr.1.Am.txt',skiprows=0, dtype=np.float64)
Pl2 = np.loadtxt('./data/MABr.1.Pl.txt',skiprows=0, dtype=np.float64)
# flatten the images
Ht2_flat = Ht2.flatten()
Po2_flat = Po2.flatten()
Ph2_flat = Ph2.flatten()
Am2_flat = Am2.flatten()
Pl2_flat = Pl2.flatten()
plt.show()
X = [Ht2_flat, Po2_flat, Ph2_flat, Am2_flat]
X = np.array(X).T
Y = np.array(Pl2_flat).T
Xtrain = np.array([Ht2_flat[0:31625], Po2_flat[0:31625], Ph2_flat[0:31625], Am2_flat[0:31625]]).T
Xtest = np.array([Ht2_flat[31625:], Po2_flat[31625:], Ph2_flat[31625:], Am2_flat[31625:]]).T
Ytrain = np.array(Pl2_flat[0:31625])
Ytest = np.array(Pl2_flat[31625:])
depth_slider = IntSlider(min=1, max=20, step=1, value=2)
@interact(Depth=depth_slider,continuous_update=False)
def plot(Depth):
clf = DecisionTreeRegressor(max_depth=Depth)
clf.fit(Xtrain, Ytrain)
Ypred = clf.predict(Xtest)
x = Ht2.shape[0]
y = Ht2.shape[1]
k=0
merge = np.concatenate((Ytrain,Ypred))
Pl_predict = np.zeros((x,y))
for i in range(x):
for j in range (y):
Pl_predict[i,j] = merge[k]
k = k + 1
fig = plt.figure(figsize=(8,6))
pl_ax = fig.add_subplot(121)
pl_ax.imshow(Pl_predict, cmap='viridis')
pl_ax.set_title('Photoluminescence')
pl_ax.axis('off')
pl_ax = fig.add_subplot(122)
cax = pl_ax.imshow(Pl2, cmap='viridis')
pl_ax.set_title('Photoluminescence')
pl_ax.axis('off')
fig.colorbar(cax)
Explanation: Cons of the edge-based method and the OpenCV Canny edge detector:
* Success is dependent on how well the object is separated from the background (the severity of the generated pixel gradient at the edge)
* Can't adjust the Gaussian filter, where there is a trade-off between the size of the filter and the reduction in noise. Actually, Canny had first suggested using different values of sigma and that the resulting edge images be integrated for the final result. Larger sigmas capture coarser details in the image.
* Can't employ other filters such as the wavelet-based approaches
Pros
* Can adjust the size of the Sobel kernel, high and low pass filters
Widgets for GUI-type thing
module : ipywidgets
- can create a bunch of sliders or buttons to manipulate and vary different parameters
Pros
- Looks clean
- Easy to implement
- Many different options and functions
We haven't found a ton of drawbacks for it as of now.
End of explanation
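The two ideas can be combined (a sketch added here): sliders driving the Canny thresholds for the height image loaded earlier.
low_slider = IntSlider(min=0, max=255, step=5, value=20)
high_slider = IntSlider(min=0, max=255, step=5, value=60)

@interact(low=low_slider, high=high_slider)
def show_edges(low, high):
    # re-run Canny with the slider values and display the result
    plt.imshow(cv2.Canny(img, low, high), cmap='gray')
    plt.axis('off')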
from scipy import signal
from scipy import misc
import matplotlib.pyplot as plt
import numpy as np
#get raccoon face
face=misc.face(gray=True) - misc.face(gray=True).mean()
#plt.imshow(face)
#plt.show()
#copy right eye of raccoon face
template=np.copy(face[300:365, 670:750])
template-=template.mean()
#adds noise to face
#np.random.randn returns samples from the standard normal distribution with the same shape as face.shape.
face=face+np.random.randn(*face.shape)*50
#correlate the face and the eye
corr=signal.correlate2d(face, template, boundary='symm', mode='same')
#finds where the two images match
#np.argmax gives the index of the maximum value along a specified axis
#np.unravel_index converts a flat index or array of flat indices into a tuple of coordinate arrays. ?
y,x=np.unravel_index(np.argmax(corr), corr.shape)
#show the match
%matplotlib inline
fig, (ax_orig, ax_template, ax_corr)=plt.subplots(3,1,figsize=(6,15))
ax_orig.imshow(face, cmap='gray')
ax_orig.set_title('Original')
ax_template.set_axis_off()
ax_template.imshow(template, cmap='gray')
ax_template.set_title('Template')
ax_template.set_axis_off()
ax_corr.imshow(corr, cmap='gray')
ax_corr.set_title('Cross-correlation')
ax_corr.set_axis_off()
ax_orig.plot(x,y,'ro')
Explanation: Correlating two 2-D arrays
Does as the name suggests
Use
Helpful if you are working with images that are generated from 2-D arrays.
Drawbacks
Not sure:
how np.unravel_index or np.argmax work
what is a "flat index" or "array of flat indices?"
what is an integer array?
Argmax "returns indices of maximum values along an axis," what does this mean? Why do you specify an axis with an integer? How are axes assigned in arrays?
End of explanation |
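A tiny demo that answers the argmax/unravel_index questions above (added as a sketch):
a = np.array([[1, 9, 2],
              [3, 4, 8]])
flat_pos = np.argmax(a)                     # with no axis given, argmax flattens the array first: returns 1 (the 9)
print(np.unravel_index(flat_pos, a.shape))  # (0, 1): row 0, column 1 of the original 2-D shape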
772 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3. Imagined movement
In this tutorial we will look at imagined movement. Our movement is controlled in the motor cortex where there is an increased level of mu activity (8–12 Hz) when we perform movements. This is accompanied by a reduction of this mu activity in specific regions that deal with the limb that is currently moving. This decrease is called Event Related Desynchronization (ERD). By measuring the amount of mu activity at different locations on the motor cortex, we can determine which limb the subject is moving. Through mirror neurons, this effect also occurs when the subject is not actually moving his limbs, but merely imagining it.
Credits
The CSP code was originally written by Boris Reuderink of the Donders
Institute for Brain, Cognition and Behavior. It is part of his Python EEG
toolbox
Step1: Now we have the data in the following python variables
Step2: This is a large recording
Step3: Since the feature we're looking for (a decrease in $\mu$-activity) is a frequency feature, let's plot the PSD of the trials in a similar manner as with the SSVEP data. The code below defines a function that computes the PSD for each trial (we're going to need it again later on)
Step4: The function below plots the PSDs that are calculated with the above function. Since plotting it for 118 channels will clutter the display, it takes the indices of the desired channels as input, as well as some metadata to decorate the plot.
Step5: Let's put the plot_psd() function to use and plot three channels
Step6: A spike of mu activity can be seen on each channel for both classes. At the right hemisphere, the mu for the left hand movement is lower than for the right hand movement due to the ERD. At the left electrode, the mu for the right hand movement is reduced and at the central electrode the mu activity is about equal for both classes. This is in line with the theory that the left hand is controlled by the right hemisphere and the feet are controlled centrally.
Classifying the data
We will use a machine learning algorithm to construct a model that can distinguish between the right hand and foot movement of this subject. In order to do this we need to
Step7: Plotting the PSD of the resulting trials_filt shows the suppression of frequencies outside the passband of the filter
Step8: As a feature for the classifier, we will use the logarithm of the variance of each channel. The function below calculates this
Step9: Below is a function to visualize the logvar of each channel as a bar chart
Step10: We see that most channels show a small difference in the log-var of the signal between the two classes. The next step is to go from 118 channels to only a few channel mixtures. The CSP algorithm calculates mixtures of channels that are designed to maximize the difference in variation between two classes. These mixtures are called spatial filters.
Step11: To see the result of the CSP algorithm, we plot the log-var like we did before
Step12: Instead of 118 channels, we now have 118 mixtures of channels, called components. They are the result of 118 spatial filters applied to the data.
The first filters maximize the variation of the first class, while minimizing the variation of the second. The last filters maximize the variation of the second class, while minimizing the variation of the first.
This is also visible in a PSD plot. The code below plots the PSD for the first and last components as well as one in the middle
Step13: In order to see how well we can differentiate between the two classes, a scatter plot is a useful tool. Here both classes are plotted on a 2-dimensional plane
Step14: We will apply a linear classifier to this data. A linear classifier can be thought of as drawing a line in the above plot to separate the two classes. To determine the class for a new trial, we just check on which side of the line the trial would be if plotted as above.
The data is split into a train and a test set. The classifier will fit a model (in this case, a straight line) on the training set and use this model to make predictions about the test set (see on which side of the line each trial in the test set falls). Note that the CSP algorithm is part of the model, so for fairness sake it should be calculated using only the training data.
Step15: For a classifier the Linear Discriminant Analysis (LDA) algorithm will be used. It fits a gaussian distribution to each class, characterized by the mean and covariance, and determines an optimal separating plane to divide the two. This plane is defined as $r = W_0 \cdot X_0 + W_1 \cdot X_1 + \ldots + W_n \cdot X_n - b$, where $r$ is the classifier output, $W$ are called the feature weights, $X$ are the features of the trial, $n$ is the dimensionality of the data and $b$ is called the offset.
In our case we have 2 dimensional data, so the separating plane will be a line
Step16: Training the LDA using the training data gives us $W$ and $b$
Step17: It can be informative to recreate the scatter plot and overlay the decision boundary as determined by the LDA classifier. The decision boundary is the line for which the classifier output is exactly 0. The scatterplot used $X_0$ as $x$-axis and $X_1$ as $y$-axis. To find the function $y = f(x)$ describing the decision boundary, we set $r$ to 0 and solve for $y$ in the equation of the separating plane
Step18: The code below plots the boundary with the test data on which we will apply the classifier. You will see the classifier is going to make some mistakes.
Step19: Now the LDA is constructed and fitted to the training data. We can now apply it to the test data. The results are presented as a confusion matrix | Python Code:
%pylab inline
import numpy as np
import scipy.io
m = scipy.io.loadmat('data_set_IV/BCICIV_calib_ds1d.mat', struct_as_record=True)
# SciPy.io.loadmat does not deal well with Matlab structures, resulting in lots of
# extra dimensions in the arrays. This makes the code a bit more cluttered
sample_rate = m['nfo']['fs'][0][0][0][0]
EEG = m['cnt'].T
nchannels, nsamples = EEG.shape
channel_names = [s[0].encode('utf8') for s in m['nfo']['clab'][0][0][0]]
event_onsets = m['mrk'][0][0][0]
event_codes = m['mrk'][0][0][1]
labels = np.zeros((1, nsamples), int)
labels[0, event_onsets] = event_codes
cl_lab = [s[0].encode('utf8') for s in m['nfo']['classes'][0][0][0]]
cl1 = cl_lab[0]
cl2 = cl_lab[1]
nclasses = len(cl_lab)
nevents = len(event_onsets)
Explanation: 3. Imagined movement
In this tutorial we will look at imagined movement. Our movement is controlled in the motor cortex, which shows a prominent level of mu activity (8–12 Hz) when we are at rest. When we move, this mu activity is reduced in the specific regions that deal with the limb that is currently moving. This decrease is called Event Related Desynchronization (ERD). By measuring the amount of mu activity at different locations on the motor cortex, we can determine which limb the subject is moving. Through mirror neurons, this effect also occurs when the subject is not actually moving his limbs, but merely imagining it.
Credits
The CSP code was originally written by Boris Reuderink of the Donders
Institute for Brain, Cognition and Behavior. It is part of his Python EEG
toolbox: https://github.com/breuderink/eegtools
Inspiration for this tutorial also came from the excellent code example
given in the book chapter:
Arnaud Delorme, Christian Kothe, Andrey Vankov, Nima Bigdely-Shamlo,
Robert Oostenveld, Thorsten Zander, and Scott Makeig. MATLAB-Based Tools
for BCI Research, In (B+H)CI: The Human in Brain-Computer Interfaces and
the Brain in Human-Computer Interaction. Desney S. Tan and Anton Nijholt
(eds.), 2009, 241-259, http://dx.doi.org/10.1007/978-1-84996-272-8
Obtaining the data
The dataset for this tutorial is provided by the fourth BCI competition,
which you will have to download youself. First, go to http://www.bbci.de/competition/iv/#download
and fill in your name and email address. An email will be sent to you
automatically containing a username and password for the download area.
Download Data Set 1, from Berlin, the 100Hz version in MATLAB format:
http://bbci.de/competition/download/competition_iv/BCICIV_1_mat.zip
and unzip it in a subdirectory called 'data_set_IV'. This subdirectory
should be inside the directory in which you've store the tutorial files.
Description of the data
If you've followed the instructions above, the following code should load
the data:
End of explanation
# Print some information
print 'Shape of EEG:', EEG.shape
print 'Sample rate:', sample_rate
print 'Number of channels:', nchannels
print 'Channel names:', channel_names
print 'Number of events:', len(event_onsets)
print 'Event codes:', np.unique(event_codes)
print 'Class labels:', cl_lab
print 'Number of classes:', nclasses
Explanation: Now we have the data in the following python variables:
End of explanation
# Dictionary to store the trials in, each class gets an entry
trials = {}
# The time window (in samples) to extract for each trial, here 0.5 -- 2.5 seconds
win = np.arange(int(0.5*sample_rate), int(2.5*sample_rate))
# Length of the time window
nsamples = len(win)
# Loop over the classes (right, foot)
for cl, code in zip(cl_lab, np.unique(event_codes)):
# Extract the onsets for the class
cl_onsets = event_onsets[event_codes == code]
# Allocate memory for the trials
trials[cl] = np.zeros((nchannels, nsamples, len(cl_onsets)))
# Extract each trial
for i, onset in enumerate(cl_onsets):
trials[cl][:,:,i] = EEG[:, win+onset]
# Some information about the dimensionality of the data (channels x time x trials)
print 'Shape of trials[cl1]:', trials[cl1].shape
print 'Shape of trials[cl2]:', trials[cl2].shape
Explanation: This is a large recording: 118 electrodes were used, spread across the entire scalp. The subject was given a cue and then imagined either right hand movement or the movement of his feet. As can be seen from the Homunculus, foot movement is controlled at the center of the motor cortex (which makes it hard to distinguish left from right foot), while hand movement is controlled more laterally.
Plotting the data
The code below cuts trials for the two classes and should look familiar if you've completed the previous tutorials. Trials are cut in the interval [0.5–2.5 s] after the onset of the cue.
End of explanation
from matplotlib import mlab
def psd(trials):
'''
Calculates for each trial the Power Spectral Density (PSD).
Parameters
----------
trials : 3d-array (channels x samples x trials)
The EEG signal
Returns
-------
trial_PSD : 3d-array (channels x PSD x trials)
the PSD for each trial.
freqs : list of floats
The frequencies for which the PSD was computed (useful for plotting later)
'''
ntrials = trials.shape[2]
trials_PSD = np.zeros((nchannels, 101, ntrials))
# Iterate over trials and channels
for trial in range(ntrials):
for ch in range(nchannels):
# Calculate the PSD
(PSD, freqs) = mlab.psd(trials[ch,:,trial], NFFT=int(nsamples), Fs=sample_rate)
trials_PSD[ch, :, trial] = PSD.ravel()
return trials_PSD, freqs
# Apply the function
psd_r, freqs = psd(trials[cl1])
psd_f, freqs = psd(trials[cl2])
trials_PSD = {cl1: psd_r, cl2: psd_f}
Explanation: Since the feature we're looking for (a decrease in $\mu$-activity) is a frequency feature, lets plot the PSD of the trials in a similar manner as with the SSVEP data. The code below defines a function that computes the PSD for each trial (we're going to need it again later on):
End of explanation
import matplotlib.pyplot as plt
def plot_psd(trials_PSD, freqs, chan_ind, chan_lab=None, maxy=None):
'''
Plots PSD data calculated with psd().
Parameters
----------
trials : 3d-array
The PSD data, as returned by psd()
freqs : list of floats
The frequencies for which the PSD is defined, as returned by psd()
chan_ind : list of integers
The indices of the channels to plot
chan_lab : list of strings
(optional) List of names for each channel
maxy : float
(optional) Limit the y-axis to this value
'''
plt.figure(figsize=(12,5))
nchans = len(chan_ind)
# Maximum of 3 plots per row
nrows = int(np.ceil(nchans / 3.0))
ncols = min(3, nchans)
# Enumerate over the channels
for i,ch in enumerate(chan_ind):
# Figure out which subplot to draw to
plt.subplot(nrows,ncols,i+1)
# Plot the PSD for each class
for cl in trials_PSD.keys():
plt.plot(freqs, np.mean(trials_PSD[cl][ch,:,:], axis=1), label=cl)
# All plot decoration below...
plt.xlim(1,30)
if maxy != None:
plt.ylim(0,maxy)
plt.grid()
plt.xlabel('Frequency (Hz)')
if chan_lab == None:
plt.title('Channel %d' % (ch+1))
else:
plt.title(chan_lab[i])
plt.legend()
plt.tight_layout()
Explanation: The function below plots the PSDs that are calculated with the above function. Since plotting it for 118 channels will clutter the display, it takes the indices of the desired channels as input, as well as some metadata to decorate the plot.
End of explanation
plot_psd(
trials_PSD,
freqs,
[channel_names.index(ch) for ch in ['C3', 'Cz', 'C4']],
chan_lab=['left', 'center', 'right'],
maxy=500
)
Explanation: Lets put the plot_psd() function to use and plot three channels:
C3: Central, left
Cz: Central, central
C4: Central, right
End of explanation
import scipy.signal
def bandpass(trials, lo, hi, sample_rate):
'''
Designs and applies a bandpass filter to the signal.
Parameters
----------
trials : 3d-array (channels x samples x trials)
The EEGsignal
lo : float
Lower frequency bound (in Hz)
hi : float
Upper frequency bound (in Hz)
sample_rate : float
Sample rate of the signal (in Hz)
Returns
-------
trials_filt : 3d-array (channels x samples x trials)
The bandpassed signal
'''
# The iirfilter() function takes the filter order: higher numbers mean a sharper frequency cutoff,
# but the resulting signal might be shifted in time, lower numbers mean a soft frequency cutoff,
# but the resulting signal less distorted in time. It also takes the lower and upper frequency bounds
# to pass, divided by the Nyquist frequency, which is the sample rate divided by 2:
a, b = scipy.signal.iirfilter(6, [lo/(sample_rate/2.0), hi/(sample_rate/2.0)])
# Applying the filter to each trial
ntrials = trials.shape[2]
trials_filt = np.zeros((nchannels, nsamples, ntrials))
for i in range(ntrials):
trials_filt[:,:,i] = scipy.signal.filtfilt(a, b, trials[:,:,i], axis=1)
return trials_filt
# Apply the function
trials_filt = {cl1: bandpass(trials[cl1], 8, 15, sample_rate),
cl2: bandpass(trials[cl2], 8, 15, sample_rate)}
Explanation: A spike of mu activity can be seen on each channel for both classes. At the right hemisphere, the mu for the left hand movement is lower than for the right hand movement due to the ERD. At the left electrode, the mu for the right hand movement is reduced and at the central electrode the mu activity is about equal for both classes. This is in line with the theory that each hand is controlled by the contralateral (opposite) hemisphere and the feet are controlled centrally.
Classifying the data
We will use a machine learning algorithm to construct a model that can distinguish between the right hand and foot movement of this subject. In order to do this we need to:
find a way to quantify the amount of mu activity present in a trial
make a model that describes expected values of mu activity for each class
finally test this model on some unseen data to see if it can predict the correct class label
We will follow a classic BCI design by Blankertz et al. [1] where they use the logarithm of the variance of the signal in a certain frequency band as a feature for the classifier.
[1] Blankertz, B., Dornhege, G., Krauledat, M., Müller, K.-R., & Curio, G. (2007). The non-invasive Berlin Brain-Computer Interface: fast acquisition of effective performance in untrained subjects. NeuroImage, 37(2), 539–550. doi:10.1016/j.neuroimage.2007.01.051
The script below designs a band pass filter using scipy.signal.iirfilter that will strip away frequencies outside the 8--15Hz window. The filter is applied to all trials:
End of explanation
psd_r, freqs = psd(trials_filt[cl1])
psd_f, freqs = psd(trials_filt[cl2])
trials_PSD = {cl1: psd_r, cl2: psd_f}
plot_psd(
trials_PSD,
freqs,
[channel_names.index(ch) for ch in ['C3', 'Cz', 'C4']],
chan_lab=['left', 'center', 'right'],
maxy=300
)
Explanation: Plotting the PSD of the resulting trials_filt shows the suppression of frequencies outside the passband of the filter:
End of explanation
# Calculate the log(var) of the trials
def logvar(trials):
'''
Calculate the log-var of each channel.
Parameters
----------
trials : 3d-array (channels x samples x trials)
The EEG signal.
Returns
-------
logvar - 2d-array (channels x trials)
For each channel the logvar of the signal
'''
return np.log(np.var(trials, axis=1))
# Apply the function
trials_logvar = {cl1: logvar(trials_filt[cl1]),
cl2: logvar(trials_filt[cl2])}
Explanation: As a feature for the classifier, we will use the logarithm of the variance of each channel. The function below calculates this:
End of explanation
def plot_logvar(trials):
'''
Plots the log-var of each channel/component.
arguments:
trials - Dictionary containing the trials (log-vars x trials) for 2 classes.
'''
plt.figure(figsize=(12,5))
x0 = np.arange(nchannels)
x1 = np.arange(nchannels) + 0.4
y0 = np.mean(trials[cl1], axis=1)
y1 = np.mean(trials[cl2], axis=1)
plt.bar(x0, y0, width=0.5, color='b')
plt.bar(x1, y1, width=0.4, color='r')
plt.xlim(-0.5, nchannels+0.5)
plt.gca().yaxis.grid(True)
plt.title('log-var of each channel/component')
plt.xlabel('channels/components')
plt.ylabel('log-var')
plt.legend(cl_lab)
# Plot the log-vars
plot_logvar(trials_logvar)
Explanation: Below is a function to visualize the logvar of each channel as a bar chart:
End of explanation
from numpy import linalg
def cov(trials):
''' Calculate the covariance for each trial and return their average '''
ntrials = trials.shape[2]
covs = [ trials[:,:,i].dot(trials[:,:,i].T) / nsamples for i in range(ntrials) ]
return np.mean(covs, axis=0)
def whitening(sigma):
''' Calculate a whitening matrix for covariance matrix sigma. '''
U, l, _ = linalg.svd(sigma)
return U.dot( np.diag(l ** -0.5) )
def csp(trials_r, trials_f):
'''
Calculate the CSP transformation matrix W.
arguments:
trials_r - Array (channels x samples x trials) containing right hand movement trials
trials_f - Array (channels x samples x trials) containing foot movement trials
returns:
Mixing matrix W
'''
cov_r = cov(trials_r)
cov_f = cov(trials_f)
P = whitening(cov_r + cov_f)
B, _, _ = linalg.svd( P.T.dot(cov_f).dot(P) )
W = P.dot(B)
return W
def apply_mix(W, trials):
''' Apply a mixing matrix to each trial (basically multiply W with the EEG signal matrix)'''
ntrials = trials.shape[2]
trials_csp = np.zeros((nchannels, nsamples, ntrials))
for i in range(ntrials):
trials_csp[:,:,i] = W.T.dot(trials[:,:,i])
return trials_csp
# Apply the functions
W = csp(trials_filt[cl1], trials_filt[cl2])
trials_csp = {cl1: apply_mix(W, trials_filt[cl1]),
cl2: apply_mix(W, trials_filt[cl2])}
Explanation: We see that most channels show a small difference in the log-var of the signal between the two classes. The next step is to go from 118 channels to only a few channel mixtures. The CSP algorithm calculates mixtures of channels that are designed to maximize the difference in variation between two classes. These mixtures are called spatial filters.
End of explanation
trials_logvar = {cl1: logvar(trials_csp[cl1]),
cl2: logvar(trials_csp[cl2])}
plot_logvar(trials_logvar)
Explanation: To see the result of the CSP algorithm, we plot the log-var like we did before:
End of explanation
psd_r, freqs = psd(trials_csp[cl1])
psd_f, freqs = psd(trials_csp[cl2])
trials_PSD = {cl1: psd_r, cl2: psd_f}
plot_psd(trials_PSD, freqs, [0,58,-1], chan_lab=['first component', 'middle component', 'last component'], maxy=0.75 )
Explanation: Instead of 118 channels, we now have 118 mixtures of channels, called components. They are the result of 118 spatial filters applied to the data.
The first filters maximize the variation of the first class, while minimizing the variation of the second. The last filters maximize the variation of the second class, while minimizing the variation of the first.
This is also visible in a PSD plot. The code below plots the PSD for the first and last components as well as one in the middle:
End of explanation
def plot_scatter(left, foot):
plt.figure()
plt.scatter(left[0,:], left[-1,:], color='b')
plt.scatter(foot[0,:], foot[-1,:], color='r')
plt.xlabel('First component')
plt.ylabel('Last component')
plt.legend(cl_lab)
plot_scatter(trials_logvar[cl1], trials_logvar[cl2])
Explanation: In order to see how well we can differentiate between the two classes, a scatter plot is a useful tool. Here both classes are plotted on a 2-dimensional plane: the x-axis is the first CSP component, the y-axis is the last.
End of explanation
# Percentage of trials to use for training (50-50 split here)
train_percentage = 0.5
# Calculate the number of trials for each class the above percentage boils down to
ntrain_r = int(trials_filt[cl1].shape[2] * train_percentage)
ntrain_f = int(trials_filt[cl2].shape[2] * train_percentage)
ntest_r = trials_filt[cl1].shape[2] - ntrain_r
ntest_f = trials_filt[cl2].shape[2] - ntrain_f
# Splitting the frequency filtered signal into a train and test set
train = {cl1: trials_filt[cl1][:,:,:ntrain_r],
cl2: trials_filt[cl2][:,:,:ntrain_f]}
test = {cl1: trials_filt[cl1][:,:,ntrain_r:],
cl2: trials_filt[cl2][:,:,ntrain_f:]}
# Train the CSP on the training set only
W = csp(train[cl1], train[cl2])
# Apply the CSP on both the training and test set
train[cl1] = apply_mix(W, train[cl1])
train[cl2] = apply_mix(W, train[cl2])
test[cl1] = apply_mix(W, test[cl1])
test[cl2] = apply_mix(W, test[cl2])
# Select only the first and last components for classification
comp = np.array([0,-1])
train[cl1] = train[cl1][comp,:,:]
train[cl2] = train[cl2][comp,:,:]
test[cl1] = test[cl1][comp,:,:]
test[cl2] = test[cl2][comp,:,:]
# Calculate the log-var
train[cl1] = logvar(train[cl1])
train[cl2] = logvar(train[cl2])
test[cl1] = logvar(test[cl1])
test[cl2] = logvar(test[cl2])
Explanation: We will apply a linear classifier to this data. A linear classifier can be thought of as drawing a line in the above plot to separate the two classes. To determine the class for a new trial, we just check on which side of the line the trial would be if plotted as above.
The data is split into a train and a test set. The classifier will fit a model (in this case, a straight line) on the training set and use this model to make predictions about the test set (see on which side of the line each trial in the test set falls). Note that the CSP algorithm is part of the model, so for fairness sake it should be calculated using only the training data.
End of explanation
def train_lda(class1, class2):
'''
Trains the LDA algorithm.
arguments:
class1 - An array (observations x features) for class 1
class2 - An array (observations x features) for class 2
returns:
The projection matrix W
The offset b
'''
nclasses = 2
nclass1 = class1.shape[0]
nclass2 = class2.shape[0]
# Class priors: in this case, we have an equal number of training
# examples for each class, so both priors are 0.5
prior1 = nclass1 / float(nclass1 + nclass2)
prior2 = nclass2 / float(nclass1 + nclass2)
mean1 = np.mean(class1, axis=0)
mean2 = np.mean(class2, axis=0)
class1_centered = class1 - mean1
class2_centered = class2 - mean2
# Calculate the covariance between the features
cov1 = class1_centered.T.dot(class1_centered) / (nclass1 - nclasses)
cov2 = class2_centered.T.dot(class2_centered) / (nclass2 - nclasses)
W = (mean2 - mean1).dot(np.linalg.pinv(prior1*cov1 + prior2*cov2))
b = (prior1*mean1 + prior2*mean2).dot(W)
return (W,b)
def apply_lda(test, W, b):
'''
Applies a previously trained LDA to new data.
arguments:
test - An array (features x trials) containing the data
W - The project matrix W as calculated by train_lda()
b - The offsets b as calculated by train_lda()
returns:
A list containing a classlabel for each trial
'''
ntrials = test.shape[1]
prediction = []
for i in range(ntrials):
# The line below is a generalization for:
# result = W[0] * test[0,i] + W[1] * test[1,i] - b
result = W.dot(test[:,i]) - b
if result <= 0:
prediction.append(1)
else:
prediction.append(2)
return np.array(prediction)
Explanation: For a classifier the Linear Discriminant Analysis (LDA) algorithm will be used. It fits a gaussian distribution to each class, characterized by the mean and covariance, and determines an optimal separating plane to divide the two. This plane is defined as $r = W_0 \cdot X_0 + W_1 \cdot X_1 + \ldots + W_n \cdot X_n - b$, where $r$ is the classifier output, $W$ are called the feature weights, $X$ are the features of the trial, $n$ is the dimensionality of the data and $b$ is called the offset.
In our case we have 2 dimensional data, so the separating plane will be a line: $r = W_0 \cdot X_0 + W_1 \cdot X_1 - b$. To determine a class label for an unseen trial, we can calculate whether the result is positive or negative.
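A tiny numeric illustration with made-up values (not taken from this dataset): suppose $W = [0.8, -1.5]$ and $b = 0.2$; a trial with features $X = [1.0, 0.1]$ gives $r = 0.8 \cdot 1.0 + (-1.5) \cdot 0.1 - 0.2 = 0.45$. Because $r > 0$ the trial falls on one side of the line and receives one class label; a trial with $r \leq 0$ receives the other. This is exactly the sign check that apply_lda() above performs.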
End of explanation
W,b = train_lda(train[cl1].T, train[cl2].T)
print 'W:', W
print 'b:', b
Explanation: Training the LDA using the training data gives us $W$ and $b$:
End of explanation
# Scatterplot like before
plot_scatter(train[cl1], train[cl2])
plt.title('Training data')
# Calculate decision boundary (x,y)
x = np.arange(-5, 1, 0.1)
y = (b - W[0]*x) / W[1]
# Plot the decision boundary
plt.plot(x,y, linestyle='--', linewidth=2, color='k')
plt.xlim(-5, 1)
plt.ylim(-2.2, 1)
Explanation: It can be informative to recreate the scatter plot and overlay the decision boundary as determined by the LDA classifier. The decision boundary is the line for which the classifier output is exactly 0. The scatterplot used $X_0$ as $x$-axis and $X_1$ as $y$-axis. To find the function $y = f(x)$ describing the decision boundary, we set $r$ to 0 and solve for $y$ in the equation of the separating plane:
<div style="width:600px">
$$\begin{align}
W_0 \cdot X_0 + W_1 \cdot X_1 - b &= r &&\text{the original equation} \\\
W_0 \cdot x + W_1 \cdot y - b &= 0 &&\text{filling in $X_0=x$, $X_1=y$ and $r=0$} \\\
W_0 \cdot x + W_1 \cdot y &= b &&\text{solving for $y$}\\\
W_1 \cdot y &= b - W_0 \cdot x \\\
\\\
y &= \frac{b - W_0 \cdot x}{W_1}
\end{align}$$
</div>
We first plot the decision boundary with the training data used to calculate it:
End of explanation
plot_scatter(test[cl1], test[cl2])
plt.title('Test data')
plt.plot(x,y, linestyle='--', linewidth=2, color='k')
plt.xlim(-5, 1)
plt.ylim(-2.2, 1)
Explanation: The code below plots the boundary with the test data on which we will apply the classifier. You will see the classifier is going to make some mistakes.
End of explanation
# Print confusion matrix
conf = np.array([
[(apply_lda(test[cl1], W, b) == 1).sum(), (apply_lda(test[cl2], W, b) == 1).sum()],
[(apply_lda(test[cl1], W, b) == 2).sum(), (apply_lda(test[cl2], W, b) == 2).sum()],
])
print 'Confusion matrix:'
print conf
print
print 'Accuracy: %.3f' % (np.sum(np.diag(conf)) / float(np.sum(conf)))
Explanation: Now the LDA is constructed and fitted to the training data. We can now apply it to the test data. The results are presented as a confusion matrix:
<table>
<tr><td></td><td colspan='2' style="font-weight:bold">True labels →</td></tr>
<tr><td style="font-weight:bold">↓ Predicted labels</td><td>Right</td><td>Foot</td></tr>
<tr><td>Right</td><td></td><td></td></tr>
<tr><td>Foot</td><td></td><td></td></tr>
</table>
The numbers on the diagonal are the trials that were correctly classified; any trials incorrectly classified (either a false positive or false negative) will be in the corners.
End of explanation |
773 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DV360 Report To Storage
Move existing DV360 report into a Storage bucket.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter DV360 Report To Storage Recipe Parameters
Specify either report name or report id to move a report.
The most recent valid file will be moved to the bucket.
Modify the values below for your use case, can be done multiple times, then click play.
Step3: 4. Execute DV360 Report To Storage
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: DV360 Report To Storage
Move existing DV360 report into a Storage bucket.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'dbm_report_id':'', # DV360 report ID given in UI, not needed if name used.
'auth_write':'service', # Credentials used for writing data.
'dbm_report_name':'', # Name of report, not needed if ID used.
'dbm_bucket':'', # Google cloud bucket.
'dbm_path':'', # Path and filename to write to.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter DV360 Report To Storage Recipe Parameters
Specify either report name or report id to move a report.
The most recent valid file will be moved to the bucket.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dbm':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'report':{
'report_id':{'field':{'name':'dbm_report_id','kind':'integer','order':1,'default':'','description':'DV360 report ID given in UI, not needed if name used.'}},
'name':{'field':{'name':'dbm_report_name','kind':'string','order':2,'default':'','description':'Name of report, not needed if ID used.'}}
},
'out':{
'storage':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'bucket':{'field':{'name':'dbm_bucket','kind':'string','order':3,'default':'','description':'Google cloud bucket.'}},
'path':{'field':{'name':'dbm_path','kind':'string','order':4,'default':'','description':'Path and filename to write to.'}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute DV360 Report To Storage
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
774 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building an ARIMA Model for a Financial Dataset
In this notebook, you will build an ARIMA model for AAPL stock closing prices. The lab objectives are
Step1: Import data from Google Cloud Storage
In this section we'll read some ten years' worth of AAPL stock data into a Pandas dataframe. We want to modify the dataframe such that it represents a time series. This is achieved by setting the date as the index.
Step2: Prepare data for ARIMA
The first step in our preparation is to resample the data such that stock closing prices are aggregated on a weekly basis.
Step3: Let's create a column for weekly returns. Take the log to of the returns to normalize large fluctuations.
Step4: Test for stationarity of the udiff series
Time series are stationary if they do not contain trends or seasonal swings. The Dickey-Fuller test can be used to test for stationarity.
Step5: With a p-value < 0.05, we can reject the null hypotehsis. This data set is stationary.
ACF and PACF Charts
Making autocorrelation and partial autocorrelation charts help us choose hyperparameters for the ARIMA model.
The ACF gives us a measure of how much each "y" value is correlated to the n "y" values prior to it.
The PACF (partial autocorrelation function) gives us (a sample of) the amount of correlation between two "y" values separated by n lags, excluding the impact of all the "y" values in between them.
Step6: The table below summarizes the patterns of the ACF and PACF.
<img src="../imgs/How_to_Read_PACF_ACF.jpg" alt="drawing" width="300" height="300"/>
The above chart shows that reading the PACF gives us a lag "p" = 3 and reading the ACF gives us a lag "q" of 1. Let's use statsmodels' ARMA with those parameters to build a model. The way to evaluate the model is to look at the AIC - see if it reduces or increases. The lower the AIC (i.e. the more negative it is), the better the model.
Build ARIMA Model
Since we differenced the weekly closing prices, we technically only need to build an ARMA model. The data has already been integrated and is stationary.
Step7: Our model doesn't do a good job predicting variance in the original data (peaks and valleys).
Step8: Let's make a forecast 2 weeks ahead | Python Code:
!pip install --user statsmodels
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime
%config InlineBackend.figure_format = 'retina'
Explanation: Building an ARIMA Model for a Financial Dataset
In this notebook, you will build an ARIMA model for AAPL stock closing prices. The lab objectives are:
Pull data from Google Cloud Storage into a Pandas dataframe
Learn how to prepare raw stock closing data for an ARIMA model
Apply the Dickey-Fuller test
Build an ARIMA model using the statsmodels library
Make sure you restart the Python kernel after executing the pip install command below! After you restart the kernel you don't have to execute the command again.
End of explanation
df = pd.read_csv('gs://cloud-training/ai4f/AAPL10Y.csv')
df['date'] = pd.to_datetime(df['date'])
df.sort_values('date', inplace=True)
df.set_index('date', inplace=True)
print(df.shape)
df.head()
Explanation: Import data from Google Cloud Storage
In this section we'll read some ten years' worth of AAPL stock data into a Pandas dataframe. We want to modify the dataframe such that it represents a time series. This is achieved by setting the date as the index.
End of explanation
df_week = df.resample('w').mean()
df_week = df_week[['close']]
df_week.head()
Explanation: Prepare data for ARIMA
The first step in our preparation is to resample the data such that stock closing prices are aggregated on a weekly basis.
End of explanation
df_week['weekly_ret'] = np.log(df_week['close']).diff()
df_week.head()
# drop null rows
df_week.dropna(inplace=True)
df_week.weekly_ret.plot(kind='line', figsize=(12, 6));
udiff = df_week.drop(['close'], axis=1)
udiff.head()
Explanation: Let's create a column for weekly returns. Take the log of the returns to normalize large fluctuations.
End of explanation
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller
rolmean = udiff.rolling(20).mean()
rolstd = udiff.rolling(20).std()
plt.figure(figsize=(12, 6))
orig = plt.plot(udiff, color='blue', label='Original')
mean = plt.plot(rolmean, color='red', label='Rolling Mean')
std = plt.plot(rolstd, color='black', label = 'Rolling Std Deviation')
plt.title('Rolling Mean & Standard Deviation')
plt.legend(loc='best')
plt.show(block=False)
# Perform Dickey-Fuller test
dftest = sm.tsa.adfuller(udiff.weekly_ret, autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic', 'p-value', '#Lags Used', 'Number of Observations Used'])
for key, value in dftest[4].items():
dfoutput['Critical Value ({0})'.format(key)] = value
dfoutput
Explanation: Test for stationarity of the udiff series
Time series are stationary if they do not contain trends or seasonal swings. The Dickey-Fuller test can be used to test for stationarity.
End of explanation
from statsmodels.graphics.tsaplots import plot_acf
# the autocorrelation chart provides just the correlation at increasing lags
fig, ax = plt.subplots(figsize=(12,5))
plot_acf(udiff.values, lags=10, ax=ax)
plt.show()
from statsmodels.graphics.tsaplots import plot_pacf
fig, ax = plt.subplots(figsize=(12,5))
plot_pacf(udiff.values, lags=10, ax=ax)
plt.show()
Explanation: With a p-value < 0.05, we can reject the null hypothesis. This data set is stationary.
ACF and PACF Charts
Making autocorrelation and partial autocorrelation charts help us choose hyperparameters for the ARIMA model.
The ACF gives us a measure of how much each "y" value is correlated to the n "y" values prior to it.
The PACF (partial autocorrelation function) gives us (a sample of) the amount of correlation between two "y" values separated by n lags, excluding the impact of all the "y" values in between them.
End of explanation
from statsmodels.tsa.arima.model import ARIMA
# Notice that you have to use udiff - the differenced data rather than the original data.
ar1 = ARIMA(udiff.values, order = (3, 0,1)).fit()
ar1.summary()
Explanation: The table below summarizes the patterns of the ACF and PACF.
<img src="../imgs/How_to_Read_PACF_ACF.jpg" alt="drawing" width="300" height="300"/>
The above chart shows that reading the PACF gives us a lag "p" = 3 and reading the ACF gives us a lag "q" of 1. Let's use statsmodels' ARMA with those parameters to build a model. The way to evaluate the model is to look at the AIC - see if it reduces or increases. The lower the AIC (i.e. the more negative it is), the better the model.
Build ARIMA Model
Since we differenced the weekly closing prices, we technically only need to build an ARMA model. The data has already been integrated and is stationary.
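As a quick sketch of that AIC comparison (an illustration only; the candidate orders below are arbitrary choices and the (3, 0, 1) order used above is kept as the baseline):
for order in [(3, 0, 1), (2, 0, 1), (1, 0, 1)]:
    candidate = ARIMA(udiff.values, order=order).fit()
    print('order %s -> AIC %.2f' % (str(order), candidate.aic))
The order with the lowest AIC would be the preferred model.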
End of explanation
plt.figure(figsize=(12, 8))
plt.plot(udiff.values, color='blue')
preds = ar1.fittedvalues
plt.plot(preds, color='red')
plt.show()
Explanation: Our model doesn't do a good job predicting variance in the original data (peaks and valleys).
End of explanation
steps = 2
forecast = ar1.forecast(steps=steps)
plt.figure(figsize=(12, 8))
plt.plot(udiff.values, color='blue')
preds = ar1.fittedvalues
plt.plot(preds, color='red')
plt.plot(pd.DataFrame(np.array([preds[-1],forecast[0]]).T,index=range(len(udiff.values)+1, len(udiff.values)+3)), color='green')
plt.plot(pd.DataFrame(forecast,index=range(len(udiff.values)+1, len(udiff.values)+1+steps)), color='green')
plt.title('Display the predictions with the ARIMA model')
plt.show()
Explanation: Let's make a forecast 2 weeks ahead:
End of explanation |
775 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 4
Imports
Step2: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$
Step5: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function
Step6: Use interact to explore the plot_random_line function using | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 4
Imports
End of explanation
def random_line(m, b, sigma, size=10):
Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
The array of y values for the lines with `size` points.
xarray = np.linspace(-1.0, 1.0, size)
# Draw i.i.d. noise from N(0, sigma**2); with sigma=0.0 the line is exact (no noise)
noise = np.random.normal(0.0, sigma, size) if sigma > 0.0 else np.zeros(size)
yarray = m * xarray + b + noise
return xarray, yarray
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
Explanation: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$:
$$
y = m x + b + N(0,\sigma^2)
$$
Be careful about the sigma=0.0 case.
End of explanation
def ticks_out(ax):
Move the ticks to the outside of the box.
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
Plot a random line with slope m, intercept b and size points.
# YOUR CODE HERE
raise NotImplementedError()
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:
Make the marker color settable through a color keyword argument with a default of red.
Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.
Customize your plot to make it effective and beautiful.
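One possible sketch that meets the requirements above and could fill in the stub cell shown earlier (an assumption: the default figure size plus simple axis labels count as acceptable customization):
def plot_random_line(m, b, sigma, size=10, color='red'):
    """Plot a random line with slope m, intercept b and size points."""
    x, y = random_line(m, b, sigma, size)
    plt.scatter(x, y, color=color)
    plt.xlim(-1.1, 1.1)
    plt.ylim(-10.0, 10.0)
    plt.xlabel('x')
    plt.ylabel('y')
    ticks_out(plt.gca())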
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
#### assert True # use this cell to grade the plot_random_line interact
Explanation: Use interact to explore the plot_random_line function using:
m: a float valued slider from -10.0 to 10.0 with steps of 0.1.
b: a float valued slider from -5.0 to 5.0 with steps of 0.1.
sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.
size: an int valued slider from 10 to 100 with steps of 10.
color: a dropdown with options for red, green and blue.
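A minimal sketch of that interact call (assuming plot_random_line is implemented as in the earlier sketch; in this widget API the numeric tuples produce sliders and the tuple of strings produces a dropdown):
interact(plot_random_line,
         m=(-10.0, 10.0, 0.1),
         b=(-5.0, 5.0, 0.1),
         sigma=(0.0, 5.0, 0.01),
         size=(10, 100, 10),
         color=('red', 'green', 'blue'));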
End of explanation |
776 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TFX pipeline example - Chicago Taxi tips prediction
Overview
Tensorflow Extended (TFX) is a Google-production-scale machine
learning platform based on TensorFlow. It provides a configuration framework to express ML pipelines
consisting of TFX components, which brings the user large-scale ML task orchestration, artifact lineage, as well as the power of various TFX libraries. Kubeflow Pipelines can be used as the orchestrator supporting the
execution of a TFX pipeline.
This sample demonstrates how to author a ML pipeline in TFX and run it on a KFP deployment.
Permission
This pipeline requires Google Cloud Storage permission to run.
If KFP was deployed through K8S marketplace, please make sure "Allow access to the following Cloud APIs" is checked when creating the cluster. <img src="check_permission.png">
Otherwise, follow the instructions in the guideline to guarantee that, at a minimum, the service account has the storage.admin role.
Step1: Note
Step2: In this example we'll need TFX SDK later than 0.21 to leverage the RuntimeParameter feature.
RuntimeParameter in TFX DSL
Currently, TFX DSL only supports parameterizing fields in the PARAMETERS section of ComponentSpec, see here. This prevents runtime-parameterizing the pipeline topology. Also, if the declared type of the field is a protobuf, the user needs to pass in a dictionary with exactly the same names for each field, and specify one or more values as RuntimeParameter objects. In other words, the dictionary should be able to be passed into the ParseDict() method and produce the correct pb message.
Step3: TFX Components
Please refer to the official guide for the detailed explanation and purpose of each TFX component. | Python Code:
!python3 -m pip install pip --upgrade --quiet --user
!python3 -m pip install kfp --upgrade --quiet --user
!python3 -m pip install tfx==0.21.2 --quiet --user
Explanation: TFX pipeline example - Chicago Taxi tips prediction
Overview
Tensorflow Extended (TFX) is a Google-production-scale machine
learning platform based on TensorFlow. It provides a configuration framework to express ML pipelines
consisting of TFX components, which brings the user large-scale ML task orchestration, artifact lineage, as well as the power of various TFX libraries. Kubeflow Pipelines can be used as the orchestrator supporting the
execution of a TFX pipeline.
This sample demonstrates how to author a ML pipeline in TFX and run it on a KFP deployment.
Permission
This pipeline requires Google Cloud Storage permission to run.
If KFP was deployed through K8S marketplace, please make sure "Allow access to the following Cloud APIs" is checked when creating the cluster. <img src="check_permission.png">
Otherwise, follow the instructions in the guideline to guarantee that, at a minimum, the service account has the storage.admin role.
End of explanation
# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
Explanation: Note: if you're warned by
WARNING: The script {LIBRARY_NAME} is installed in '/home/jupyter/.local/bin' which is not on PATH.
You might need to fix this by running the next cell and restarting the kernel.
End of explanation
import os
from typing import Text
import kfp
import tensorflow_model_analysis as tfma
from tfx.components import Evaluator
from tfx.components import CsvExampleGen
from tfx.components import ExampleValidator
from tfx.components import Pusher
from tfx.components import SchemaGen
from tfx.components import StatisticsGen
from tfx.components import Trainer
from tfx.components import Transform
from tfx.orchestration import data_types
from tfx.orchestration import pipeline
from tfx.orchestration.kubeflow import kubeflow_dag_runner
from tfx.proto import pusher_pb2
from tfx.utils.dsl_utils import external_input
# In TFX MLMD schema, pipeline name is used as the unique id of each pipeline.
# Assigning workflow ID as part of pipeline name allows the user to bypass
# some schema checks which are redundant for experimental pipelines.
pipeline_name = 'taxi_pipeline_with_parameters'
# Path of pipeline data root, should be a GCS path.
# Note that when running on KFP, the pipeline root is always a runtime parameter.
# The value specified here will be its default.
pipeline_root = os.path.join('gs://{{kfp-default-bucket}}', 'tfx_taxi_simple',
kfp.dsl.RUN_ID_PLACEHOLDER)
# Location of input data, should be a GCS path under which there is a csv file.
data_root_param = data_types.RuntimeParameter(
name='data-root',
default='gs://ml-pipeline-playground/tfx_taxi_simple/data',
ptype=Text,
)
# Path to the module file, GCS path.
# Module file is one of the recommended way to provide customized logic for component
# includeing Trainer and Transformer.
# See https://github.com/tensorflow/tfx/blob/93ea0b4eda5a6000a07a1e93d93a26441094b6f5/tfx/components/trainer/component.py#L38
taxi_module_file_param = data_types.RuntimeParameter(
name='module-file',
default='gs://ml-pipeline-playground/tfx_taxi_simple/modules/taxi_utils.py',
ptype=Text,
)
# Number of epochs in training.
train_steps = data_types.RuntimeParameter(
name='train-steps',
default=10,
ptype=int,
)
# Number of epochs in evaluation.
eval_steps = data_types.RuntimeParameter(
name='eval-steps',
default=5,
ptype=int,
)
Explanation: In this example we'll need TFX SDK later than 0.21 to leverage the RuntimeParameter feature.
RuntimeParameter in TFX DSL
Currently, TFX DSL only supports parameterizing fields in the PARAMETERS section of ComponentSpec, see here. This prevents runtime-parameterizing the pipeline topology. Also, if the declared type of the field is a protobuf, the user needs to pass in a dictionary with exactly the same names for each field, and specify one or more values as RuntimeParameter objects. In other words, the dictionary should be able to be passed into the ParseDict() method and produce the correct pb message.
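For example (a small sketch of the idea only; it assumes the pinned tfx==0.21.2 release exposes trainer_pb2, and the value 100 is arbitrary): the {'num_steps': ...} dictionary used for train_args below must mirror the TrainArgs proto field names, because it is effectively handed to ParseDict():
from google.protobuf import json_format
from tfx.proto import trainer_pb2

# Parses cleanly because 'num_steps' matches the proto field; a misspelled key would raise here.
json_format.ParseDict({'num_steps': 100}, trainer_pb2.TrainArgs())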
End of explanation
# The input data location is parameterized by _data_root_param
examples = external_input(data_root_param)
example_gen = CsvExampleGen(input=examples)
statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
infer_schema = SchemaGen(
statistics=statistics_gen.outputs['statistics'], infer_feature_shape=False)
validate_stats = ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=infer_schema.outputs['schema'])
# The module file used in Transform and Trainer component is paramterized by
# _taxi_module_file_param.
transform = Transform(
examples=example_gen.outputs['examples'],
schema=infer_schema.outputs['schema'],
module_file=taxi_module_file_param)
# The numbers of steps in train_args are specified as RuntimeParameter with
# name 'train-steps' and 'eval-steps', respectively.
trainer = Trainer(
module_file=taxi_module_file_param,
transformed_examples=transform.outputs['transformed_examples'],
schema=infer_schema.outputs['schema'],
transform_graph=transform.outputs['transform_graph'],
train_args={'num_steps': train_steps},
eval_args={'num_steps': eval_steps})
# Set the TFMA config for Model Evaluation and Validation.
eval_config = tfma.EvalConfig(
model_specs=[
# Using signature 'eval' implies the use of an EvalSavedModel. To use
# a serving model remove the signature to defaults to 'serving_default'
# and add a label_key.
tfma.ModelSpec(signature_name='eval')
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
metrics=[
tfma.MetricConfig(class_name='ExampleCount')
],
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
thresholds = {
'binary_accuracy': tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.5}),
change_threshold=tfma.GenericChangeThreshold(
direction=tfma.MetricDirection.HIGHER_IS_BETTER,
absolute={'value': -1e-10}))
}
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced along feature column trip_start_hour.
tfma.SlicingSpec(feature_keys=['trip_start_hour'])
])
# Run model analysis and validation using the eval_config defined above.
evaluator = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
eval_config=eval_config)
pusher = Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=os.path.join(
str(pipeline.ROOT_PARAMETER), 'model_serving'))))
# Create the DSL pipeline object.
# This pipeline obj carries the business logic of the pipeline, but no runner-specific information
# was included.
dsl_pipeline = pipeline.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
components=[
example_gen, statistics_gen, infer_schema, validate_stats, transform,
trainer, evaluator, pusher
],
enable_cache=True,
beam_pipeline_args=['--direct_num_workers=%d' % 0],
)
# Specify a TFX docker image. For the full list of tags please see:
# https://hub.docker.com/r/tensorflow/tfx/tags
tfx_image = 'gcr.io/tfx-oss-public/tfx:0.21.2'
config = kubeflow_dag_runner.KubeflowDagRunnerConfig(
kubeflow_metadata_config=kubeflow_dag_runner
.get_default_kubeflow_metadata_config(),
tfx_image=tfx_image)
kfp_runner = kubeflow_dag_runner.KubeflowDagRunner(config=config)
# KubeflowDagRunner compiles the DSL pipeline object into KFP pipeline package.
# By default it is named <pipeline_name>.tar.gz
kfp_runner.run(dsl_pipeline)
run_result = kfp.Client(
host='1234567abcde-dot-us-central2.pipelines.googleusercontent.com' # Put your KFP endpoint here
).create_run_from_pipeline_package(
pipeline_name + '.tar.gz',
arguments={
# Uncomment following lines in order to use custom GCS bucket/module file/training data.
# 'pipeline-root': 'gs://<your-gcs-bucket>/tfx_taxi_simple/' + kfp.dsl.RUN_ID_PLACEHOLDER,
# 'module-file': '<gcs path to the module file>', # delete this line to use default module file.
# 'data-root': '<gcs path to the data>' # delete this line to use default data.
})
Explanation: TFX Components
Please refer to the official guide for the detailed explanation and purpose of each TFX component.
End of explanation |
777 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python String Encoding
Characters and encodings
What makes up a character
Byte sequence
Step1: Unicode literals
Prefixing a quoted string with u makes Python treat it as a unicode string
Internally it is stored as Unicode code points
Step2: Unicode encoding / decoding
encode
a method of the unicode type
unicode -> string (byte sequence)
decode
a method of the str type
str -> unicode
Step3: What happens if you apply the encode method to a str, or the decode method to a unicode?
Step4: Applying the encode method to a str | Python Code:
c = "a"
c
print(c)
x = "가"
x
print(x)
print(x.__repr__())
x = ["가"]
print(x)
x = "가"
len(x)
x = "ABC"
y = "가나다"
print(len(x), len(y))
print(x[0], x[1], x[2])
print(y[0], y[1], y[2])
print(y[0], y[1], y[2], y[3])
Explanation: Python String Encoding
Characters and encodings
What makes up a character
Byte sequence: the data actually stored on the computer; each character is assigned a byte sequence
Glyph: the picture of the character that you see on screen
http://www.asciitable.com/
http://www.kreativekorp.com/charset/encoding.php?name=CP949
Code point: a number assigned to each character, independent of any byte sequence (Unicode)
Encoding (scheme)
The rule that assigns a byte sequence to each character
Basic ASCII encoding
Korean (Hangul) encodings
euc-kr
cp949
utf-8
References
http://d2.naver.com/helloworld/19187
http://d2.naver.com/helloworld/76650
Python 2 strings
string type (the default)
A byte string in the encoding configured by the computer environment
unicode type
Stored internally as Unicode code points
Use the encode/decode commands to convert to/from a string (byte string)
In Python 3 the unicode type is the default
How Python displays strings
__repr__()
What is shown when you simply type the variable name
Also used when the value is an element of another object
Characters that cannot be shown with the ASCII table are displayed in escaped string format
The print() command
Looks up an available glyph (font) and renders the character
End of explanation
y = u"가"
y
print(y)
y = u"가나다"
print(y[0], y[1], y[2])
Explanation: Unicode literals
Prefixing a quoted string with u makes Python treat it as a unicode string
Internally it is stored as Unicode code points
End of explanation
print(type(y))
z1 = y.encode("cp949")
print(type(z1))
print(z1)
print(type(y))
z2 = y.encode("utf-8")
print(type(z2))
print(z2)
print(type(z1))
y1 = z1.decode("cp949")
print(type(y1))
print(y1)
print(type(z2))
y2 = z2.decode("utf-8")
print(type(y2))
print(y2)
Explanation: Unicode encoding / decoding
encode
a method of the unicode type
unicode -> string (byte sequence)
decode
a method of the str type
str -> unicode
End of explanation
"가".encode("utf-8")
unicode("가", "ascii").encode("utf-8")
u"가".decode("utf-8")
u"가".encode("ascii").decode("utf-8")
Explanation: What happens if you apply the encode method to a str, or the decode method to a unicode?
End of explanation
u"가".encode("utf-8"), u"가".encode("cp949"), "가"
import sys
print(sys.getdefaultencoding())
print(sys.stdin.encoding)
print(sys.stdout.encoding)
import locale
print(locale.getpreferredencoding())
Explanation: Applying the encode method to a str:
Python first tries to convert it to unicode internally (using the default codec)
Applying the decode method to a unicode:
Python simply assumes the value is already a byte string
Default encoding
End of explanation |
778 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning with TensorFlow
Credits
Step2: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
Step3: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
Step4: 直接 Load
Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint
Step5: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint
Step6: Problem 3
Convince yourself that the data is still good after shuffling!
Step7: Problem 4
Another check
Step8: Prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed.
Also create a validation dataset for hyperparameter tuning.
Step9: Finally, let's save the data for later reuse
Step10: Load Dataset
Step11: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions | Python Code:
print 'xxxx'
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
import matplotlib.pyplot as plt
import numpy as np
import os
import tarfile
import urllib
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
import cPickle as pickle
Explanation: Deep Learning with TensorFlow
Credits: Forked from TensorFlow by Google
Setup
Refer to the setup instructions.
Exercise 1
The objective of this exercise is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
End of explanation
url = 'http://yaroslavvb.com/upload/notMNIST/'
def maybe_download(filename, expected_bytes):
  """Download a file if not present, and make sure it's the right size."""
if not os.path.exists(filename):
filename, _ = urllib.urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print 'Found and verified', filename
else:
raise Exception(
'Failed to verify' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
End of explanation
num_classes = 10
def extract(filename):
# tar = tarfile.open(filename)
# tar.extractall()
# tar.close()
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
data_folders = [os.path.join(root, d) for d in sorted(os.listdir(root))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
        num_classes, len(data_folders)))
print data_folders
return data_folders
train_folders = extract(train_filename)
print train_folders
test_folders = extract(test_filename)
print test_folders
Explanation: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
End of explanation
parent_folder = './'
all_train_filles = []
for i in range(10):
folder = train_folders[i]
flist =os.listdir(parent_folder+folder)
for fname in flist :
all_train_filles.append((i,parent_folder+folder+'/'+fname))
from random import shuffle
print all_train_filles[:10]
shuffle(all_train_filles)
print all_train_filles[:10]
def getNext():
import numpy as np
s = 100
batch_szie = 100
image_size = 28
partial_data = all_train_filles[s:s+batch_szie]
lables = np.ndarray(shape=(batch_szie), dtype=np.int32)
dataset = np.ndarray(
shape=(batch_szie, image_size, image_size), dtype=np.float32)
image_index = 0
for label,fname in partial_data:
image_data = (ndimage.imread(fname).astype(float) -
128 / 2) / 128
# print image_data
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[image_index,:,:] = image_data
lables[image_index] = label
image_index += 1
return lables,dataset
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load(data_folders, min_num_images, max_num_images):
dataset = np.ndarray(
shape=(max_num_images, image_size, image_size), dtype=np.float32)
labels = np.ndarray(shape=(max_num_images), dtype=np.int32)
label_index = 0
image_index = 0
for folder in data_folders:
print folder
for image in os.listdir(folder):
if image_index >= max_num_images:
# raise Exception('More images than expected: %d >= %d' % (
# num_images, max_num_images))
print "Too Many Images",folder
break
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[image_index, :, :] = image_data
labels[image_index] = label_index
image_index += 1
except IOError as e:
print 'Could not read:', image_file, ':', e, '- it\'s ok, skipping.'
label_index += 1
num_images = image_index
dataset = dataset[0:num_images, :, :]
labels = labels[0:num_images]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' % (
num_images, min_num_images))
print 'Full dataset tensor:', dataset.shape
print 'Mean:', np.mean(dataset)
print 'Standard deviation:', np.std(dataset)
print 'Labels:', labels.shape
return dataset, labels
train_dataset, train_labels = load(train_folders, 450000, 550000)
test_dataset, test_labels = load(test_folders, 18000, 20000)
Explanation: Load directly
Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
Now let's load the data in a more manageable format.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. The labels will be stored into a separate array of integers 0 through 9.
A few images might not be readable, we'll just skip them.
End of explanation
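# A hedged sketch for Problem 1: show one randomly chosen raw image per class.
# Uses matplotlib instead of IPython.display; assumes the folders in `train_folders`.
import random
fig = plt.figure(figsize=(10, 2))
for i, folder in enumerate(train_folders):
    image_file = os.path.join(folder, random.choice(os.listdir(folder)))
    ax = fig.add_subplot(1, len(train_folders), i + 1)
    ax.imshow(ndimage.imread(image_file), cmap='gray')
    ax.set_title(os.path.basename(folder))
    ax.axis('off')
plt.show()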
np.random.seed(133)
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
train_labels[:100]
Explanation: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
End of explanation
print 'train_dataset'
print train_dataset.shape
print train_dataset.mean()
print train_dataset.std()
print 'test_dataset'
print test_dataset.shape
print test_dataset.mean()
print test_dataset.std()
Explanation: Problem 3
Convince yourself that the data is still good after shuffling!
End of explanation
t_lable_count =[0]*10
for i in train_labels:
t_lable_count[i] +=1
t_lable_count
test_lable_count =[0]*10
for i in test_labels:
test_lable_count[i] +=1
test_lable_count
Explanation: Problem 4
Another check: we expect the data to be balanced across classes. Verify that.
End of explanation
train_size = 200000
valid_size = 10000
valid_dataset = train_dataset[:valid_size,:,:]
valid_labels = train_labels[:valid_size]
train_dataset = train_dataset[valid_size:valid_size+train_size,:,:]
train_labels = train_labels[valid_size:valid_size+train_size]
print 'Training', train_dataset.shape, train_labels.shape
print 'Validation', valid_dataset.shape, valid_labels.shape
Explanation: Prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed.
Also create a validation dataset for hyperparameter tuning.
End of explanation
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print 'Unable to save data to', pickle_file, ':', e
raise
statinfo = os.stat(pickle_file)
print 'Compressed pickle size:', statinfo.st_size
Explanation: Finally, let's save the data for later reuse:
End of explanation
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print 'Training set', train_dataset.shape, train_labels.shape
print 'Validation set', valid_dataset.shape, valid_labels.shape
print 'Test set', test_dataset.shape, test_labels.shape
Explanation: Load Dataset
End of explanation
import tensorflow as tf
x = tf.placeholder(tf.float32,shape=[None,28*28],name='Input_X')
y = tf.placeholder(tf.float32,shape=[None,10],name='Input_Y')
W = tf.Variable(tf.truncated_normal(shape=[28*28,10]))
b = tf.Variable(tf.truncated_normal(shape=[10]))
xw = tf.matmul(x,W)
r = xw + b
cost = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(r,y))
a = tf.nn.softmax(r)
# cost_array = y*tf.log(a)
# cost = -1*tf.reduce_sum(cost_array )/1000
op = tf.train.GradientDescentOptimizer(0.5).minimize(cost)
init = tf.initialize_all_variables()
session = tf.Session()
session.run(init)
batch_size =128
batch_list = []
for i in range(0,train_labels[:1024].shape[0]/batch_size):
# print (i*batch_size,i*batch_size+batch_size)
batch_list.append((i*batch_size,(i+1)*batch_size))
# batch_list.append(((i+1)*batch_size,train_labels.shape[0]-1))
# for start,end in batch_list:
start,end = batch_list[0]
print start,end
input_images = np.reshape(train_dataset[start:end],[batch_size,28*28])
output_labes = np.zeros([batch_size,10],dtype=np.int)
output_labes[100][9]
for epcho in range(10000):
    for start,end in batch_list:
        # rebuild the mini-batch every step: flatten the images and one-hot encode the labels
        input_images = np.reshape(train_dataset[start:end],[batch_size,28*28])
        output_labes = np.zeros([batch_size,10],dtype=np.int)
        for index,value in enumerate(train_labels[start:end]):
            output_labes[index][value] = 1
        session.run(op,feed_dict={x:input_images,y:output_labes})
if epcho % 100 == 0 :
print "epcho",epcho,
print session.run(cost,feed_dict={x:input_images,y:output_labes})/batch_size,
test_image = np.reshape(train_dataset[:1024],[-1,28*28])
prediction = tf.arg_max(a,1)
pr = session.run(prediction,feed_dict={x:test_image,})
print np.sum(np.equal(pr,train_labels[:1024])*1)*1./test_image.shape[0]
test_image = np.reshape(train_dataset,[-1,28*28])
prediction = tf.arg_max(a,1)
pr = session.run(prediction,feed_dict={x:test_image,})
np.sum(np.equal(pr,train_labels)*1)*1./test_image.shape[0]
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib
%matplotlib inline
import numpy
tmp2 = train_dataset[2222].reshape((28,28))
plt.imshow(tmp2, cmap = cm.Greys)
plt.show()
Explanation: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions:
- What about near duplicates between datasets? (images that are almost identical)
- Create a sanitized validation and test set, and compare your accuracy on those in subsequent exercises.
Problem 6
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.
Optional question: train an off-the-shelf model on all the data!
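# A hedged sketch for Problem 6: an off-the-shelf LogisticRegression baseline on
# increasing training-set sizes (sizes and reshaping here are illustrative).
for n in [50, 100, 1000, 5000]:
    clf = LogisticRegression()
    clf.fit(train_dataset[:n].reshape(n, -1), train_labels[:n])
    acc = clf.score(test_dataset.reshape(len(test_dataset), -1), test_labels)
    print 'n=%d, test accuracy: %.3f' % (n, acc)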
End of explanation |
779 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Starman
This notebook integrates the orbit of Elon Musk's Tesla and Starman.
Step1: We start by querying NASA Horizons for the Solar System planets around the time of the orbit injection.
Step2: We stored the simulation to a binary file. This allows us to reload it quickly to play around with things without having to query NASA Horizons too often.
Next up, we add the tesla to the simulation. As the orbital parameters are also in NASA Horizons, we can simply add it (and ignore the fact that the particle is set to no mass)
Step3: Let's calculate the characteristic energy.
Step4: That seems about right! So let's look at the orbit. It starts at Earth's orbit, crosses that of Mars and then enters the asteroid belt.
Step5: And then integrate it forward in time. Here, we use the hybrid integrator MERCURIUS. You can experiment with other integrators which might be faster, but since this is an eccentric orbit, you might see many close encounters, so you either need a non-symplectic integrator such as IAS15 or a hybrid integrator such as MERCURIUS.
Step6: Let's plot the orbital parameters!
Step7: To check the sensitivity of the integrations, let us perturb the initial orbit by a small factor equal to the confidence interval posted by Bill Gray (https
Step8: Let's integrate this...
Step9: When plotting the semi-major axis and eccentricity of all orbits, note that their kicks are correlated. This is because they are all due to close encounters with the Earth. This fast divergence means that we cannot predict the trajectory for more than a hundred years without knowing the precise initial conditions and all the non-gravitational effects that might be acting on a car in space. | Python Code:
import rebound
import numpy as np
%matplotlib inline
Explanation: Starman
This notebook integrates the orbit of Elon Musk's Tesla and Starman.
End of explanation
sim = rebound.Simulation()
sim.add(["Sun","Mercury","Venus","Earth","Mars","Jupiter","Saturn","Uranus","Neptune"],date="2018-02-10 00:00")
sim.save("ss.bin")
Explanation: We start by querying NASA Horizons for the Solar System planets around the time of the orbit injection.
End of explanation
sim = rebound.Simulation("ss.bin")
sim.add("SpaceX Roadster")
Explanation: We stored the simulation to a binary file. This allows us to reload it quickly to play around with things without having to query NASA Horizons too often.
Next up, we add the tesla to the simulation. As the orbital parameters are also in NASA Horizons, we can simply add it (and ignore the fact that the particle is set to no mass):
End of explanation
tesla = sim.particles[-1]
earth = sim.particles[3]
r=np.linalg.norm(np.array(tesla.xyz) - np.array(earth.xyz))
v=np.linalg.norm(np.array(tesla.vxyz) - np.array(earth.vxyz))
energy = 0.5*v*v-earth.m/r
c3 = 2.*energy*887.40652 # from units where G=1, length=1AU to km and s
print("c3 = %f (km^2/s^2)" % c3)
Explanation: Let's calculate the characteristic energy.
End of explanation
rebound.OrbitPlot(sim,slices=0.3,color=True,xlim=[-3,3],ylim=[-3,3]);
Explanation: That seems about right! So let's look at the orbit. It starts at Earth's orbit, crosses that of Mars and then enters the asteroid belt.
End of explanation
# integrate
sim.dt = sim.particles[1].P/60. # small fraction of Mercury's period
sim.integrator = "mercurius"
N = 1000
times = np.linspace(0.,2.*np.pi*1e5,N)
a = np.zeros(N)
e = np.zeros(N)
for i,t in enumerate(times):
sim.integrate(t,exact_finish_time=0)
orbit = sim.particles[-1].calculate_orbit(primary=sim.particles[0])
a[i] = orbit.a
e[i] = orbit.e
Explanation: And then integrate it forward in time. Here, we use the hybrid integrator MERCURIUS. You can experiment with other integrators which might be faster, but since this is an eccentric orbit, you might see many close encounters, so you either need a non-symplectic integrator such as IAS15 or a hybrid integrator such as MERCURIUS.
End of explanation
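# A hedged cross-check sketch: the same setup could also be integrated with the
# non-symplectic IAS15 integrator (adaptive time steps, no fixed dt required).
sim_check = rebound.Simulation("ss.bin")
sim_check.add("SpaceX Roadster")
sim_check.integrator = "ias15"
sim_check.integrate(2.*np.pi*10.)  # ~10 yr in these units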
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(9,7))
ax = plt.subplot(211)
ax.set_xlim([0,np.max(times)/2./np.pi])
ax.set_xlabel("time [yrs]")
ax.set_ylabel("semi-major axis [AU]")
plt.plot(times/2./np.pi,a)
ax = plt.subplot(212)
ax.set_xlim([0,np.max(times)/2./np.pi])
ax.set_xlabel("time [yrs]")
ax.set_ylabel("eccentricity")
plt.plot(times/2./np.pi,e);
Explanation: Let's plot the orbital parameters!
End of explanation
sim = rebound.Simulation("ss.bin")
Ntesla = 10
for i in range(Ntesla):
sim.add(primary=sim.particles[0],
M=(tesla.orbit.M+0.0013*np.random.normal()) *np.pi/180.,
a=(tesla.orbit.a+0.000273*np.random.normal()),
omega = (tesla.orbit.omega+0.00059*np.random.normal()) *np.pi/180.,
Omega = (tesla.orbit.Omega+0.0007*np.random.normal()) *np.pi/180.,
e = (tesla.orbit.e+0.00015*np.random.normal()),
inc = (tesla.orbit.inc+0.0007*np.random.normal()) *np.pi/180.)
sim.N_active = 9 # Sun + planets
Explanation: To check the sensitivity of the integrations, let us perturb the initial orbit by a small factor equal to the confidence interval posted by Bill Gray (https://projectpluto.com/temp/spacex.htm#elements). Instead of just integrating one particle at a time, we here add 10 test particles. We also switch to the high precision IAS15 integrator to get the most reliable result.
End of explanation
sim.dt = sim.particles[1].P/60. # small fraction of Mercury's period
sim.integrator="ias15"
N = 1000
times = np.linspace(0.,2.*np.pi*1e3,N)
a_log = np.zeros((N,Ntesla))
e_log = np.zeros((N,Ntesla))
for i,t in enumerate(times):
sim.integrate(t,exact_finish_time=0)
for j in range(Ntesla):
orbit = sim.particles[9+j].calculate_orbit(primary=sim.particles[0])
a_log[i][j] = orbit.a
e_log[i][j] = orbit.e
Explanation: Let's integrate this...
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(9,7))
ax = plt.subplot(211)
ax.set_xlim([0,np.max(times)/2./np.pi])
ax.set_xlabel("time [yrs]")
ax.set_ylabel("semi-major axis [AU]")
for j in range(Ntesla):
plt.plot(times/2./np.pi,a_log[:,j])
ax = plt.subplot(212)
ax.set_xlim([0,np.max(times)/2./np.pi])
ax.set_xlabel("time [yrs]")
ax.set_ylabel("eccentricity")
for j in range(Ntesla):
plt.plot(times/2./np.pi,e_log[:,j])
Explanation: When plotting the semi-major axis and eccentricity of all orbits, note that their kicks are correlated. This is because they are all due to close encounters with the Earth. This fast divergence means that we cannot predict the trajectory for more than a hundred years without knowing the precise initial conditions and all the non-gravitational effects that might be acting on a car in space.
End of explanation |
780 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
0. random init
for initial centroids
Step1: 1. cluster assignment
http
Step2: 1 epoch cluster assigning
Step3: See the first round clustering result
Step4: 2. calculate new centroid
Step5: putting it all together, take 1
this is just one-shot k-means; if the random init picks bad starting centroids, the final clustering may be very sub-optimal
Step6: calculate the cost
Step7: k-means with multiple tries of random init, picking the run with the least cost
Step8: try sklearn kmeans | Python Code:
km.random_init(data2, 3)
Explanation: 0. random init
for initial centroids
End of explanation
init_centroids = km.random_init(data2, 3)
init_centroids
x = np.array([1, 1])
fig, ax = plt.subplots(figsize=(6,4))
ax.scatter(x=init_centroids[:, 0], y=init_centroids[:, 1])
for i, node in enumerate(init_centroids):
ax.annotate('{}: ({},{})'.format(i, node[0], node[1]), node)
ax.scatter(x[0], x[1], marker='x', s=200)
km._find_your_cluster(x, init_centroids)
Explanation: 1. cluster assignment
http://stackoverflow.com/questions/14432557/matplotlib-scatter-plot-with-different-text-at-each-data-point
find closest cluster experiment
End of explanation
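# A minimal pure-numpy sketch of the assignment step (independent of the km helpers),
# assuming `data2` holds the 2-D points and `init_centroids` is a (k, 2) array.
import numpy as np
def assign_to_nearest(points, centroids):
    # distance from every point to every centroid, then take the closest centroid index
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)
assign_to_nearest(np.asarray(data2), np.asarray(init_centroids))[:10]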
C = km.assign_cluster(data2, init_centroids)
data_with_c = km.combine_data_C(data2, C)
data_with_c.head()
Explanation: 1 epoch cluster assigning
End of explanation
sns.lmplot('X1', 'X2', hue='C', data=data_with_c, fit_reg=False)
Explanation: See the first round clustering result
End of explanation
km.new_centroids(data2, C)
Explanation: 2. calculate new centroid
End of explanation
final_C, final_centroid, _= km._k_means_iter(data2, 3)
data_with_c = km.combine_data_C(data2, final_C)
sns.lmplot('X1', 'X2', hue='C', data=data_with_c, fit_reg=False)
Explanation: putting it all together, take 1
this is just one-shot k-means; if the random init picks bad starting centroids, the final clustering may be very sub-optimal
End of explanation
km.cost(data2, final_centroid, final_C)
Explanation: calculate the cost
End of explanation
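# A hedged sketch of the distortion cost in plain numpy: the mean squared distance of each
# point to its assigned centroid (the names `points`, `labels`, `centroids` are illustrative).
import numpy as np
def kmeans_cost(points, labels, centroids):
    diffs = points - centroids[labels]
    return np.mean(np.sum(diffs ** 2, axis=1))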
best_C, best_centroids, least_cost = km.k_means(data2, 3)
least_cost
data_with_c = km.combine_data_C(data2, best_C)
sns.lmplot('X1', 'X2', hue='C', data=data_with_c, fit_reg=False)
Explanation: k-means with multiple tries of random init, picking the run with the least cost
End of explanation
from sklearn.cluster import KMeans
sk_kmeans = KMeans(n_clusters=3)
sk_kmeans.fit(data2)
sk_C = sk_kmeans.predict(data2)
data_with_c = km.combine_data_C(data2, sk_C)
sns.lmplot('X1', 'X2', hue='C', data=data_with_c, fit_reg=False)
Explanation: try sklearn kmeans
End of explanation |
781 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing the non-Markovian Path Analysis Package
Step1: 2D Toy model
Step2: MC simulation
Step3: 1 - Ensemble class (analysis of continuous trajectories)
Stores an ensemble (list) of trajectories (np.arrays). The ensemble can contain any number of trajectories, including no trajectories at all.
Creating an Ensemble
Step4: From a single trajectory
Step5: From a list of trajectories
Step6: Ensembles are iterable objects
Step7: Adding trajectories to the Ensemble
New trajectories can be added to the ensemble as long as there is consistency in the number of variables.
Step8: "Printing" the ensemble
Step9: Defining states and computing MFPTs
The states are treated as intervals when the class is Ensemble
Step10: Sum of ensembles (ensemble + ensemble)
Step11: Another simple example
Step12: Computing the count matrix and transition matrix
Step13: 2 - PathEnsemble class
Creating a path ensemble object
Step14: From ensemble
Step15: MFPTs
Step16: Count matrix
Step17: 3 - DiscreteEnsemble class
We can generate a discrete ensemble from the same mapping function, and we should obtain exactly the same result
Step18: Count matrix and transition matrix
Step19: Defining states and computing MFPTs
The states are now treated as sets; defining the states as follows, we should obtain the same results
Step20: Generating a Discrete Ensemble from the transition matrix
Step21: 4 - DiscretePathEnsemble class
Creating the DPE
From Ensemble
Step22: From the transition matrix
Step23: Fundamental sequence
Step24: Plotting paths A -> B
Step25: Plotting Fundamental Sequences A -> B | Python Code:
import sys
sys.path.append("../nmpath/")
from tools_for_notebook0 import *
%matplotlib inline
from mappers import rectilinear_mapper
from ensembles import Ensemble, DiscreteEnsemble, PathEnsemble, DiscretePathEnsemble
Explanation: Testing the non-Markovian Path Analysis Package
End of explanation
plot_traj([],[])
Explanation: 2D Toy model
End of explanation
#Generating MC trajectories
mc_traj1_2d = mc_simulation2D(100000)
mc_traj2_2d = mc_simulation2D(10000)
Explanation: MC simulation
End of explanation
# Empty ensemble with no trajectories
my_ensemble = Ensemble()
Explanation: 1 - Ensemble class (analysis of continuous trajectories)
Stores an ensemble (list) of trajectories (np.arrays). The ensemble can contain any number of trajectories, including no trajectories at all.
Creating an Ensemble
End of explanation
# from a single trajectory
my_ensemble = Ensemble([mc_traj1_2d],verbose=True)
Explanation: From a single trajectory:
End of explanation
# We have to set list_of_trajs = True
my_list_of_trajs = [mc_traj1_2d, mc_traj2_2d]
my_ensemble = Ensemble(my_list_of_trajs, verbose=True)
Explanation: From a list of trajectories:
End of explanation
for traj in my_ensemble:
print(len(traj))
Explanation: Ensembles are iterable objects
End of explanation
my_ensemble = Ensemble(verbose=True)
my_ensemble.add_trajectory(mc_traj1_2d)
my_ensemble.add_trajectory(mc_traj2_2d)
Explanation: Adding trajectories to the Ensemble
New trajectories can be added to the ensemble as long as there is consistency in the number of variables.
End of explanation
print(my_ensemble)
Explanation: "Printing" the ensemble
End of explanation
stateA = [[0,pi],[0,pi]]
stateB = [[5*pi,6*pi],[5*pi,6*pi]]
my_ensemble.empirical_mfpts(stateA, stateB)
Explanation: Defining states and computing MFPTs
The states are treated as intervals when the class is Ensemble
End of explanation
seq1 = mc_simulation2D(20000)
seq2 = mc_simulation2D(20000)
my_e1 = Ensemble([seq1])
my_e2 = Ensemble([seq2])
ensemble1 = my_e1 + my_e2
Explanation: Sum of ensembles (ensemble + ensemble)
End of explanation
e1 = Ensemble([[1.,2.,3.,4.]],verbose=True)
e2 = Ensemble([[2,3,4,5]])
e3 = Ensemble([[2,1,1,4]])
my_ensembles = [e1, e2, e3]
ensemble_tot = Ensemble([])
for ens in my_ensembles:
ensemble_tot += ens
#ensemble_tot.mfpts([1,1],[4,4])
Explanation: Another simple example
End of explanation
n_states = N**2
bin_bounds = [[i*pi for i in range(7)],[i*pi for i in range(7)]]
C1 = my_ensemble._count_matrix(n_states, map_function=rectilinear_mapper(bin_bounds))
print(C1)
K1 = my_ensemble._mle_transition_matrix(n_states, map_function=rectilinear_mapper(bin_bounds))
print(K1)
Explanation: Computing the count matrix and transition matrix
End of explanation
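# A hedged numpy sketch of the relation between the two matrices: a transition matrix is the
# count matrix with each row normalized to sum to one (rows with no counts are left as zeros).
import numpy as np
def counts_to_transition_matrix(C):
    C = np.asarray(C, dtype=float)
    row_sums = C.sum(axis=1, keepdims=True)
    return np.divide(C, row_sums, out=np.zeros_like(C), where=row_sums > 0)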
#p_ensemble = PathEnsemble()
Explanation: 2 - PathEnsemble class
Creating a path ensemble object
End of explanation
p_ensemble = PathEnsemble.from_ensemble(my_ensemble, stateA, stateB)
print(p_ensemble)
Explanation: From ensemble
End of explanation
p_ensemble.empirical_mfpts(stateA, stateB)
Explanation: MFPTs
End of explanation
print(p_ensemble._count_matrix(n_states, mapping_function2D))
#clusters = p_ensemble.cluster(distance_metric = 'RMSD', n_cluster=10, method = 'K-means')
Explanation: Count matrix
End of explanation
d_ens = DiscreteEnsemble.from_ensemble(my_ensemble, mapping_function2D)
print(d_ens)
Explanation: 3 - DiscreteEnsemble class
We can generate a discrete ensemble from the same mapping function, and we should obtain exactly the same result:
End of explanation
C2 = d_ens._count_matrix(n_states)
print(C2)
K2= d_ens._mle_transition_matrix(n_states)
print(K2)
Explanation: Count matrix and transition matrix
End of explanation
stateA = [0]
stateB = [N*N-1]
d_ens.empirical_mfpts(stateA, stateB)
Explanation: Defining states and computing MFPTs
The states are now treated as sets; defining the states as follows, we should obtain the same results
End of explanation
d_ens2 = DiscreteEnsemble.from_transition_matrix(K2, sim_length = 100000)
#d_ens2.mfpts(stateA,stateB)
Explanation: Generating a Discrete Ensemble from the transition matrix
End of explanation
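# A hedged sketch of what "generating from a transition matrix" amounts to: sampling a
# discrete Markov chain from a row-stochastic matrix K (illustrative, plain numpy).
import numpy as np
def sample_chain(K, length, start=0, seed=None):
    K = np.asarray(K)
    rng = np.random.RandomState(seed)
    states = [start]
    for _ in range(length - 1):
        states.append(rng.choice(len(K), p=K[states[-1]]))
    return np.array(states)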
dpathEnsemble = DiscretePathEnsemble.from_ensemble(my_ensemble, stateA, stateB, mapping_function2D)
print(dpathEnsemble)
#MFPT from the transition matrix
dpathEnsemble.nm_mfpt(ini_probs = None, n_states = N*N)
Explanation: 4 - DiscretePathEnsemble class
Creating the DPE
From Ensemble
End of explanation
n_paths = 200
dpathEnsemble2 = DiscretePathEnsemble.from_transition_matrix\
(K2, stateA = stateA, stateB = stateB, n_paths = n_paths,ini_pops = [1])
print(dpathEnsemble2)
Explanation: From the transition matrix
End of explanation
FSs, weights, count = dpathEnsemble2.weighted_fundamental_sequences(K2)
size = len(FSs)
paths = dpathEnsemble2.trajectories
print(count)
Explanation: Fundamental sequence
End of explanation
discrete = [True for i in range(size)]
plot_traj([[paths[i],[]] for i in range(size)] , discrete, \
line_width=0.2, std=0.5, color='k', title = '{} paths A->B'.format(n_paths))
Explanation: Plotting paths A -> B
End of explanation
plot_traj([[FSs[i],[]] for i in range(size)] ,discrete, \
line_width=0.5, std=0.2, color='k', title = '{} FSs A->B'.format(n_paths))
lw = [weights[i]*100 for i in range(size)]
#np.random.seed(12)
plot_traj([[FSs[i],[]] for i in range(size)] ,discrete=[True for i in range(size)],\
line_width = lw,std = 0.002, alpha=0.25)
Explanation: Plotting Fundamental Sequences A -> B
End of explanation |
782 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Photometric Inference
This notebook outlines the basics of how to conduct basic redshift inference (i.e. a set of intrinsic labels) using photometry (i.e. a set of observed features).
Setup
Step1: Data
For our proof-of-concept tests, we will use the mock SDSS data we previously generated.
Step2: Inference with Noisy Redshifts
For every observed galaxy $g \in \mathbf{g}$ out of $N_\mathbf{g}$ galaxies, let's assume we have an associated noisy redshift estimate $\hat{z}g$ with PDF $P(\hat{z}_g | z)$. We are interested in constructing an estimate for the population redshift distribution $N(z|\mathbf{g})$ by projecting our results onto a relevant (possibly noisy) redshift basis $\lbrace \dots, P(\hat{z}_h|z) \equiv K(z|\hat{z}_h), \dots \rbrace$ indexed by $h \in \mathbf{h}$ with $N{\mathbf{h}}$ elements. The use of $K(\cdot|\cdot)$ instead of $P(\cdot|\cdot)$ here is used to suggest the use of an underlying redshift kernel. We will return to this later.
Abusing notation slightly, we can write our likelihood between $g$ and $h$ as
$$ \mathcal{L}(g|h) \equiv P(\hat{z}_g | \hat{z}_h) = \int P(\hat{z}_g | z) K(z | \hat{z}_h) dz $$
where we have marginalized over the true redshift $z$. Note that the likelihood is (by construction) unnormalized so that $\sum_g \mathcal{L}(g|h) \neq 1$.
Combined with a prior over our basis $P(h)$, we can then write the posterior between $h$ and $g$ using Bayes Theorem as
$$ P(h|g) = \frac{\mathcal{L}(g|h)\pi(h)}{\mathcal{Z}_g}
= \frac{\mathcal{L}(g|h)\pi(h)}{\sum_h \mathcal{L}(g|h) \pi(h)} $$
where $\pi(g)$ is the prior and $\mathcal{Z}_g$ is the evidence (i.e. marginal likelihood) of $g$.
We are interested in the number density of observed galaxies as a function of redshift, $N(z|\mathbf{g})$. We can define this as a weighted sum over our redshift basis
$$ N(z|\mathbf{g}) = \sum_h w_h(\mathbf{g}) \, K(z|\hat{z}_h) $$
where $w_h(\mathbf{g})$ are the associated weights. For now, we will take the ansatz that $w_h(\mathbf{g}) = \sum_g P(h|g)$, i.e. that we can estimate $N(z|\mathbf{g})$ by stacking all our galaxy PDFs. This isn't quite correct but is sufficient for our purposes here; we will illustrate how to derive these weights properly in a later notebook.
Inference with Noisy Photometry
Here, we want to do this same exercise over our set of observed $N_{\mathbf{b}}$-dimensional features $\mathbf{F}$ with PDF $P(\hat{\mathbf{F}}|g)$. The only difference from the case above is that we are dealing with observables $P(\hat{\mathbf{F}}|h)$ rather than kernels. Applying Bayes Theorem and emulating our previous example gives us
$$ \mathcal{L}(g|h) \equiv P(\mathbf{F}_g| \mathbf{F}_h)
= \int P(\hat{\mathbf{F}}_g | \mathbf{F}) P(\mathbf{F} | \hat{\mathbf{F}}_h) d\mathbf{F}
= \frac{\int P(\hat{\mathbf{F}}_g | \mathbf{F}) P(\hat{\mathbf{F}}_h | \mathbf{F}) \pi(\mathbf{F}) d\mathbf{F}}{\int P(\hat{\mathbf{F}}_h | \mathbf{F}) \pi(\mathbf{F}) d\mathbf{F}}$$
where we have now introduced $\pi(\mathbf{F})$ to be a $p$-dimensional prior over the true features. For our purposes, we will assume these set of features correspond to a set of observed flux densities $\hat{F}{i,b}$ in a set of $N{\mathbf{b}}$ photometric bands indexed by $b \in \mathbf{b}$.
In practice, $\mathbf{g}$ constitutes a set of unlabeled objects with unknown properties while $\mathbf{h}$ is a set of labeled objects with known properties. Labeled objects might constitute a particular "training set" (in machine learning-based applications) or a set of models (in template fitting-based applications).
We are interested in inferring the redshift PDF $P(z|g)$ for our observed object $g$ based on its observed photometry $\hat{\mathbf{F}}_g$. Given our labeled objects $\mathbf{h}$ with corresponding redshift kernels $K(z|h)$, this is just
$$
P(z|g) = \sum_h K(z|h)P(h|g) = \frac{\sum_h K(z|h)\mathcal{L}(g|h)\pi(h)}{\sum_h \mathcal{L}(g|h)\pi(h)}
$$
which corresponds to a posterior-weighted mixture of the $K(z|h)$ redshift kernels.
The "Big Data" Approximation
It is important to note that we've made a pretty big assumption here
Step3: Photometric Likelihoods
For most galaxies, we can take $P(\hat{\mathbf{F}}_i|\mathbf{F})$ to be a multivariate Normal (i.e. Gaussian) distribution such that
$$
P(\hat{\mathbf{F}}i|\mathbf{F}) = \mathcal{N}(\hat{\mathbf{F}}_i|\mathbf{F},\mathbf{C}_i)
\equiv \frac{\exp\left[-\frac{1}{2}||\hat{\mathbf{F}}_i-\mathbf{F}||{\mathbf{C}_i}^2\right]}{|2\pi\mathbf{C}_i|^{1/2}}
$$
where
$$
||\hat{\mathbf{F}}i-\mathbf{F}||{\mathbf{C}_i}^2 \equiv (\hat{\mathbf{F}}_i-\mathbf{F})^{\rm T}\mathbf{C}_i^{-1}(\hat{\mathbf{F}}_i-\mathbf{F})
$$
is the squared Mahalanobis distance between $\hat{\mathbf{F}}_i$ and $\mathbf{F}$ given covariance matrix $\mathbf{C}_i$ (i.e. the photometric errors), ${\rm T}$ is the transpose operator, and $|\mathbf{C}_i|$ is the determinant of $\mathbf{C}_g$.
While we will use matrix notation for compactness, in practice we will assume all our covariances are diagonal (i.e. the errors are independent) such that
$$
||\hat{\mathbf{F}}g-\mathbf{F}||{\mathbf{C}g}^2 = \sum{b} \frac{(\hat{F}{g,b}-F_b)^2}{\sigma^2{g,b}}
$$
Likelihood
Step7: Note that the log-likelihood function defined in frankenz contains a number of additional options that have been specified above. These will be discussed later.
Step8: As expected, the PDF computed from our noisy photometry is broader than than the noiseless case.
Likelihood
Step9: Finally, it's useful to compare the case where we compute our posteriors directly from our underlying model grid and apply our priors directly. By construction, this should agree with the "true" posterior distribution up to the approximation that for a given template $t$ and redshift $z$ we can take the model to have a magnitude based on $\ell_{\rm ML}$ rather than integrating over the full $\pi(\ell)$ distribution, i.e.
$$
\int \pi(\ell) \mathcal{N}\left(\hat{\mathbf{F}}g | \ell\hat{\mathbf{F}}_h, \mathbf{C}{g}+\ell^2\mathbf{C}{h} \right)\,d\ell \approx \pi(\ell{\rm ML}) \mathcal{L}(g|h, \ell_{\rm ML})
$$
Step10: As expected, the secondary solutions seen in our grid-based likelihoods are suppressed by our prior, which indicates many of these solutions are distinctly unphysical (at least given the original assumptions used when constructing our mock).
In addition, the BPZ posterior computed over our grid of models agrees quite well with the noiseless magnitude-based likelihoods computed over our noiseless samples (i.e. our labeled "training" data). This demonstrates that an utilizing an unbiased, representative training set instead of a grid of models inherently gives access to complex priors that otherwise have to be modeled analytically. In other words, we can take $P(h) = 1$ for all $h \in \mathbf{h}$ since the distribution of our labeled photometric samples probes the underlying $P(z, t, m)$ distribution.
In practice, however, we do not often have access to a fully representative training sample, and often must derive an estimate of $P(\mathbf{h})$ through other means. We will return to this point later.
Population Tests
We now want to see how things look on a larger sample of objects.
Step11: Sidenote
Step12: Note that we've used asinh magnitudes (i.e. "Luptitudes"; Lupton et al. 1999) rather than $\log_{10}$ magnitudes in order to incorporate data with negative measured fluxes.
Step13: Note that, by default, all KDE options implemented in frankenz use some type of thresholding/clipping to avoid including portions of the PDFs with negligible weight and objects with negligible contributions to the overall stacked PDF. The default option is weight thresholding, where objects with $w < f_\min w_\max$ are excluded (with $f_\min = 10^{-3}$ by default). An alternative option is CDF thresholding, where objects that make up the $1 - c_\min$ portion of the sorted CDF are excluded (with $c_\min = 2 \times 10^{-4}$ by default). See the documentation for more details.
Redshift Distribution
Let's now compute our effective $N(z|\mathbf{g})$.
Step14: Comparison 1
Step15: To fit these objects, we will take advantage of the BruteForce object available through frankenz's fitting module.
Step16: We'll start by fitting our model grid and generating posterior and likelihood-weighted redshift predictions.
Step17: Now we'll generate predictions using our training (labeled) data. While we passed an explicit log-posterior earlier, all classes implemented in fitting default to using the logprob function from frankenz.pdf (which is just a thin wrapper for loglike that returns quantities in the proper format).
Step18: We see that the population redshift distribution $N(z|\mathbf{g})$ computed from our noisy fluxes is very close to that computed by the (approximate) BPZ posterior (which is "correct" by construction). These both differ markedly from the color-based likelihoods computed over our noiseless grid, demonstrating the impact of the prior for data observed at moderate/low signal-to-noise (S/N).
Comparison 2 | Python Code:
from __future__ import print_function, division
import sys
import pickle
import numpy as np
import scipy
import matplotlib
from matplotlib import pyplot as plt
from six.moves import range
# import frankenz code
import frankenz as fz
# plot in-line within the notebook
%matplotlib inline
np.random.seed(83481)
# re-defining plotting defaults
from matplotlib import rcParams
rcParams.update({'xtick.major.pad': '7.0'})
rcParams.update({'xtick.major.size': '7.5'})
rcParams.update({'xtick.major.width': '1.5'})
rcParams.update({'xtick.minor.pad': '7.0'})
rcParams.update({'xtick.minor.size': '3.5'})
rcParams.update({'xtick.minor.width': '1.0'})
rcParams.update({'ytick.major.pad': '7.0'})
rcParams.update({'ytick.major.size': '7.5'})
rcParams.update({'ytick.major.width': '1.5'})
rcParams.update({'ytick.minor.pad': '7.0'})
rcParams.update({'ytick.minor.size': '3.5'})
rcParams.update({'ytick.minor.width': '1.0'})
rcParams.update({'axes.titlepad': '15.0'})
rcParams.update({'font.size': 30})
Explanation: Photometric Inference
This notebook outlines the basics of how to conduct basic redshift inference (i.e. a set of intrinsic labels) using photometry (i.e. a set of observed features).
Setup
End of explanation
survey = pickle.load(open('../data/mock_sdss_cww_bpz.pkl', 'rb')) # load data
types = survey.data['types'] # type flag
templates = survey.data['templates'] # template ID
redshifts = survey.data['redshifts'] # redshift
mags = survey.data['refmags'] # magnitude (reference)
phot_obs = survey.data['phot_obs'] # observed photometry
phot_err = survey.data['phot_err'] # photometry error
phot_true = survey.data['phot_true'] # true photometry
Nobs = len(types)
Explanation: Data
For our proof-of-concept tests, we will use the mock SDSS data we previously generated.
End of explanation
# plotting magnitude prior
plt.figure(figsize=(14, 4))
depths = np.array([f['depth_mag5sig'] for f in survey.filters])
mdepth = depths[survey.ref_filter]
mhigh = mdepth + 2.5 * np.log10(2)
mgrid = np.arange(14., mhigh + 0.01, 0.01)
plt.plot(mgrid, survey.pm(mgrid, mdepth), lw=5, color='navy')
plt.axvline(mdepth, ls='--', lw=5, color='black')
plt.xlabel(survey.filters[survey.ref_filter]['name'] + ' (mag)')
plt.xlim([14., mhigh])
plt.ylabel('P(mag)')
plt.ylim([0., None])
plt.yticks([])
plt.tight_layout()
# plotting prior
mgrid_sub = mgrid[::20]
Nmag = len(mgrid_sub)
zgrid = np.linspace(0., 4., 1000)
pgal_colors = plt.get_cmap('Reds')(np.linspace(0, 1, Nmag)) # PGAL colors
sgal_colors = plt.get_cmap('Purples')(np.linspace(0, 1, Nmag)) # SGAL colors
sb_colors = plt.get_cmap('Blues')(np.linspace(0, 1, Nmag)) # SB colors
plt.figure(figsize=(14, 12))
for i, color in zip(range(survey.NTYPE), [pgal_colors, sgal_colors, sb_colors]):
plt.subplot(3,1,i+1)
for j, c in zip(mgrid_sub, color):
pztm = [survey.pztm(z, i, j) for z in zgrid]
plt.plot(zgrid, pztm, lw=3, color=c, alpha=0.6)
plt.xlabel('Redshift')
plt.xlim([0, 4])
plt.ylabel('P({0}|mag)'.format(survey.TYPES[i]), fontsize=24)
plt.ylim([0., None])
plt.yticks([])
plt.tight_layout()
# plotting templates
tcolors = plt.get_cmap('viridis_r')(np.linspace(0., 1., survey.NTEMPLATE)) # template colors
xlow = min([min(f['wavelength']) for f in survey.filters]) # lower bound
xhigh = max([max(f['wavelength']) for f in survey.filters]) # upper bound
plt.figure(figsize=(14, 6))
for t, c in zip(survey.templates, tcolors):
wave, fnu, name = t['wavelength'], t['fnu'], t['name']
sel = (wave > xlow) & (wave < xhigh)
plt.semilogy(wave[sel], fnu[sel], lw=3, color=c,
label=name, alpha=0.7)
plt.xlim([xlow, xhigh])
plt.xticks(np.arange(3000., 11000.+1., 2000.))
plt.xlabel(r'Wavelength ($\AA$)')
plt.ylabel(r'$F_{\nu}$ (normalized)')
plt.legend(ncol=int(survey.NTEMPLATE/6 + 1), fontsize=13, loc=4)
plt.tight_layout()
Explanation: Inference with Noisy Redshifts
For every observed galaxy $g \in \mathbf{g}$ out of $N_\mathbf{g}$ galaxies, let's assume we have an associated noisy redshift estimate $\hat{z}_g$ with PDF $P(\hat{z}_g | z)$. We are interested in constructing an estimate for the population redshift distribution $N(z|\mathbf{g})$ by projecting our results onto a relevant (possibly noisy) redshift basis $\lbrace \dots, P(\hat{z}_h|z) \equiv K(z|\hat{z}_h), \dots \rbrace$ indexed by $h \in \mathbf{h}$ with $N_{\mathbf{h}}$ elements. The use of $K(\cdot|\cdot)$ instead of $P(\cdot|\cdot)$ here is used to suggest the use of an underlying redshift kernel. We will return to this later.
Abusing notation slightly, we can write our likelihood between $g$ and $h$ as
$$ \mathcal{L}(g|h) \equiv P(\hat{z}_g | \hat{z}_h) = \int P(\hat{z}_g | z) K(z | \hat{z}_h) dz $$
where we have marginalized over the true redshift $z$. Note that the likelihood is (by construction) unnormalized so that $\sum_g \mathcal{L}(g|h) \neq 1$.
Combined with a prior over our basis $P(h)$, we can then write the posterior between $h$ and $g$ using Bayes Theorem as
$$ P(h|g) = \frac{\mathcal{L}(g|h)\pi(h)}{\mathcal{Z}_g}
= \frac{\mathcal{L}(g|h)\pi(h)}{\sum_h \mathcal{L}(g|h) \pi(h)} $$
where $\pi(h)$ is the prior and $\mathcal{Z}_g$ is the evidence (i.e. marginal likelihood) of $g$.
We are interested in the number density of observed galaxies as a function of redshift, $N(z|\mathbf{g})$. We can define this as a weighted sum over our redshift basis
$$ N(z|\mathbf{g}) = \sum_h w_h(\mathbf{g}) \, K(z|\hat{z}_h) $$
where $w_h(\mathbf{g})$ are the associated weights. For now, we will take the ansatz that $w_h(\mathbf{g}) = \sum_g P(h|g)$, i.e. that we can estimate $N(z|\mathbf{g})$ by stacking all our galaxy PDFs. This isn't quite correct but is sufficient for our purposes here; we will illustrate how to derive these weights properly in a later notebook.
Inference with Noisy Photometry
Here, we want to do this same exercise over our set of observed $N_{\mathbf{b}}$-dimensional features $\mathbf{F}$ with PDF $P(\hat{\mathbf{F}}|g)$. The only difference from the case above is that we are dealing with observables $P(\hat{\mathbf{F}}|h)$ rather than kernels. Applying Bayes Theorem and emulating our previous example gives us
$$ \mathcal{L}(g|h) \equiv P(\mathbf{F}_g| \mathbf{F}_h)
= \int P(\hat{\mathbf{F}}_g | \mathbf{F}) P(\mathbf{F} | \hat{\mathbf{F}}_h) d\mathbf{F}
= \frac{\int P(\hat{\mathbf{F}}_g | \mathbf{F}) P(\hat{\mathbf{F}}_h | \mathbf{F}) \pi(\mathbf{F}) d\mathbf{F}}{\int P(\hat{\mathbf{F}}_h | \mathbf{F}) \pi(\mathbf{F}) d\mathbf{F}}$$
where we have now introduced $\pi(\mathbf{F})$ to be a $p$-dimensional prior over the true features. For our purposes, we will assume this set of features corresponds to a set of observed flux densities $\hat{F}_{i,b}$ in a set of $N_{\mathbf{b}}$ photometric bands indexed by $b \in \mathbf{b}$.
In practice, $\mathbf{g}$ constitutes a set of unlabeled objects with unknown properties while $\mathbf{h}$ is a set of labeled objects with known properties. Labeled objects might constitute a particular "training set" (in machine learning-based applications) or a set of models (in template fitting-based applications).
We are interested in inferring the redshift PDF $P(z|g)$ for our observed object $g$ based on its observed photometry $\hat{\mathbf{F}}_g$. Given our labeled objects $\mathbf{h}$ with corresponding redshift kernels $K(z|h)$, this is just
$$
P(z|g) = \sum_h K(z|h)P(h|g) = \frac{\sum_h K(z|h)\mathcal{L}(g|h)\pi(h)}{\sum_h \mathcal{L}(g|h)\pi(h)}
$$
which corresponds to a posterior-weighted mixture of the $K(z|h)$ redshift kernels.
The "Big Data" Approximation
It is important to note that we've made a pretty big assumption here: that we can reduce a continuous process over $\mathbf{F}$ to a discrete set of comparisons over our training data $\mathbf{h}$. This choice constitutes a "Big Data" approximation that necessarily introduces some (Poisson) noise into our estimates, and is designed to take advantage of datasets where many ($\gtrsim 10^4$ or so) training objects are available such that our parameter space is (relatively) densely sampled. We will come back to this assumption later.
Our Prior
In this particular case, our prior $P(h)=P(z_h,t_h,m_h)$ is defined over a series of models parameterized by magnitude, type, and redshift as described in the Mock Data notebook. These are saved within our original survey object and briefly shown below.
End of explanation
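# A hedged sketch (illustrative names only) of the stacking ansatz described above:
# given likelihoods `like[g, h]`, a prior `prior[h]`, and redshift kernels evaluated on a
# grid `kernels[h, z]`, the stacked N(z) is a posterior-weighted sum of the kernels.
import numpy as np
def stacked_nz(like, prior, kernels):
    post = like * prior[None, :]              # unnormalized posteriors P(h|g)
    post /= post.sum(axis=1, keepdims=True)   # normalize over h for each object g
    return post.sum(axis=0).dot(kernels)      # sum_g sum_h P(h|g) K(z|h)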
# sample good example object
idx = np.random.choice(np.arange(Nobs)[(mags < 22.5) & (mags > 22)])
# compute loglikelihoods (noiseless)
ll, nb, chisq = fz.pdf.loglike(phot_obs[idx], phot_err[idx],
np.ones(survey.NFILTER),
phot_true, phot_err,
np.ones_like(phot_true),
free_scale=False, ignore_model_err=True,
dim_prior=False)
# compute loglikelihoods (noisy)
ptemp = np.random.normal(phot_true, phot_err) # re-jitter to avoid exact duplicates
ll2, nb2, chisq2 = fz.pdf.loglike(phot_obs[idx], phot_err[idx],
np.ones(survey.NFILTER),
ptemp, phot_err,
np.ones_like(phot_true),
free_scale=False, ignore_model_err=False,
dim_prior=False)
Explanation: Photometric Likelihoods
For most galaxies, we can take $P(\hat{\mathbf{F}}_i|\mathbf{F})$ to be a multivariate Normal (i.e. Gaussian) distribution such that
$$
P(\hat{\mathbf{F}}_i|\mathbf{F}) = \mathcal{N}(\hat{\mathbf{F}}_i|\mathbf{F},\mathbf{C}_i)
\equiv \frac{\exp\left[-\frac{1}{2}||\hat{\mathbf{F}}_i-\mathbf{F}||_{\mathbf{C}_i}^2\right]}{|2\pi\mathbf{C}_i|^{1/2}}
$$
where
$$
||\hat{\mathbf{F}}_i-\mathbf{F}||_{\mathbf{C}_i}^2 \equiv (\hat{\mathbf{F}}_i-\mathbf{F})^{\rm T}\mathbf{C}_i^{-1}(\hat{\mathbf{F}}_i-\mathbf{F})
$$
is the squared Mahalanobis distance between $\hat{\mathbf{F}}_i$ and $\mathbf{F}$ given covariance matrix $\mathbf{C}_i$ (i.e. the photometric errors), ${\rm T}$ is the transpose operator, and $|\mathbf{C}_i|$ is the determinant of $\mathbf{C}_i$.
While we will use matrix notation for compactness, in practice we will assume all our covariances are diagonal (i.e. the errors are independent) such that
$$
||\hat{\mathbf{F}}_g-\mathbf{F}||_{\mathbf{C}_g}^2 = \sum_{b} \frac{(\hat{F}_{g,b}-F_b)^2}{\sigma^2_{g,b}}
$$
Likelihood: Magnitudes (Scale-dependent)
We first look at the simplest case: a direct observational comparison over $\mathbf{F}$ (i.e. galaxy magnitudes).
The product of two multivariate Normal distributions $\mathcal{N}(\hat{\mathbf{F}}_g|\mathbf{F},\mathbf{C}_g)$ and $\mathcal{N}(\hat{\mathbf{F}}_h|\mathbf{F},\mathbf{C}_h)$ is a scaled multivariate Normal of the form $S_{gh}\,\mathcal{N}(\mathbf{F}_{gh}|\mathbf{F},\mathbf{C}_{gh})$ where
$$
S_{gh} \equiv \mathcal{N}(\hat{\mathbf{F}}_g|\hat{\mathbf{F}}_h, \mathbf{C}_g + \mathbf{C}_h), \quad
\mathbf{F}_{gh} \equiv \mathbf{C}_{gh} \left( \mathbf{C}_g^{-1}\mathbf{F}_g
+ \mathbf{C}_h^{-1}\mathbf{F}_h \right), \quad
\mathbf{C}_{gh} \equiv \left(\mathbf{C}_g^{-1} + \mathbf{C}_h^{-1}\right)^{-1}
$$
If we assume a uniform prior on our flux densities $P(\mathbf{F})=1$, our likelihood then becomes
$$ \mathcal{L}(g|h) = \int P(\hat{\mathbf{F}}_g | \mathbf{F}) P(\hat{\mathbf{F}}_h | \mathbf{F}) d\mathbf{F}
= S_{gh} \int \mathcal{N}(\mathbf{F}_{gh}|\mathbf{F},\mathbf{C}_{gh}) d\mathbf{F}
= S_{gh} $$
The log-likelihood can then be written as
\begin{equation}
\boxed{
-2\ln \mathcal{L}(g|h) = ||\mathbf{F}_g - \mathbf{F}_h||_{\mathbf{C}_{g} + \mathbf{C}_{h}}^2 + \ln|\mathbf{C}_{g} + \mathbf{C}_{h}| + N_\mathbf{b}\ln(2\pi)
}
\end{equation}
Let's compute an example PDF using frankenz for objects in our mock catalog. Since these are sampled from the prior, we've actually introduced our prior implicitly via the distribution of objects in our labeled sample. As a result, computing likelihoods directly in magnitudes actually probes (with some noise) the full posterior distribution (as defined by BPZ).
We will compare two versions of our results:
- Noiseless case: computed using the "true" underlying photometry underlying each training object.
- Noisy case: computed using our "observed" mock photometry.
End of explanation
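# A hedged plain-numpy sketch of the diagonal-covariance log-likelihood boxed above
# (illustrative names; frankenz's own implementation is `fz.pdf.loglike`).
def loglike_mag(flux_g, err_g, flux_h, err_h):
    var = err_g ** 2 + err_h ** 2
    chi2 = np.sum((flux_g - flux_h) ** 2 / var)
    return -0.5 * (chi2 + np.sum(np.log(var)) + len(flux_g) * np.log(2. * np.pi))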
# define plotting functions
try:
from scipy.special import logsumexp
except ImportError:
from scipy.misc import logsumexp
def plot_flux(phot_obs, phot_err, phot, logl,
ocolor='black', mcolor='blue', thresh=1e-1):
    """Plot SEDs."""
wave = np.array([f['lambda_eff'] for f in survey.filters])
wt = np.exp(logl)
wtmax = wt.max()
sel = np.arange(len(phot))[wt > thresh * wtmax]
[plt.plot(wave, phot[i], alpha=wt[i]/wtmax*0.4, lw=3,
zorder=1, color=mcolor) for i in sel]
plt.errorbar(wave, phot_obs, yerr=phot_err, lw=3, color=ocolor, zorder=2)
plt.xlabel(r'Wavelength ($\AA$)')
plt.xlim([wave.min() - 100, wave.max() + 100])
plt.ylim([(phot_obs - phot_err).min() * 0.9, (phot_obs + phot_err).max() * 1.1])
plt.ylabel(r'$F_\nu$')
plt.yticks(fontsize=24)
plt.tight_layout()
def plot_redshift(redshifts, logl, ztrue=None, color='yellow',
tcolor='red'):
    """Plot redshift PDF."""
n, _, _ = plt.hist(redshifts, bins=zgrid, weights=np.exp(logl),
histtype='stepfilled', edgecolor='black',
lw=3, color=color, alpha=0.8)
if ztrue is not None:
plt.vlines(ztrue, 0., n.max() * 1.1, color=tcolor, linestyles='--', lw=2)
plt.xlabel('Redshift')
plt.ylabel('PDF')
plt.xlim([zgrid[0], zgrid[-1]])
plt.ylim([0., n.max() * 1.1])
plt.yticks([])
plt.tight_layout()
def plot_zt(redshifts, templates, logl, ztrue=None, ttrue=None,
cmap='viridis', tcolor='red', thresh=1e-2):
    """Plot joint template-redshift PDF."""
lsum = logsumexp(logl)
wt = np.exp(logl - lsum)
plt.hist2d(redshifts, templates, bins=[zgrid, tgrid],
weights=wt,
cmin=thresh*max(wt),
cmap=cmap)
if ttrue is not None:
plt.hlines(ttrue, zgrid.min(), zgrid.max(),
color=tcolor, lw=2, linestyles='--')
if ztrue is not None:
plt.vlines(ztrue, tgrid.min(), tgrid.max(),
color=tcolor, lw=2, linestyles='--')
plt.xlabel('Redshift')
plt.ylabel('Template')
plt.xlim([zgrid[0], zgrid[-1]])
plt.ylim([tgrid[0], tgrid[-1]])
plt.tight_layout()
# plot flux distribution
plt.figure(figsize=(16, 14))
plt.subplot(3,2,1)
plot_flux(phot_obs[idx], phot_err[idx], phot_true, ll,
ocolor='black', mcolor='blue', thresh=0.5)
plt.title('Noiseless (mag)')
plt.subplot(3,2,2)
plot_flux(phot_obs[idx], phot_err[idx], ptemp, ll2,
ocolor='black', mcolor='red', thresh=0.5)
plt.title('Noisy (mag)');
# plot redshift distribution
zgrid = np.arange(0., 4. + 0.1, 0.05)
plt.subplot(3,2,3)
plot_redshift(redshifts, ll, ztrue=redshifts[idx])
plt.xticks(zgrid[::20])
plt.subplot(3,2,4)
plot_redshift(redshifts, ll2, ztrue=redshifts[idx])
plt.xticks(zgrid[::20])
# plot redshift-type joint distribution
tgrid = np.arange(survey.NTEMPLATE + 1) - 0.5
plt.subplot(3,2,5)
plot_zt(redshifts, templates, ll,
ztrue=redshifts[idx], ttrue=templates[idx])
plt.xticks(zgrid[::20])
plt.yticks(tgrid[1:] - 0.5)
plt.subplot(3,2,6)
plot_zt(redshifts, templates, ll2,
ztrue=redshifts[idx], ttrue=templates[idx])
plt.xticks(zgrid[::20])
plt.yticks(tgrid[1:] - 0.5);
Explanation: Note that the log-likelihood function defined in frankenz contains a number of additional options that have been specified above. These will be discussed later.
End of explanation
# compute color loglikelihoods (noiseless)
llc, nbc, chisq, s, serr = fz.pdf.loglike(phot_obs[idx], phot_err[idx],
np.ones(survey.NFILTER),
phot_true, phot_err,
np.ones_like(phot_true),
dim_prior=False, free_scale=True,
ignore_model_err=True, return_scale=True)
# compute color loglikelihoods (noisy)
llc2, nbc2, chisq2, s2, serr2 = fz.pdf.loglike(phot_obs[idx], phot_err[idx],
np.ones(survey.NFILTER),
ptemp, phot_err,
np.ones_like(phot_true),
dim_prior=False, free_scale=True,
ignore_model_err=False, return_scale=True)
# plot flux distribution
plt.figure(figsize=(16, 14))
plt.subplot(3,2,1)
plot_flux(phot_obs[idx], phot_err[idx], s[:, None] * phot_true,
llc, ocolor='black', mcolor='blue', thresh=0.5)
plt.title('Noiseless (color)')
plt.subplot(3,2,2)
plot_flux(phot_obs[idx], phot_err[idx], s2[:, None] * ptemp,
llc2, ocolor='black', mcolor='red', thresh=0.5)
plt.title('Noisy (color)');
# plot redshift distribution
plt.subplot(3,2,3)
plot_redshift(redshifts, llc, ztrue=redshifts[idx])
plt.xticks(zgrid[::20])
plt.subplot(3,2,4)
plot_redshift(redshifts, llc2, ztrue=redshifts[idx])
plt.xticks(zgrid[::20])
# plot redshift-type joint distribution
plt.subplot(3,2,5)
plot_zt(redshifts, templates, llc, thresh=1e-2,
ztrue=redshifts[idx], ttrue=templates[idx])
plt.xticks(zgrid[::20])
plt.yticks(tgrid[1:] - 0.5)
plt.subplot(3,2,6)
plot_zt(redshifts, templates, llc2, thresh=1e-2,
ztrue=redshifts[idx], ttrue=templates[idx])
plt.xticks(zgrid[::20])
plt.yticks(tgrid[1:] - 0.5);
Explanation: As expected, the PDF computed from our noisy photometry is broader than the noiseless case.
Likelihood: Colors
We can also define our likelihoods in terms of flux ratios (i.e. galaxy "colors") by introducing a scaling parameter $\ell$. Assuming $P(\mathbf{F},\ell) = 1$ is uniform this takes the form
$$
\mathcal{L}_\ell(g|h) = \int \mathcal{N}\left(\hat{\mathbf{F}}_g | \ell\hat{\mathbf{F}}_h, \mathbf{C}_{g}+\ell^2\mathbf{C}_{h} \right)\,d\ell
$$
Although this integral does not have an analytic solution, we can numerically solve for the maximum-likelihood result $\mathcal{L}(g|h, \ell_{\rm ML})$. See Leistedt & Hogg (2017) for some additional discussion related to this integral.
If we assume, however, that $\mathbf{C}_h = \mathbf{0}$ (i.e. no model errors), then there is an analytic solution with log-likelihood
\begin{equation}
\boxed{
-2\ln \mathcal{L}_\ell(g|h) = ||\hat{\mathbf{F}}_g - \ell_{\rm ML}\hat{\mathbf{F}}_h||_{\mathbf{C}_{g}}^2 + N_\mathbf{b}\ln(2\pi) + \ln|\mathbf{C}_{g}|
}
\end{equation}
where
$$
\ell_{\rm ML} = \frac{\hat{\mathbf{F}}_g^{\rm T} \mathbf{C}_g^{-1} \hat{\mathbf{F}}_h}
{\hat{\mathbf{F}}_h^{\rm T} \mathbf{C}_g^{-1} \hat{\mathbf{F}}_h}
$$
We can now repeat the above exercise using our color-based likelihoods. As above, we compare two versions:
- Noiseless case: computed using the "true" underlying photometry underlying each training object.
- Noisy case: computed using our "observed" mock photometry.
End of explanation
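# A hedged sketch of the maximum-likelihood scale factor above for a diagonal covariance
# (illustrative names; in frankenz this corresponds to the `free_scale=True` option).
def scale_ml(flux_g, err_g, flux_h):
    ivar = 1. / err_g ** 2
    return np.sum(flux_g * flux_h * ivar) / np.sum(flux_h ** 2 * ivar)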
# compute color loglikelihoods over grid
mphot = survey.models['data'].reshape(-1, survey.NFILTER)
merr = np.zeros_like(mphot)
mmask = np.ones_like(mphot)
llm, nbm, chisqm, sm, smerr = fz.pdf.loglike(phot_obs[idx], phot_err[idx],
np.ones(survey.NFILTER),
mphot, merr, mmask,
dim_prior=False, free_scale=True,
ignore_model_err=True, return_scale=True)
# compute prior
mzgrid = survey.models['zgrid']
prior = np.array([fz.priors.bpz_pz_tm(mzgrid, t, mags[idx])
for t in survey.TTYPE]).T.flatten()
# plot flux distribution
plt.figure(figsize=(24, 15))
plt.subplot(3,3,1)
plot_flux(phot_obs[idx], phot_err[idx], sm[:, None] * mphot,
llm, ocolor='black', mcolor='blue', thresh=0.5)
plt.title('Likelihood (grid)')
plt.subplot(3,3,2)
plot_flux(phot_obs[idx], phot_err[idx], sm[:, None] * mphot,
llm + np.log(prior).flatten(),
ocolor='black', mcolor='red', thresh=0.5)
plt.title('Posterior (grid)')
plt.subplot(3,3,3)
plot_flux(phot_obs[idx], phot_err[idx], phot_true, ll,
ocolor='black', mcolor='blue', thresh=0.5)
plt.title('Mag Likelihood\n(noiseless samples)')
# plot redshift distribution
mredshifts = np.array([mzgrid for i in range(survey.NTEMPLATE)]).T.flatten()
plt.subplot(3,3,4)
plot_redshift(mredshifts, llm, ztrue=redshifts[idx])
plt.xticks(zgrid[::20])
plt.subplot(3,3,5)
plot_redshift(mredshifts, llm + np.log(prior).flatten(),
ztrue=redshifts[idx])
plt.xticks(zgrid[::20])
plt.subplot(3,3,6)
plot_redshift(redshifts, ll, ztrue=redshifts[idx])
plt.xticks(zgrid[::20])
# plot redshift-type joint distribution
mtemplates = np.array([np.arange(survey.NTEMPLATE)
for i in range(len(mzgrid))]).flatten()
plt.subplot(3,3,7)
plot_zt(mredshifts, mtemplates, llm, thresh=1e-2,
ztrue=redshifts[idx], ttrue=templates[idx])
plt.xticks(zgrid[::20])
plt.yticks(tgrid[1:] - 0.5)
plt.subplot(3,3,8)
plot_zt(mredshifts, mtemplates, llm + np.log(prior).flatten(),
thresh=1e-2, ztrue=redshifts[idx], ttrue=templates[idx])
plt.xticks(zgrid[::20])
plt.yticks(tgrid[1:] - 0.5)
plt.subplot(3,3,9)
plot_zt(redshifts, templates, ll,
ztrue=redshifts[idx], ttrue=templates[idx])
plt.xticks(zgrid[::20])
plt.yticks(tgrid[1:] - 0.5);
Explanation: Finally, it's useful to compare the case where we compute our posteriors directly from our underlying model grid and apply our priors directly. By construction, this should agree with the "true" posterior distribution up to the approximation that for a given template $t$ and redshift $z$ we can take the model to have a magnitude based on $\ell_{\rm ML}$ rather than integrating over the full $\pi(\ell)$ distribution, i.e.
$$
\int \pi(\ell)\, \mathcal{N}\left(\hat{\mathbf{F}}_g \,|\, \ell\hat{\mathbf{F}}_h, \mathbf{C}_{g}+\ell^2\mathbf{C}_{h} \right)\,d\ell \approx \pi(\ell_{\rm ML})\, \mathcal{L}(g|h, \ell_{\rm ML})
$$
End of explanation
sel = (phot_obs / phot_err)[:, survey.ref_filter] > 5. # S/N > 5 cut
Nsel = sel.sum()
Ntrain, Ntest = 60000, 5000
train_sel = np.arange(Nobs)[sel][:Ntrain] # training set
test_sel = np.arange(Nobs)[sel][Ntrain:Ntrain+Ntest] # testing set
Nmodel = len(mphot)
print('Number of observed galaxies (all):', Nobs)
print('Number of observed galaxies (selected):', Nsel)
print('Number of models:', Nmodel)
print('Number of training galaxies:', Ntrain)
print('Number of testing galaxies:', Ntest)
Explanation: As expected, the secondary solutions seen in our grid-based likelihoods are suppressed by our prior, which indicates many of these solutions are distinctly unphysical (at least given the original assumptions used when constructing our mock).
In addition, the BPZ posterior computed over our grid of models agrees quite well with the noiseless magnitude-based likelihoods computed over our noiseless samples (i.e. our labeled "training" data). This demonstrates that utilizing an unbiased, representative training set instead of a grid of models inherently gives access to complex priors that otherwise have to be modeled analytically. In other words, we can take $P(h) = 1$ for all $h \in \mathbf{h}$ since the distribution of our labeled photometric samples probes the underlying $P(z, t, m)$ distribution.
In practice, however, we do not often have access to a fully representative training sample, and often must derive an estimate of $P(\mathbf{h})$ through other means. We will return to this point later.
Population Tests
We now want to see how things look on a larger sample of objects.
End of explanation
# initialize asinh magnitudes ("Luptitudes")
flux_zeropoint = 10**(-0.4 * -23.9) # AB magnitude zeropoint
fdepths = np.array([f['depth_flux1sig'] for f in survey.filters])
mag, magerr = fz.pdf.luptitude(phot_obs, phot_err, skynoise=fdepths,
zeropoints=flux_zeropoint)
# initialize magnitude dictionary
mdict = fz.pdf.PDFDict(pdf_grid=np.arange(-20., 60., 5e-3),
sigma_grid=np.linspace(0.01, 5., 500))
Explanation: Sidenote: KDE in frankenz
One of the ways frankenz differs from other photometric redshift (photo-z) codes is that it tries to avoid discretizing quantities whenever and wherever possible. Since redshifts, flux densities, and many other photometric quantities are continuous with smooth PDFs, we attempt to work directly in this continuous space whenever possible instead of resorting to binning.
We accomplish this through kernel density estimation (KDE). Since almost all photometric observable PDFs are Gaussian, by connecting each observable with an associated Gaussian kernel density we can (in theory) construct a density estimate at any location in parameter space by evaluating the probability density of all kernels at that location.
In practice, such a brute-force approach is prohibitively computationally expensive. Instead, we approximate the contribution from any particular object by:
evaluating only a small subset of "nearby" kernels,
evaluating the overall kernel density estimates over a discrete basis, and
evaluating only the "central regions" of our kernels.
This is implemented within the gauss_kde function in frankenz's pdf module.
In addition, we can also use a stationary pre-computed dictionary of Gaussian kernels to discretize our operations. This avoids repetitive, expensive computations at the (very small) cost of increased memory overhead and errors from imposing a minimum resolution. This is implemented via the PDFDict class and the gauss_kde_dict function. We will use the option whenever possible going forward.
Magnitude Distribution
Let's use this functionality to visualize the stacked magnitude distribution of our population.
End of explanation
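# --- Illustrative aside (not part of the original notebook) ---
# The KDE idea described above in its brute-force form: every observation
# contributes a Gaussian kernel and the stacked PDF is evaluated on a fixed grid.
# This is a conceptual sketch only; `fz.pdf.gauss_kde_dict` adds kernel
# thresholding and a precomputed kernel dictionary to make this fast.
import numpy as np
def stacked_gauss_kde(y, y_std, grid):
    pdf = np.zeros_like(grid, dtype=float)
    for mu, sig in zip(y, y_std):
        pdf += np.exp(-0.5 * ((grid - mu) / sig)**2) / (np.sqrt(2.0 * np.pi) * sig)
    return pdf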
# plotting magnitude distribution
msmooth = 0.05
fcolors = plt.get_cmap('viridis')(np.linspace(0,1, survey.NFILTER))
plt.figure(figsize=(20, 10))
for i in range(survey.NFILTER):
plt.subplot(2, int(survey.NFILTER/2)+1, i+1)
# compute pdf (all)
magerr_t = np.sqrt(magerr[:, i]**2 + msmooth**2)
mag_pdf = fz.pdf.gauss_kde_dict(mdict, y=mag[:, i],
y_std=magerr_t)
plt.semilogy(mdict.grid, mag_pdf, lw=3,
color=fcolors[i])
# compute pdf (selected)
magsel_pdf = fz.pdf.gauss_kde_dict(mdict, y=mag[sel, i],
y_std=magerr_t[sel])
plt.semilogy(mdict.grid, magsel_pdf, lw=3,
color=fcolors[i], ls='--')
# prettify
plt.xlim([16, 30])
plt.ylim([1., mag_pdf.max() * 1.2])
plt.xticks(np.arange(16, 30, 4))
plt.xlabel(survey.filters[i]['name'] + '-band Luptitude')
plt.ylabel('log(Counts)')
plt.tight_layout()
Explanation: Note that we've used asinh magnitudes (i.e. "Luptitudes"; Lupton et al. 1999) rather than $\log_{10}$ magnitudes in order to incorporate data with negative measured fluxes.
End of explanation
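# --- Illustrative aside (not part of the original notebook) ---
# A generic asinh ("Luptitude") magnitude following Lupton, Gunn & Szalay (1999),
# not the frankenz implementation: b is the dimensionless softening parameter
# (commonly tied to the ~1-sigma sky noise) and f0 is the flux zeropoint.
import numpy as np
def asinh_mag(flux, b, f0=1.0):
    return -2.5 / np.log(10.0) * (np.arcsinh((flux / f0) / (2.0 * b)) + np.log(b))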
# initialize redshift dictionary
rdict = fz.pdf.PDFDict(pdf_grid=np.arange(0., 7.+1e-5, 0.01),
sigma_grid=np.linspace(0.005, 2., 500))
# plotting redshift distribution
plt.figure(figsize=(14, 6))
rsmooth = 0.05
# all
zerr_t = np.ones_like(redshifts) * rsmooth
z_pdf = fz.pdf.gauss_kde_dict(rdict, y=redshifts,
y_std=zerr_t)
plt.plot(rdict.grid, z_pdf / z_pdf.sum(), lw=5, color='black')
plt.fill_between(rdict.grid, z_pdf / z_pdf.sum(), color='gray',
alpha=0.4, label='Underlying')
# selected
zsel_pdf = fz.pdf.gauss_kde_dict(rdict, y=redshifts[sel],
y_std=zerr_t[sel])
plt.plot(rdict.grid, zsel_pdf / zsel_pdf.sum(), lw=5, color='navy')
plt.fill_between(rdict.grid, zsel_pdf / zsel_pdf.sum(),
color='blue', alpha=0.4, label='Observed')
# prettify
plt.xlim([0, 4])
plt.ylim([0, None])
plt.yticks([])
plt.legend(fontsize=20)
plt.xlabel('Redshift')
plt.ylabel('$N(z|\mathbf{g})$')
plt.tight_layout()
Explanation: Note that, by default, all KDE options implemented in frankenz use some type of thresholding/clipping to avoid including portions of the PDFs with negligible weight and objects with negligible contributions to the overall stacked PDF. The default option is weight thresholding, where objects with $w < f_\min w_\max$ are excluded (with $f_\min = 10^{-3}$ by default). An alternative option is CDF thresholding, where objects that make up the $1 - c_\min$ portion of the sorted CDF are excluded (with $c_\min = 2 \times 10^{-4}$ by default). See the documentation for more details.
Redshift Distribution
Let's now compute our effective $N(z|\mathbf{g})$.
End of explanation
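# --- Illustrative aside (not part of the original notebook) ---
# The default weight-thresholding rule described above, sketched on a plain array
# of kernel weights: objects with w < f_min * w_max are dropped from the stack
# (f_min = 1e-3 mirrors the stated default).
import numpy as np
w = np.random.rand(1000)        # hypothetical per-object kernel weights
keep = w >= 1e-3 * w.max()      # weight thresholding
print(keep.sum(), 'of', w.size, 'objects kept')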
# initialize datasets
phot_train, phot_test = phot_obs[train_sel], phot_obs[test_sel]
err_train, err_test = phot_err[train_sel], phot_err[test_sel]
mask_train, mask_test = np.ones_like(phot_train), np.ones_like(phot_test)
Explanation: Comparison 1: Mag (samples) vs Color (grid)
As a first proof of concept, we just want to check whether the population distribution inferred from our samples (using magnitudes) agrees with that inferred from our underlying model grid (using colors).
End of explanation
from frankenz.fitting import BruteForce
# initialize BruteForce objects
model_BF = BruteForce(mphot, merr, mmask) # model grid
train_BF = BruteForce(phot_train, err_train, mask_train) # training data
# define log(posterior) function
def lprob_bpz(x, xe, xm, ys, yes, yms,
mzgrid=None, ttypes=None, ref=None):
results = fz.pdf.loglike(x, xe, xm, ys, yes, yms,
ignore_model_err=True,
free_scale=True)
lnlike, ndim, chi2 = results
mag = -2.5 * np.log10(x[ref]) + 23.9
prior = np.array([fz.priors.bpz_pz_tm(mzgrid, t, mag)
for t in ttypes]).T.flatten()
lnprior = np.log(prior)
return lnprior, lnlike, lnlike + lnprior, ndim, chi2
Explanation: To fit these objects, we will take advantage of the BruteForce object available through frankenz's fitting module.
End of explanation
# fit data
model_BF.fit(phot_test, err_test, mask_test, lprob_func=lprob_bpz,
lprob_args=[mzgrid, survey.TTYPE, survey.ref_filter])
# compute posterior-weighted redshift PDFs
mredshifts = np.array([mzgrid for i in range(survey.NTEMPLATE)]).T.flatten()
pdfs_post = model_BF.predict(mredshifts, np.ones_like(mredshifts) * rsmooth,
label_dict=rdict)
# compute likelihood-weighted redshift PDFs
mredshifts = np.array([mzgrid for i in range(survey.NTEMPLATE)]).T.flatten()
pdfs_like = model_BF.predict(mredshifts, np.ones_like(mredshifts) * rsmooth,
label_dict=rdict, logwt=model_BF.fit_lnlike)
Explanation: We'll start by fitting our model grid and generating posterior and likelihood-weighted redshift predictions.
End of explanation
pdfs_train = train_BF.fit_predict(phot_test, err_test, mask_test,
redshifts[train_sel],
np.ones_like(train_sel) * rsmooth,
label_dict=rdict, save_fits=False)
# true distribution
zpdf0 = fz.pdf.gauss_kde_dict(rdict, y=redshifts[test_sel],
y_std=np.ones_like(test_sel) * rsmooth)
# plotting
plt.figure(figsize=(14, 6))
plt.plot(rdict.grid, zpdf0, lw=5, color='black',
label='Underlying')
plt.plot(rdict.grid, pdfs_like.sum(axis=0),
lw=5, color='gray', alpha=0.7,
label='BPZ Color Likelihood (grid)')
plt.plot(rdict.grid, pdfs_post.sum(axis=0),
lw=5, color='red', alpha=0.6,
label='BPZ Color Posterior (grid)')
plt.plot(rdict.grid, pdfs_train.sum(axis=0),
lw=5, color='blue', alpha=0.6,
label='Mag Likelihood (samples)')
plt.xlim([0., 6.])
plt.ylim([0., None])
plt.yticks([])
plt.legend(fontsize=20)
plt.xlabel('Redshift')
plt.ylabel('$N(z|\mathbf{g})$')
plt.tight_layout()
Explanation: Now we'll generate predictions using our training (labeled) data. While we passed an explicit log-posterior earlier, all classes implemented in fitting default to using the logprob function from frankenz.pdf (which is just a thin wrapper for loglike that returns quantities in the proper format).
End of explanation
pdfs_train_c = train_BF.fit_predict(phot_test, err_test, mask_test,
redshifts[train_sel],
np.ones_like(train_sel) * rsmooth,
lprob_kwargs={'free_scale': True,
'ignore_model_err': True},
label_dict=rdict, save_fits=False)
pdfs_train_cerr = train_BF.fit_predict(phot_test, err_test, mask_test,
redshifts[train_sel],
np.ones_like(train_sel) * rsmooth,
lprob_kwargs={'free_scale': True,
'ignore_model_err': False},
label_dict=rdict, save_fits=False)
# plotting
plt.figure(figsize=(14, 6))
plt.plot(rdict.grid, zpdf0, lw=3, color='black',
label='Underlying')
plt.plot(rdict.grid, pdfs_train.sum(axis=0),
lw=3, color='blue', alpha=0.6,
label='Mag Likelihood (samples; w/ errors)')
plt.plot(rdict.grid, pdfs_train_c.sum(axis=0),
lw=3, color='seagreen', alpha=0.6,
label='Color Likelihood (samples; w/o errors)')
plt.plot(rdict.grid, pdfs_train_cerr.sum(axis=0),
lw=3, color='firebrick', alpha=0.8,
label='Color Likelihood (samples; w/ errors)')
plt.xlim([0., 6.])
plt.ylim([0., None])
plt.yticks([])
plt.legend(fontsize=20)
plt.xlabel('Redshift')
plt.ylabel('$N(z|\mathbf{g})$')
plt.tight_layout()
Explanation: We see that the population redshift distribution $N(z|\mathbf{g})$ computed from our noisy fluxes is very close to that computed by the (approximate) BPZ posterior (which is "correct" by construction). These both differ markedly from the color-based likelihoods computed over our noiseless grid, demonstrating the impact of the prior for data observed at moderate/low signal-to-noise (S/N).
Comparison 2: Mag (samples) vs Color (samples)
Just for completeness, we also show the difference between computing our results using magnitudes (as above) vs color, with and without accounting for observational errors.
End of explanation |
783 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to the MXCuBE jupyter Notebook service!
Press "Shift + Enter" to proceed
Step1: Try to load some hardware objects defined in the xml-qt
Step2: Use dir to see available methods and variables | Python Code:
import os
import sys
cwd = os.getcwd()
print cwd
mxcube_root = cwd[:-4]
print mxcube_root
sys.path.insert(0, mxcube_root)
from HardwareRepository import HardwareRepository
#print "MXCuBE home directory: %s" % cwd
hwr_server = mxcube_root + "/HardwareRepository/configuration/xml-qt"
HardwareRepository.setHardwareRepositoryServer(hwr_server)
hardware_repository = HardwareRepository.HardwareRepository()
hardware_repository.connect()
HardwareRepository.add_hardware_objects_dirs([mxcube_root + "/HardwareObjects"])
Explanation: Welcome to the MXCuBE jupyter Notebook service!
Press "Shift + Enter" to proceed
End of explanation
energy_hwobj = hardware_repository.get_hardware_object("energy-mockup")
attenuators_hwobj = hardware_repository.get_hardware_object("attenuators-mockup")
detector_hwobj = hardware_repository.get_hardware_object("detector-mockup")
mach_info_hwobj = hardware_repository.get_hardware_object("mach-info-mockup")
resolution_hwobj = hardware_repository.get_hardware_object("resolution-mockup")
transmission_hwobj = hardware_repository.get_hardware_object("transmission-mockup")
print energy_hwobj.energy_value
print attenuators_hwobj.value
print resolution_hwobj.currentResolution
Explanation: Try to load some hardware objects defined in the xml-qt:
End of explanation
print dir(energy_hwobj)
energy_hwobj.getChannel("chanTEST")
Explanation: Use dir to see available methods and variables
End of explanation |
784 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Non-Personalized Recommenders
The recommendation problem
Recommenders have been around since at least 1992. Today we see different flavours of recommenders, deployed across different verticals
Step1: The CourseTalk dataset
Step2: Using pd.merge we get it all into one big DataFrame.
Step3: Collaborative filtering
Step4: Now let's filter down to courses that received at least 20 ratings (a completely arbitrary number);
To do this, I group the data by course_id and use size() to get a Series of group sizes for each title
Step5: The index of titles receiving at least 20 ratings can then be used to select rows from mean_ratings above
Step6: By computing the mean rating for each course, we will order with the highest rating listed first.
Step7: To see the top courses among Coursera students, we can sort by the 'Coursera' column in descending order
Step8: Now, let's go further! How about rank the courses with the highest percentage of ratings that are 4 or higher ? % of ratings 4+
Let's start with a simple pivoting example that does not involve any aggregation. We can extract a ratings matrix as follows
Step9: Let's extract only the rating that are 4 or higher.
Step10: Now picking the number of total ratings for each course and the count of ratings 4+ , we can merge them into one DataFrame.
Step11: Let's now go easy. Let's count the number of ratings for each course, and order with the most number of ratings.
Step12: Considering this information we can sort by the most rated ones with highest percentage of 4+ ratings.
Step13: Finally using the formula above that we learned, let's find out what the courses that most often occur wit the popular MOOC An introduction to Interactive Programming with Python by using the method "x + y/ x" . For each course, calculate the percentage of Programming with python raters who also rated that course. Order with the highest percentage first, and voilá we have the top 5 moocs.
Step14: First, let's get only the users that rated the course An Introduction to Interactive Programming in Python
Step15: Now, for all other courses let's filter out only the ratings from users that rated the Python course.
Step16: By applying the division
Step17: Ordering by the score, highest first excepts the first one which contains the course itself. | Python Code:
from IPython.core.display import Image
Image(filename='/Users/chengjun/GitHub/cjc2016/figure/recsys_arch.png')
Explanation: Introduction to Non-Personalized Recommenders
The recommendation problem
Recommenders have been around since at least 1992. Today we see different flavours of recommenders, deployed across different verticals:
Amazon
Netflix
Facebook
Last.fm.
What exactly do they do?
Definitions from the literature
In a typical recommender system people provide recommendations as inputs, which
the system then aggregates and directs to appropriate recipients. -- Resnick
and Varian, 1997
Collaborative filtering simply means that people collaborate to help one
another perform filtering by recording their reactions to documents they read.
-- Goldberg et al, 1992
In its most common formulation, the recommendation problem is reduced to the
problem of estimating ratings for the items that have not been seen by a
user. Intuitively, this estimation is usually based on the ratings given by this
user to other items and on some other information [...] Once we can estimate
ratings for the yet unrated items, we can recommend to the user the item(s) with
the highest estimated rating(s). -- Adomavicius and Tuzhilin, 2005
Driven by computer algorithms, recommenders help consumers
by selecting products they will probably like and might buy
based on their browsing, searches, purchases, and preferences. -- Konstan and Riedl, 2012
Notation
$U$ is the set of users in our domain. Its size is $|U|$.
$I$ is the set of items in our domain. Its size is $|I|$.
$I(u)$ is the set of items that user $u$ has rated.
$-I(u)$ is the complement of $I(u)$ i.e., the set of items not yet seen by user $u$.
$U(i)$ is the set of users that have rated item $i$.
$-U(i)$ is the complement of $U(i)$.
Goal of a recommendation system
$ \newcommand{\argmax}{\mathop{\rm argmax}\nolimits} \forall{u \in U},\; i^* = \argmax_{i \in -I(u)} [S(u,i)] $
Problem statement
The recommendation problem in its most basic form is quite simple to define:
|-------------------+-----+-----+-----+-----+-----|
| user_id, movie_id | m_1 | m_2 | m_3 | m_4 | m_5 |
|-------------------+-----+-----+-----+-----+-----|
| u_1 | ? | ? | 4 | ? | 1 |
|-------------------+-----+-----+-----+-----+-----|
| u_2 | 3 | ? | ? | 2 | 2 |
|-------------------+-----+-----+-----+-----+-----|
| u_3 | 3 | ? | ? | ? | ? |
|-------------------+-----+-----+-----+-----+-----|
| u_4 | ? | 1 | 2 | 1 | 1 |
|-------------------+-----+-----+-----+-----+-----|
| u_5 | ? | ? | ? | ? | ? |
|-------------------+-----+-----+-----+-----+-----|
| u_6 | 2 | ? | 2 | ? | ? |
|-------------------+-----+-----+-----+-----+-----|
| u_7 | ? | ? | ? | ? | ? |
|-------------------+-----+-----+-----+-----+-----|
| u_8 | 3 | 1 | 5 | ? | ? |
|-------------------+-----+-----+-----+-----+-----|
| u_9 | ? | ? | ? | ? | 2 |
|-------------------+-----+-----+-----+-----+-----|
Given a partially filled matrix of ratings ($|U|x|I|$), estimate the missing values.
Challenges
Availability of item metadata
Content-based techniques are limited by the amount of metadata that is available
to describe an item. There are domains in which feature extraction methods are
expensive or time consuming, e.g., processing multimedia data such as graphics,
audio/video streams. In the context of grocery items for example, it's often the
case that item information is only partial or completely missing. Examples
include:
Ingredients
Nutrition facts
Brand
Description
Country of origin
New user problem
A user has to have rated a sufficient number of items before a recommender
system can have a good idea of what their preferences are. In a content-based
system, the aggregation function needs ratings to aggregate.
New item problem
Collaborative filters rely on an item being rated by many users to compute
aggregates of those ratings. Think of this as the exact counterpart of the new
user problem for content-based systems.
Data sparsity
When looking at the more general versions of content-based and collaborative
systems, the success of the recommender system depends on the availability of a
critical mass of user/item interactions. We get a first glance at the data
sparsity problem by quantifying the ratio of existing ratings vs $|U|x|I|$. A
highly sparse matrix of interactions makes it difficult to compute similarities
between users and items. As an example, for a user whose tastes are unusual
compared to the rest of the population, there will not be any other users who
are particularly similar, leading to poor recommendations.
Flow chart: the big picture
End of explanation
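# --- Illustrative aside (not part of the original notebook) ---
# A toy version of the goal stated above: for each user, pick the unseen item with
# the highest estimated score. R holds observed ratings (NaN = unseen) and S holds
# hypothetical score estimates S(u, i); all numbers here are made up.
import numpy as np
R = np.array([[np.nan, 3.0, 4.0, np.nan],
              [2.0, np.nan, np.nan, 1.0]])
S = np.array([[2.5, 3.1, 4.2, 3.9],
              [2.0, 3.7, 1.5, 1.0]])
unseen = np.isnan(R)
best_item = np.where(unseen, S, -np.inf).argmax(axis=1)        # i* = argmax over -I(u)
fill_ratio = np.isfinite(R).sum() / float(R.size)              # existing ratings vs |U| x |I|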
import pandas as pd
unames = ['user_id', 'username']
users = pd.read_table('/Users/chengjun/GitHub/cjc2016/data/users_set.dat',
sep='|', header=None, names=unames)
rnames = ['user_id', 'course_id', 'rating']
ratings = pd.read_table('/Users/chengjun/GitHub/cjc2016/data/ratings.dat',
sep='|', header=None, names=rnames)
mnames = ['course_id', 'title', 'avg_rating', 'workload', 'university', 'difficulty', 'provider']
courses = pd.read_table('/Users/chengjun/GitHub/cjc2016/data/cursos.dat',
sep='|', header=None, names=mnames)
# show how one of them looks
ratings.head(10)
# show how one of them looks
users[:5]
courses[:5]
Explanation: The CourseTalk dataset: loading and first look
Loading of the CourseTalk database.
The CourseTalk data is spread across three files. Using the pd.read_table
method we load each file:
End of explanation
coursetalk = pd.merge(pd.merge(ratings, courses), users)
coursetalk
coursetalk.ix[0]
Explanation: Using pd.merge we get it all into one big DataFrame.
End of explanation
from pandas import pivot_table
dir(pivot_table)  # import first, then inspect the available arguments
mean_ratings = pivot_table(coursetalk, values = 'rating', columns='provider', aggfunc='mean')
mean_ratings.order(ascending=False)
Explanation: Collaborative filtering: generalizations of the aggregation function
Non-personalized recommendations
Groupby
The idea of groupby is that of split-apply-combine:
split data in an object according to a given key;
apply a function to each subset;
combine results into a new object.
To get mean course ratings grouped by the provider, we can use the pivot_table method:
End of explanation
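# --- Illustrative aside (not part of the original notebook) ---
# Split-apply-combine on a tiny made-up frame: split by 'provider', apply a mean,
# and combine the results back into a Series (the same thing the pivot_table call
# above does for the real data).
import pandas as pd
toy = pd.DataFrame({'provider': ['coursera', 'edx', 'coursera', 'udacity'],
                    'rating': [4, 3, 5, 4]})
print(toy.groupby('provider')['rating'].mean())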
ratings_by_title = coursetalk.groupby('title').size()
ratings_by_title[:10]
active_titles = ratings_by_title.index[ratings_by_title >= 20]
active_titles[:10]
Explanation: Now let's filter down to courses that received at least 20 ratings (a completely arbitrary number);
To do this, I group the data by course_id and use size() to get a Series of group sizes for each title:
End of explanation
mean_ratings = coursetalk.pivot_table('rating', columns='title', aggfunc='mean')
mean_ratings
Explanation: The index of titles receiving at least 20 ratings can then be used to select rows from mean_ratings above:
End of explanation
mean_ratings.ix[active_titles].order(ascending=False)
Explanation: By computing the mean rating for each course, we will order with the highest rating listed first.
End of explanation
mean_ratings = coursetalk.pivot_table('rating', index='title',columns='provider', aggfunc='mean')
mean_ratings[:10]
mean_ratings['coursera'][active_titles].order(ascending=False)[:10]
Explanation: To see the top courses among Coursera students, we can sort by the 'Coursera' column in descending order:
End of explanation
# transform the ratings frame into a ratings matrix
ratings_mtx_df = coursetalk.pivot_table(values='rating',
index='user_id',
columns='title')
ratings_mtx_df.ix[ratings_mtx_df.index[:15], ratings_mtx_df.columns[:15]]
Explanation: Now, let's go further! How about ranking the courses with the highest percentage of ratings that are 4 or higher (% of ratings 4+)?
Let's start with a simple pivoting example that does not involve any aggregation. We can extract a ratings matrix as follows:
End of explanation
ratings_gte_4 = ratings_mtx_df[ratings_mtx_df>=4.0]
# with an integer axis index only label-based indexing is possible
ratings_gte_4.ix[ratings_gte_4.index[:15], ratings_gte_4.columns[:15]]
Explanation: Let's extract only the rating that are 4 or higher.
End of explanation
ratings_gte_4_pd = pd.DataFrame({'total': ratings_mtx_df.count(), 'gte_4': ratings_gte_4.count()})
ratings_gte_4_pd.head(10)
ratings_gte_4_pd['gte_4_ratio'] = (ratings_gte_4_pd['gte_4'] * 1.0)/ ratings_gte_4_pd.total
ratings_gte_4_pd.head(10)
ranking = [(title,total,gte_4, score) for title, total, gte_4, score in ratings_gte_4_pd.itertuples()]
for title, total, gte_4, score in sorted(ranking, key=lambda x: (x[3], x[2], x[1]) , reverse=True)[:10]:
print title, total, gte_4, score
Explanation: Now picking the number of total ratings for each course and the count of ratings 4+ , we can merge them into one DataFrame.
End of explanation
ratings_by_title = coursetalk.groupby('title').size()
ratings_by_title.order(ascending=False)[:10]
Explanation: Let's now take it easy: count the number of ratings for each course, and order them so the most-rated courses come first.
End of explanation
for title, total, gte_4, score in sorted(ranking, key=lambda x: (x[2], x[3], x[1]) , reverse=True)[:10]:
print title, total, gte_4, score
Explanation: Considering this information, we can sort by the number of 4+ ratings, breaking ties with the percentage of 4+ ratings.
End of explanation
course_users = coursetalk.pivot_table('rating', index='title', columns='user_id')
course_users.ix[course_users.index[:15], course_users.columns[:15]]
Explanation: Finally, using the formula above, let's find out which courses most often occur with the popular MOOC An Introduction to Interactive Programming in Python by using the "x + y / x" method. For each course, we calculate the percentage of the Python course's raters who also rated that course. Order with the highest percentage first, and voilà, we have the top 5 MOOCs.
End of explanation
ratings_by_course = coursetalk[coursetalk.title == 'An Introduction to Interactive Programming in Python']
ratings_by_course.set_index('user_id', inplace=True)
Explanation: First, let's get only the users that rated the course An Introduction to Interactive Programming in Python
End of explanation
their_ids = ratings_by_course.index
their_ratings = course_users[their_ids]
course_users[their_ids].ix[course_users[their_ids].index[:15], course_users[their_ids].columns[:15]]
Explanation: Now, for all other courses let's filter out only the ratings from users that rated the Python course.
End of explanation
course_count = their_ratings.ix['An Introduction to Interactive Programming in Python'].count()
sims = their_ratings.apply(lambda profile: profile.count() / float(course_count) , axis=1)
Explanation: By applying the division (number of users who rated both the Python course and the given course) / (total number of users who rated the Python course), we obtain our percentage.
End of explanation
sims.order(ascending=False)[1:][:10]
Explanation: Ordering by the score, highest first, and skipping the first entry, which is the course itself.
End of explanation |
785 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulate the model and make Figure SI-1
Imports
First run all of the code in this section to import the necessary packages.
First we load some magic commands
Step1: Next load some standard modules. If you do not have one of these modules (such as progressbar or joblib), then run, for example, !pip install progressbar to install it using pip.
Step2: Set style parameters for matplotlib figures
Step3: Set the path for saving the figures
Step4: Import the code for simulating the model
Step5: Panel (a)
Step6: This confirms that we have 1000 simulations for each quadruple (r, xi, n_agents, init_F).
Step7: Save the data to the hard drive as a CSV file
Step8: Load long_run_results from the hard drive
Run the code below to load the results from the CSV file in order to avoid having to re-run the simulations above (which takes about 90 minutes)
Step9: Panel (b)
Step10: This takes about 22 minutes to run
Step11: Plot some time-series from the simulation
Step12: Simulate the model with sticky links and preferential attachment
Set up the simulation
Step13: This takes about 25 minutes to run
Step14: Plot some time-series from the simulation
Step15: Save and load the two simulations above using pickle
Save (pickle) the simulations to file sim_N1000_alpha0p15_beta0p4eps0p0001_initF0p7.pkl
Step16: Load the simulations from the pickle file sim_N1000_alpha0p15_beta0p4eps0p0001_initF0p7.pkl
Step17: Make Figure SI-1
The cell below makes Figure SI-1 and saves it to the folder figures as a PDF.
Step18: Check statistical significance of the difference in means in Figure SI-1(a)
In the cell below, we find that the means of $F(1000)$ are statistically significantly different between the two models for $F(0) = 0.155, 0.16, 0.165, ..., 0.2$ according to the two-sided Mann-Whitney $U$ test ($p$-value $< 10^{-5}$)
Step19: Check the robustness of the difference in variance in the time-series in Figure SI-1(b)
Below we run simulations with the same parameters and starting condition as in Figure SI-1(b) and record the mean and standard deviation of the time-series.
Run 200 simulations as in Figure SI-1(b)
Running the cell below takes about 21 hours to complete. Either run this cell or skip it to import the results in the section titled Import the results of running 200 simulations.
Step20: Save the results to a CSV file
Step21: Import the results of running 200 simulations
Step22: Analyze the results
First we plot histograms of the standard deviation of the time-series $F(t)$ for the two models. This figure is saved as compare_std_dev_F.pdf in the figures folder.
Step23: Next we group by (r, xi) and then compute the mean and standard deviation of the mean of the time-series.
Step24: The sticky links + preferential attachment model has a variance that is 8.6 times larger
Step25: This 8.6-fold difference amounts to a difference in 14.6 standard deviations
Step26: In a two-sided t-test (using scipy's ttest_ind) that allows for unequal variances in the two populations (because, as found below, the variances are found to be statistically significantly different), we obtain a p-value of 5.3e-251
Step27: We also find that a two-sided Mann-Whitney U test has a very small p-value (1e-67)
Step28: Check normality and different variances
Below we find that the standard deviations of the time-series $F(t)$ (plotted as a histogram above) are normal with p-values 0.06 and 2.6e-5.
Step29: According to the Bartlett test, their variances are different (p-value 2.6e-74), so we reject the null hypothesis that they are drawn from populations with the same variance.
In case the sticky/preferential attachment model's standard deviation of $F(t)$ is not normally distributed, we also use the Levene test with the parameter center set to the 'mean' and to 'median' (to check both).
In all three cases, we get a very small p-value (1e-74, 1e-44, 1e-42, respectively), so we reject the null hypothesis that the variances are the same, and hence in the two-sided t-test above we set the keyword argument equal_var to False.
Step30: Dependencies | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
Explanation: Simulate the model and make Figure SI-1
Imports
First run all of the code in this section to import the necessary packages.
First we load some magic commands:
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
import seaborn as sns
import time, datetime
import progressbar
import pickle
import os
from itertools import product
from joblib import Parallel, delayed
from scipy import stats
import sys
sys.setrecursionlimit(10000) # to be able to pickle the results of simulations and avoid a RecursionError
Explanation: Next load some standard modules. If you do not have one of these modules (such as progressbar or joblib), then run, for example, !pip install progressbar to install it using pip.
End of explanation
from matplotlib import rc
rc('font', **{'family': 'sans-serif','sans-serif': ['Helvetica']})
rc('text', usetex=True)
rc('axes', **{'titlesize': 10, 'labelsize': 8})
rc('legend', **{'fontsize': 9})
Explanation: Set style parameters for matplotlib figures:
End of explanation
figures_path = os.path.join(os.pardir, 'figures')
if not os.path.exists(figures_path):
os.mkdir(figures_path)
Explanation: Set the path for saving the figures:
End of explanation
import ABM
import EconomySimulator
Explanation: Import the code for simulating the model:
End of explanation
def run_long_run_sim(trial_number, F0, r, xi):
n_agents = 200
beta = .4
n_steps = 5 * n_agents
L = 1
exog_fail = 0.0001
alpha = 0.15
tolerance_std = 0.0
n_steps_detect_fixed_point = 50
return EconomySimulator.simulate_economy_long_run(
n_agents=n_agents, init_fraction_functional=F0,
alpha=alpha, beta=beta, r=r, L=L, xi=xi, exog_fail=exog_fail,
n_steps=n_steps, trial=trial_number,
tolerance_std=tolerance_std, n_steps_detect_fixed_point=n_steps_detect_fixed_point)
try:
long_run_results
except NameError:
long_run_results = None
start_time = time.time()
long_run_results = pd.concat([long_run_results, pd.DataFrame(
Parallel(n_jobs=4)(
delayed(run_long_run_sim)(trial, F0, r, xi)
for trial in range(1000)
for F0 in np.arange(.155, .205, .01)
for r in [1., 2000.]
for xi in [0, 1]
)
)])
end_time = time.time()
print(datetime.timedelta(seconds=(end_time - start_time)))
Explanation: Panel (a): long-run fraction functional as a function of the initial condition
Compute the data (takes about 1.5 hours to run)
The code in the cell below creates a pandas DataFrame called long_run_results. It in long_run_results the dictionary returned by the function EconomySimulator.simulate_economy_long_run. This dictionary contains some measures of the state of the model economy after 1000 production attempts have been simulated.
The function run_long_run_sim sets the parameters of the economy, and the for loop iterates over the initial condition F0 (the initial fraction of functional agents), r in [1, 2000], xi in [0, 1], and a trial index trial in range(1000) (we run 1000 trials for each initial condition).
Warning: This code takes about 1.5 hours to run on a laptop computer. To avoid having to re-run this, run the cell under the section heading Load long_run_results from the hard drive below.
End of explanation
long_run_results.groupby(['r', 'xi', 'n_agents', 'init_F']).size()
Explanation: This confirms that we have 1000 simulations for each quadruple (r, xi, n_agents, init_F).
End of explanation
long_run_results.to_csv(
os.path.join(
'simulated_data',
'long_run_results_n200_alpha0p15_beta0p4_epsilon0p0001.csv'))
Explanation: Save the data to the hard drive as a CSV file
End of explanation
long_run_results = pd.read_csv(
os.path.join(
'simulated_data',
'long_run_results_n200_alpha0p15_beta0p4_epsilon0p0001.csv'),
index_col=0)
Explanation: Load long_run_results from the hard drive
Run the code below to load the results from the CSV file in order to avoid having to re-run the simulations above (which takes about 90 minutes):
End of explanation
sim_N1000_alpha0p15_beta0p4_r1_xi0_eps0p0001_initF0p7 = EconomySimulator.AssortativitySimulator(
ABM.Economy(1000, .7, alpha=.15, beta=.4, r=1, exog_fail=0.0001, xi=0))
Explanation: Panel (b): show two representative time-series
Simulate the original model and the model with sticky links and preferential attachment
Either
run the simulations below (which should take around 50 minutes to run), or
load the results of those simulations that were pickled (scroll down to the heading Load the simulations from the pickle file sim_N1000_alpha0p15_beta0p4eps0p0001_initF0p7.pkl).
Simulate the original model
Set up the simulation:
End of explanation
sim_N1000_alpha0p15_beta0p4_r1_xi0_eps0p0001_initF0p7.simulate(200000)
Explanation: This takes about 22 minutes to run:
End of explanation
sim_N1000_alpha0p15_beta0p4_r1_xi0_eps0p0001_initF0p7.combined_plot()
Explanation: Plot some time-series from the simulation:
End of explanation
sim_N1000_alpha0p15_beta0p4_r2000_xi1_eps0p0001_initF0p7 = EconomySimulator.AssortativitySimulator(
ABM.Economy(1000, .7, alpha=.15, beta=.4, r=2000., exog_fail=0.0001, xi=1))
Explanation: Simulate the model with sticky links and preferential attachment
Set up the simulation:
End of explanation
sim_N1000_alpha0p15_beta0p4_r2000_xi1_eps0p0001_initF0p7.simulate(200000)
Explanation: This takes about 25 minutes to run:
End of explanation
sim_N1000_alpha0p15_beta0p4_r2000_xi1_eps0p0001_initF0p7.combined_plot()
Explanation: Plot some time-series from the simulation:
End of explanation
with open(os.path.join('simulated_data', 'sim_N1000_alpha0p15_beta0p4_eps0p0001_initF0p7_r1_xi0.pkl'), 'wb') as f:
pickle.dump(sim_N1000_alpha0p15_beta0p4_r1_xi0_eps0p0001_initF0p7, f)
with open(os.path.join('simulated_data', 'sim_N1000_alpha0p15_beta0p4_eps0p0001_initF0p7_r2000_xi1.pkl'), 'wb') as f:
pickle.dump(sim_N1000_alpha0p15_beta0p4_r2000_xi1_eps0p0001_initF0p7, f)
Explanation: Save and load the two simulations above using pickle
Save (pickle) the simulations to file sim_N1000_alpha0p15_beta0p4eps0p0001_initF0p7.pkl:
End of explanation
with open(os.path.join('simulated_data', 'sim_N1000_alpha0p15_beta0p4_eps0p0001_initF0p7_r1_xi0.pkl'), 'rb') as f:
sim_N1000_alpha0p15_beta0p4_r1_xi0_eps0p0001_initF0p7 = pickle.load(f)
with open(os.path.join('simulated_data', 'sim_N1000_alpha0p15_beta0p4_eps0p0001_initF0p7_r2000_xi1.pkl'), 'rb') as f:
sim_N1000_alpha0p15_beta0p4_r2000_xi1_eps0p0001_initF0p7 = pickle.load(f)
Explanation: Load the simulations from the pickle file sim_N1000_alpha0p15_beta0p4eps0p0001_initF0p7.pkl:
Run the code below to avoid having to run the two simulations above:
End of explanation
data = long_run_results
data.init_F = np.round(data.init_F, 3)
data = data[((data.r == 1) & (data.xi == 0)) | ((data.r > 1) & (data.xi > 0))]
grouped_by_r_xi = data.groupby(['r', 'xi'])
fig, ax = plt.subplots(ncols=2, figsize=(3.4 * 2 * .95, 3.4 / 5 * 3))
colors = ['#2ca02c', '#e377c2']
handles = []
labels = []
indx = 0
for r_xi, r_df in grouped_by_r_xi:
color = colors[indx]
indx += 1
labels.append(r_xi)
linestyle = {0: '-', 1: '--'}.get(r_xi[1])
data_final_F = (
r_df.groupby('init_F')['final_F']
.agg({
'mean_final_F': np.mean,
'std_final_F': np.std,
'num_trials': 'size',
'sem_final_F': lambda final_F: np.std(final_F) / len(final_F)**.5,
'75_percentile_final_F': lambda final_F: np.percentile(final_F, 75.),
'25_percentile_final_F': lambda final_F: np.percentile(final_F, 25.)}))
handle, = ax[0].plot(data_final_F.index, data_final_F.mean_final_F, label=str(r_xi),
color=color, alpha=1, linewidth=1,
linestyle='-')
ax[0].errorbar(data_final_F.index, data_final_F.mean_final_F,
yerr=2 * data_final_F.sem_final_F,
label=str(r_xi),
color=color)
handles.append(handle)
ax[0].set_xlabel(r'$F(0) \equiv$ initial fraction functional')
ax[0].set_ylabel(r'mean of $F(1000)$')
ax[0].set_ylim(0, 1)
xlim = (0.14 - .001, .201)
ax[0].set_xlim(*xlim)
height_trap_label = .01
label_size = 8
ax[0].annotate(
"",
xy=(xlim[0], height_trap_label),
xytext=(.15, height_trap_label),
arrowprops=dict(linewidth=1, headwidth=3, headlength=2, width=0.25))
ax[0].text(xlim[0] * .65 + .15 * .35, height_trap_label + .04, 'trap',
color='k', size=label_size)
height_bimodal_label = height_trap_label
ax[0].annotate(
"",
xy=(.152, height_bimodal_label),
xytext=(.185, height_bimodal_label),
arrowprops=dict(linewidth=1, headwidth=3, headlength=2, width=0.25))
ax[0].annotate(
"",
xytext=(.152, height_bimodal_label),
xy=(.185, height_bimodal_label),
arrowprops=dict(linewidth=1, headwidth=3, headlength=2, width=0.25))
ax[0].text(.152 * .65 + .185 * .35, height_bimodal_label + .04, 'bimodal', color='k', size=label_size)
ax[0].annotate(
'original model'
#'\n'
#r'$(r, \xi) = (1, 0)$'
,
size=label_size,
xy=(.1725, .56),
xytext=(.17, .30),
xycoords='data',
textcoords='data',
arrowprops=dict(arrowstyle="-|>", linewidth=1, connectionstyle="arc3,rad=.2"))
ax[0].annotate(
'sticky links'
#r' ($r = 2000$)'
' and'
'\n'
'prefential attachment'
#r' ($\xi = 1$)'
,
size=label_size,
xy=(.1625, .5),
xytext=(.145, .74),
xycoords='data',
textcoords='data',
arrowprops=dict(arrowstyle="-|>", linewidth=1, connectionstyle="arc3,rad=.2"))
sims = [
sim_N1000_alpha0p15_beta0p4_r1_xi0_eps0p0001_initF0p7,
sim_N1000_alpha0p15_beta0p4_r2000_xi1_eps0p0001_initF0p7
]
for indx, sim in enumerate(sims):
ax[1].plot(sim.fraction_functional_history,
alpha=.8,
color=colors[indx], linewidth=1)
ax[1].set_ylabel(r'$F(t)$')
ax[1].set_xlabel(r'time $t$ (number of production attempts)')
ax[1].set_xlim(0, sims[0].economy.n_production_attempts)
ax[1].set_ylim(0, 1)
ax[1].set_xticks([0, 10**5, 2 * 10**5], ['0', '10^5', '2 10^5'])
ax[1].tick_params(axis='both', labelsize=7, colors='.4')
ax[0].tick_params(axis='both', labelsize=7, colors='.4')
def format_label(value, pos):
return {
0: '0',
2.5 * 10**4: '',#r'$2.5\!\!\times\!\!10^4$',
5 * 10**4: r'$5\!\!\times\!\!10^4$',
10**5: r'$10^5$',
1.5 * 10**5: r'$1.5\!\!\times\!\!10^5$',
2*10**5: r'$2\!\!\times\!\!10^5$'
}.get(value, '')
ax[1].xaxis.set_major_formatter(mpl.ticker.FuncFormatter(format_label))
fig.text(.001, .94, r'\textbf{(a)}', size=label_size)
fig.text(#.49,
.50,
.94, r'\textbf{(b)}', size=label_size)
fig.tight_layout(pad=0.15)
fig.subplots_adjust(wspace=.25)
fig.savefig(os.path.join(figures_path, 'figure_SI_1.pdf'))
plt.show()
Explanation: Make Figure SI-1
The cell below makes Figure SI-1 and saves it to the folder figures as a PDF.
End of explanation
for init_F, df in long_run_results.groupby('init_F'):
df_grouped_by_r_xi = df.groupby(['r', 'xi'])
print('F(0) = {:>5}'.format(init_F), end='\n\t')
original_final_F = df_grouped_by_r_xi.get_group((1, 0))['final_F']
sticky_PA_final_F = df_grouped_by_r_xi.get_group((2000, 1))['final_F']
print('mean F(1000) for original model: {:>5.3f}'.format(original_final_F.mean()), end='\n\t')
print('mean F(1000) for sticky/PA model: {:>5.3f}'.format(sticky_PA_final_F.mean()), end='\n\t')
mann_whitney_test = stats.mannwhitneyu(sticky_PA_final_F, original_final_F, alternative='two-sided')
print('Mann-Whitney U test:')
print('\t\tp-value: ', mann_whitney_test.pvalue, end=' ')
if mann_whitney_test.pvalue < 10**(-3):
print('*' * 3)
else:
print('')
print('\t\tU = ', mann_whitney_test.statistic, end=' ')
print('\n')
Explanation: Check statistical significance of the difference in means in Figure SI-1(a)
In the cell below, we find that the means of $F(1000)$ are statistically significantly different between the two models for $F(0) = 0.155, 0.16, 0.165, ..., 0.2$ according to the two-sided Mann-Whitney $U$ test ($p$-value $< 10^{-5}$):
End of explanation
parameters = product(range(200), ((1, 0), (2000, 1)))
def simulate_long_run_variance(trial_number, r, xi):
n_agents = 1000
beta = .4
n_steps = 200 * n_agents
L = 1
F0 = 0.7
exog_fail = 0.0001
alpha = 0.15
econ = ABM.Economy(
n_agents, F0, alpha=alpha, beta=beta, r=r, exog_fail=exog_fail, xi=xi)
frac_functional_history = []
init_best_response = econ.latest_best_response
result = {
'init_n_inputs_needed': init_best_response.n_inputs_needed,
'init_n_inputs_attempted': init_best_response.n_inputs_attempted}
for i in range(n_steps):
econ.update_one_step()
frac_functional_history.append(econ.fraction_functional_agents())
final_best_response = econ.latest_best_response
result.update({
'final_n_inputs_needed': final_best_response.n_inputs_needed,
'final_n_inputs_attempted': final_best_response.n_inputs_attempted,
'final_F': econ.fraction_functional_agents(),
'n_agents': n_agents, 'init_F': F0, 'alpha': alpha, 'beta': beta, 'xi': xi,
'r': r, 'L': L, 'n_steps': n_steps,
'mean_F': np.mean(frac_functional_history),
'std_F': np.std(frac_functional_history),
'max_F': np.max(frac_functional_history),
'min_F': np.min(frac_functional_history)})
buffers = {
'init_buffer': (result['init_n_inputs_attempted'] -
result['init_n_inputs_needed']),
'final_buffer': (result['final_n_inputs_attempted'] -
result['final_n_inputs_needed'])}
result.update(buffers)
return result
try:
long_run_variance_simulations
except NameError:
long_run_variance_simulations = None
if __name__ == '__main__':
bar = progressbar.ProgressBar()
long_run_variance_simulations = pd.concat([long_run_variance_simulations, pd.DataFrame(
Parallel(n_jobs=4)(
delayed(simulate_long_run_variance)(trial, r, xi)
for trial, (r, xi) in bar(list(parameters))
)
)])
Explanation: Check the robustness of the difference in variance in the time-series in Figure SI-1(b)
Below we run simulations with the same parameters and starting condition as in Figure SI-1(b) and record the mean and standard deviation of the time-series.
Run 200 simulations as in Figure SI-1(b)
Running the cell below takes about 21 hours to complete. Either run this cell or skip it to import the results in the section titled Import the results of running 200 simulations.
End of explanation
long_run_variance_simulations.to_csv(
os.path.join(
'simulated_data',
'long_run_variance_simulations_n1000_alpha0p15_beta0p4_eps0p0001_initF0p7.csv'))
Explanation: Save the results to a CSV file:
End of explanation
long_run_variance_simulations = pd.read_csv(
os.path.join(
'simulated_data',
'long_run_variance_simulations_n1000_alpha0p15_beta0p4_eps0p0001_initF0p7.csv'),
index_col=0)
Explanation: Import the results of running 200 simulations
End of explanation
colors = {(1, 0): '#2ca02c', (2000, 1): '#e377c2'}
fig, ax = plt.subplots(figsize=(3.4, 3.4 / 5 * 3))
grouped_std_F = long_run_variance_simulations.groupby(['r', 'xi'])['std_F']
for r_xi, df in grouped_std_F:
ax.hist(df, bins=30, normed=False, color=colors[r_xi])
ax.set_xlabel('standard deviation of $F(t)$', size=12)
ax.set_ylabel('count', size=12)
ax.annotate(
'original model\n'
r'$(r, \xi) = (1, 0)$',
xy=(.02, 5), xytext=(.05, 5), xycoords='data', textcoords='data',
arrowprops=dict(arrowstyle="-|>", linewidth=1, connectionstyle="arc3,rad=.2"))
ax.annotate(
'sticky links \& preferential \nattachment\n'
r'$(r, \xi) = (2000, 1)$',
xy=(.14, 8), xytext=(.06, 12), xycoords='data', textcoords='data',
arrowprops=dict(arrowstyle="-|>", linewidth=1, connectionstyle="arc3,rad=.2"))
fig.tight_layout(pad=.15)
fig.savefig(os.path.join(figures_path, 'compare_std_dev_F.pdf'))
plt.show()
Explanation: Analyze the results
First we plot histograms of the standard deviation of the time-series $F(t)$ for the two models. This figure is saved as compare_std_dev_F.pdf in the figures folder.
End of explanation
compare_std_F = long_run_variance_simulations.groupby(['r', 'xi']).std_F.agg(
{'mean_std_F': 'mean', 'std_std_F': 'std', 'count': 'size'})
compare_std_F
Explanation: Next we group by (r, xi) and then compute the mean and standard deviation of the standard deviation of the time-series $F(t)$.
End of explanation
compare_std_F.loc[(2000, 1)].mean_std_F / compare_std_F.loc[(1, 0)].mean_std_F
Explanation: The sticky links + preferential attachment model has a variance that is 8.6 times larger:
End of explanation
((compare_std_F.loc[(2000, 1)].mean_std_F - compare_std_F.loc[(1, 0)].mean_std_F) /
compare_std_F.loc[(2000, 1)].std_std_F)
Explanation: This 8.6-fold difference amounts to a difference in 14.6 standard deviations:
End of explanation
std_F_sticky_PA = long_run_variance_simulations.groupby(['r', 'xi']).get_group((2000, 1)).std_F
std_F_original_model = long_run_variance_simulations.groupby(['r', 'xi']).get_group((1, 0)).std_F
print('two-sided t-test: ', stats.ttest_ind(std_F_sticky_PA, std_F_original_model, equal_var = False))
Explanation: In a two-sided t-test (using scipy's ttest_ind) that allows for unequal variances in the two populations (because, as found below, the variances are found to be statistically significantly different), we obtain a p-value of 5.3e-251:
End of explanation
stats.mannwhitneyu(std_F_sticky_PA, std_F_original_model, alternative='two-sided')
Explanation: We also find that a two-sided Mann-Whitney U test has a very small p-value (1e-67):
End of explanation
print('standard deviation of the time-series F(t) in the sticky links + preferential attachment model (r, xi) = (2000, 1)')
print('-' * 114)
print(' variance: ', np.var(std_F_sticky_PA))
print(' normality test: ', stats.normaltest(std_F_sticky_PA), end='\n' * 3)
print('standard deviation of the time-series F(t) in the original model (r, xi) = (1, 0)')
print('-' * 81)
print(' variance: ', np.var(std_F_original_model))
print(' normality test: ', stats.normaltest(std_F_original_model))
Explanation: Check normality and different variances
Below we find that the standard deviations of the time-series $F(t)$ (plotted as a histogram above) are normal with p-values 0.06 and 2.6e-5.
End of explanation
print('Bartlett test (null hypothesis: equal variance; used for normal data):\n\t',
stats.bartlett(std_F_sticky_PA, std_F_original_model), end='\n\n')
print('Levene test with center=mean (null hypothesis: equal variance; used for potentially non-normal data)\n\t',
stats.levene(std_F_sticky_PA, std_F_original_model, center='mean'), end='\n\n')
print('Levene test with center=mean (null hypothesis: equal variance; used for potentially non-normal data)\n\t',
stats.levene(std_F_sticky_PA, std_F_original_model, center='median'))
Explanation: According to the Bartlett test, their variances are different (p-value 2.6e-74), so we reject the null hypothesis that they are drawn from populations with the same variance.
In case the sticky/preferential attachment model's standard deviation of $F(t)$ is not normally distributed, we also use the Levene test with the parameter center set to the 'mean' and to 'median' (to check both).
In all three cases, we get a very small p-value (1e-74, 1e-44, 1e-42, respectively), so we reject the null hypothesis that the variances are the same, and hence in the two-sided t-test above we set the keyword argument equal_var to False.
End of explanation
import sys
sys.version
import joblib
for pkg in [mpl, pd, sns, np, progressbar, joblib]:
print(pkg.__name__, pkg.__version__)
Explanation: Dependencies
End of explanation |
786 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hazard Curves and Uniform Hazard Spectra
This IPython notebook allows the user to visualise the hazard curves for individual sites generated from a probabilistic event-based hazard analysis or a classical PSHA-based hazard analysis, and to export the plots as png files. The user can also plot the uniform hazard spectra (UHS) for different sites.
Please specify the path of the xml file containing the hazard curve or uniform hazard spectra results in order to use the hazard curve plotter or the uniform hazard spectra plotter respectively.
Step1: Hazard Curve
Step2: Uniform Hazard Spectra | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from plot_hazard_outputs import HazardCurve, UniformHazardSpectra
hazard_curve_file = "../sample_outputs/hazard/hazard_curve.xml"
hazard_curves = HazardCurve(hazard_curve_file)
Explanation: Hazard Curves and Uniform Hazard Spectra
This IPython notebook allows the user to visualise the hazard curves for individual sites generated from a probabilistic event-based hazard analysis or a classical PSHA-based hazard analysis, and to export the plots as png files. The user can also plot the uniform hazard spectra (UHS) for different sites.
Please specify the path of the xml file containing the hazard curve or uniform hazard spectra results in order to use the hazard curve plotter or the uniform hazard spectra plotter respectively.
End of explanation
hazard_curves.loc_list
hazard_curves.plot('80.763820|29.986170')
Explanation: Hazard Curve
End of explanation
uhs_file = "../sample_outputs/hazard/uniform_hazard_spectra.xml"
uhs = UniformHazardSpectra(uhs_file)
uhs.plot(0)
Explanation: Uniform Hazard Spectra
End of explanation |
787 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Summary of Available Sensorimotor and Interest Models
In this notebook, we summarize the different sensorimotor and interest models available in the Explauto library, and give some explanations or references. We suppose that the reader is familiar with the main components of the Explauto library explained in another notebook (full tutorial)
Step2: Sensorimotor models
In Explauto, a sensorimotor model implements both the iterative learning process from sensorimotor experience, i.e. from the iterative collection of $(m, s)$ pairs by interaction with the environment, and the use of the resulting internal model to perform forward and inverse predictions (or any kind of general prediction between sensorimotor subspaces).
Learning sensorimotor mappings involves machine learning algorithms, for which Explauto provides a unified interface through the SensorimotorModel abstract class.
Using the simple arm environment above, it allows to iteratively learn a sensorimotor model which will be able to
Step3: Forward models for Non-Stationnary environments
'NSNN' and 'NSLWLR' are modified versions of 'NN' and 'LWLR' where points are not only weighted by distance but also by the number of points that appeared after that one (gaussian with parameter sigma_t=100), to put less weight on old points and allow the learning of Non-Stationnary environments.
Inverse Models
Inverse models infer a motor command $m$ that should be able to reach a given goal $s_g$.
NN Inverse Model
To perform the inverse inference, the Nearest Neighbor inverse model just looks in the dataset of tuples $(m, s)$ for the nearest neighbor of the given sensory goal $s$, and returns its corresponding $m$.
WNN Inverse Model
Typical robotic forward models are very redundant
Step4: For each combination, we can use one of the possible configurations (use available_configurations to find them), or we can define our own configuration. See the following for some examples.
'nearest_neighbor'
Step5: We also can specify the parameters by hand
Step6: 'WNN'
Step7: 'LWLR-BFGS'
Step8: 'LWLR-CMAES'
Step9: Simple test
Choose a sensorimotor model and try the following test.
Step10: Interest models
In Explauto, the role of interest models is to provide sensorimotor predictions (forward or inverse) to be performed by the sensorimotor model. An interest model implements the active exploration process, where sensorimotor experiments are chosen to improve the forward or inverse predictions of the sensorimotor model. It explores in a given interest space resulting in motor babbling strategies when it corresponds to the motor space and in goal babbling strategies when it corresponds to the sensory space.
An interest model has to implement a sampling procedure in the interest space. Explauto provides several sampling procedures
Step11: We can get the default parameters of one of the algorithms with available_configuration
Step12: 'random'
The random interest model just draw random goals in the interest space.
Step13: 'discretized_progress'
The 'discretized_progress' interest model is based on the computation of the interest as the absolute derivative of the competence in each region of a fixed discretization of the interest space. 'x_card' is the total number of cells in the discretization. 'win_size' is the window size of the interest computation which is based on the last 'win_size' points.
Step14: 'tree'
See Baranes2012 for a presentation of the SAGG-RIAC algorithm. We re-implemented the algorithm here in python, with several implementation options.
The main idea is to adapt the discretization to the dataset distribution. At each iteration, if there are too many points in a region, that region is split into 2 subregions (along the next axis in a kdtree-like way), choosing the value of the split in order to best discriminate the interest of the 2 subregions.
Here are the options
Step15: 'gmm_progress_beta'
This model computes a gaussian mixture model that represents at the same time the space of interest, the competence, and time (thus a mixture in $S\times C \times T$ space). To sample in an interesting region of S, the algorithm weights the gaussian components based on their covariance between $C$ and $T$, giving positive weight to a component if the competence increases with time in that region of $S$.
See Moulin-Frier2013, page 9, for illustrations of this process. | Python Code:
from explauto.environment.environment import Environment
environment = Environment.from_configuration('simple_arm', 'mid_dimensional')
Explanation: Summary of Available Sensorimotor and Interest Models
In this notebook, we summarize the different sensorimotor and interest models available in the Explauto library, and give some explanations or references. We suppose that the reader is familiar with the main components of the Explauto library explained in another notebook (full tutorial): the environment, the sensorimotor model and the interest model.
Let's begin with defining a simple environment that will be used to test the sensorimotor models.
End of explanation
Input D problem dimension
Input X matrix of inputs: X[k][i] = i’th component of k’th input point.
Input Y matrix of outputs: Y[k] = k’th output value.
Input xq = query input. Input kwidth.
WXTWX = empty (D+1) x (D+1) matrix
WXTWY = empty (D+1) x 1 matrix
for ( k = 0 ; k <= N - 1 ; k = k + 1 )
# Compute weight of kth point
wk = weight_function( distance( xq , X[k] ) / kwidth )
/* Add to (WX) ^T (WX) matrix */
for ( i = 0 ; i <= D ; i = i + 1 )
for ( j = 0 ; j <= D ; j = j + 1 )
if ( i == 0 )
xki = 1 else xki = X[k] [i]
if ( j == 0 )
xkj = 1 else xkj = X[k] [j]
WXTWX [i] [j] = WXTWX [i] [j] + wk * wk * xki * xkj
/* Add to (WX) ^T (WY) vector */
for ( i = 0 ; i <= D ; i = i + 1 )
if ( i == 0 )
xki = 1 else xki = X[k] [i]
WXTWY [i] = WXTWY [i] + wk * wk * xki * Y[k]
/* Compute the local beta. Call your favorite linear equation solver.
Recommend Cholesky Decomposition for speed.
Recommend Singular Val Decomp for Robustness. */
Beta = (WXTWX)^{-1}(WXTWY)
Output ypredict = beta[0] + beta[1]*xq[1] + beta[2]*xq[2] + … beta[D]*xq[D]
Explanation: Sensorimotor models
In Explauto, a sensorimotor model implements both the iterative learning process from sensorimotor experience, i.e. from the iterative collection of $(m, s)$ pairs by interaction with the environment, and the use of the resulting internal model to perform forward and inverse predictions (or any kind of general prediction between sensorimotor subspaces).
Learning sensorimotor mappings involves machine learning algorithms, for which Explauto provides a unified interface through the SensorimotorModel abstract class.
Using the simple arm environment above, it allows us to iteratively learn a sensorimotor model which will be able to:
* infer the position of the end-effector from a given motor command, what is called forward prediction,
* infer the motor command allowing to reach a particular end-effector position, what is called inverse prediction.
* update online from sensorimotor experience
Several sensorimotor models are provided: simple nearest-neighbor look-up, non-parametric models combining classical regressions and optimization algorithms, online local mixtures of Gaussians (beta). Here we will only explain non-parametric models.
Non-parametric models can be decomposed into a dataset, a forward model, and an inverse model.
The dataset just stores all the experiments (m, s) into a list.
The forward model uses the dataset for the forward prediction computation, and the inverse model uses the forward model, or directly the dataset to perform inverse prediction.
All the non-parametric sensorimotor models have two operating modes: "explore" and "exploit".
In the "explore" mode, when the agent asks for the exact inverse prediction $m$ of a goal $s_g$, $m$ will be perturbated with some gaussian exploration noise in order to allow the agent to explore new motor commands. The sensorimotor models thus have a common parameter: sigma_explo_ratio=0.1 (default), which is the standard deviation of the gaussian noise, scaled depending of the motor domain size: if a motor value is bounded in [-2:2], then a sigma_explo_ratio of 0.1 will induce an exploration noise of (m_max - m_min) * sigma_explo_ratio = 0.4
In the "exploit" mode, no exploration noise is added. This mode is used for instance when evaluating the inverse model for comparison purposes.
Forward Models:
Forward models predict $s_p$ given a $m$ that might have never been observed, using the dataset of observations $(m,s)$.
NN Forward model
To perform a forward prediction, the Nearest Neighbor model simply looks in the dataset of tuples $(m, s)$ for the nearest neighbor of the given motor command $m$, and returns its corresponding $s$.
This forward model is very fast (up to datasets of size $10^5$), and makes no assumptions about the regularity of the model being learned (continuity, linearity, ...). It works sufficiently well in different typical robotic applications.
WNN Forward model
To perform a forward prediction of $m$, the Weighted Nearest Neighbor model looks at the $k$ (parameter) nearest neighbors of $m$ in the dataset, and returns the average of the $k$ corresponding $s$. This average is weighted by the distance to $m$ with a gaussian of standard deviation $\sigma$ (parameter).
See k-nearest neighbors algorithm.
LWLR Forward model
The Locally Weighted Linear Regression (LWLR) computes a linear regression of the $k$ nearest neighbors of $m$ (thus a local regression), and finds the requested $s$ for the given $m$ based on that regression.
References :
1. https://en.wikipedia.org/wiki/Local_regression
2. C. G. Atkeson, A. W. Moore, S. Schaal, "Locally Weighted Learning for Control", "Springer Netherlands", 75-117, vol 11, issue 1, 1997/02, 10.1023/A:1006511328852
3. See also a video lecture on LWR.
Pseudo Code :
End of explanation
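For illustration, here is a rough NumPy translation of the pseudo-code above. It is only a sketch (the function name, the Gaussian weighting and the array shapes are our own choices), not Explauto's actual LWLR implementation.
import numpy as np

def lwlr_predict(X, Y, xq, kwidth=0.1):
    # X: (N, D) stored inputs, Y: (N,) or (N, p) stored outputs, xq: (D,) query input
    Y = np.asarray(Y).reshape(len(X), -1)
    # Gaussian weight of each stored point, based on its distance to the query
    dists = np.linalg.norm(X - xq, axis=1)
    w = np.exp(-(dists / kwidth) ** 2)
    # Weighted design matrix with a bias column, as in the pseudo-code
    Xb = np.hstack([np.ones((len(X), 1)), X])
    WX = w[:, None] * Xb
    WY = w[:, None] * Y
    # Solve (WX)^T (WX) beta = (WX)^T (WY); least squares is used for robustness
    beta = np.linalg.lstsq(WX.T.dot(WX), WX.T.dot(WY), rcond=None)[0]
    # Evaluate the local linear model at the query point
    return np.hstack([1.0, xq]).dot(beta)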
from explauto.sensorimotor_model import sensorimotor_models, available_configurations
sensorimotor_models.keys()
Explanation: Forward models for Non-Stationary environments
'NSNN' and 'NSLWLR' are modified versions of 'NN' and 'LWLR' where points are not only weighted by distance but also by the number of points that appeared after that one (gaussian with parameter sigma_t=100), to put less weight on old points and allow the learning of Non-Stationary environments.
Inverse Models
Inverse models infer a motor command $m$ that should be able to reach a given goal $s_g$.
NN Inverse Model
To perform the inverse inference, the Nearest Neighbor inverse model simply looks in the dataset of tuples $(m, s)$ for the nearest neighbor of the given sensory goal $s$, and returns its corresponding $m$.
WNN Inverse Model
Typical robotic forward models are very redundant: e.g. a robotic arm can put its hand to position $s$ with an infinity of possible $m$ motor positions.
Thus, trying to infer a motor command $m$ to reach a given goal $s$ by averaging the nearest neighbors of $s$ in the dataset would make no sense, as those nearest neighbors might have very different corresponding motor commands.
To perform the inverse inference of a given $s$, the Weighted Nearest Neighbor model looks at the nearest neighbor of $s$ in the dataset and gets its corresponding $m$. It then finds the $k$ (parameter) nearest neighbors of $m$ in the dataset, and returns their average weighted by the distance of their sensory part to $s$, with a gaussian of standard deviation $\sigma$ (parameter).
See code here.
Optimization Inverse model
Another possibility to perform inverse inference is to use an optimization algorithm to minimize the error $e(x) = ||f(x) - y_g||^2$ where $y_g$ is the goal, $f$ is the forward model, and $x$ is the motor command to be infered.
This is what our scipy.optimize-based inverse models do.
The adapted ones are 'COBYLA' (wikipedia), 'BFGS' and 'L-BFGS-B' (wikipedia).
They take a 'maxfun' (BFGS) or 'maxiter' (COBYLA) parameter that limits the number of error function (and so forward model) evaluation.
'CMAES' Inverse model (Covariance Matrix Adaptation - Evolutionary Strategy) also optimizes that error function but makes fewer assumptions on the regularity of the forward model to perform the search. It is based on a random exploration (with a computed covariance) around a current point of interest, and adapts this point and recomputes the covariance matrix at each iteration, with memory of the path taken.
The initial point is set as the motor part $m$ of the nearest neighbor $s$ of the goal $s_g$, and the initial covariance matrix is identity times an exploration $\sigma$ (parameter). This inverse model also takes a 'maxfevals' parameter that limits the number of forward model evaluations.
See Hansen's website and this tutorial on CMA-ES.
Combinations of one forward and one inverse model: the sensorimotor model
Combinations of a forward and an inverse model can be instantiated using 'fwd' and 'inv' options.
Possible 'fwd': 'NN', 'WNN', 'LWLR', 'NSNN', 'NSLWLR'
Possible 'inv': 'NN', 'WNN', 'BFGS', 'L-BFGS-B', 'COBYLA', 'CMAES', 'Jacobian'
Here are the already provided combinations:
End of explanation
available_configurations('nearest_neighbor')
from explauto.sensorimotor_model.sensorimotor_model import SensorimotorModel
sm_model = SensorimotorModel.from_configuration(environment.conf, "nearest_neighbor", "default")
Explanation: For each combination, we can use one of the possible configurations (use available_configurations to find them), or we can define our own configuration. See the following for some examples.
'nearest_neighbor'
End of explanation
from explauto.sensorimotor_model.non_parametric import NonParametric
params = {'fwd': 'NN', 'inv': 'NN', 'sigma_explo_ratio':0.1}
sm_model = NonParametric(environment.conf, **params)
Explanation: We can also specify the parameters by hand:
End of explanation
params = {'fwd': 'WNN', 'inv': 'WNN', 'k':20, 'sigma':0.1, 'sigma_explo_ratio':0.1}
sm_model = NonParametric(environment.conf, **params)
Explanation: 'WNN'
End of explanation
params = {'fwd': 'LWLR', 'k':10, 'inv': 'L-BFGS-B', 'maxfun':50}
sm_model = NonParametric(environment.conf, **params)
Explanation: 'LWLR-BFGS'
End of explanation
params = {'fwd': 'LWLR', 'k':10, 'inv': 'CMAES', 'cmaes_sigma':0.05, 'maxfevals':20}
sm_model = NonParametric(environment.conf, **params)
Explanation: 'LWLR-CMAES'
End of explanation
%pylab inline
for m in environment.random_motors(n=1000):
# compute the sensori effect s of the motor command m through the environment:
s = environment.compute_sensori_effect(m)
# update the model according to this experience:
sm_model.update(m, s)
sm_model.mode = "exploit"
s_g = [0.7, 0.5]
m = sm_model.inverse_prediction(s_g)
print('Inferred motor command to reach the position %s: %s' % (s_g, m))
ax = axes()
environment.plot_arm(ax, m)
ax.plot(*s_g, marker='o', color='red')
Explanation: Simple test
Choose a sensorimotor model and try the following test.
End of explanation
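Forward predictions can be checked in the same way. The snippet below is a sketch that assumes the forward_prediction method of the sensorimotor model interface; it compares the model's prediction with the true effect returned by the environment.
m = environment.random_motors(n=1)[0]
s_pred = sm_model.forward_prediction(m)     # predicted hand position for motor command m
s_real = environment.compute_sensori_effect(m)  # actual hand position
print('Predicted: %s  Actual: %s' % (s_pred, s_real))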
from explauto.interest_model import interest_models, available_configurations
interest_models.keys()
Explanation: Interest models
In Explauto, the role of interest models is to provide sensorimotor predictions (forward or inverse) to be performed by the sensorimotor model. An interest model implements the active exploration process, where sensorimotor experiments are chosen to improve the forward or inverse predictions of the sensorimotor model. It explores in a given interest space resulting in motor babbling strategies when it corresponds to the motor space and in goal babbling strategies when it corresponds to the sensory space.
An interest model has to implement a sampling procedure in the interest space. Explauto provides several sampling procedures:
* random sampling ('random'),
* learning progress maximization in forward or inverse predictions, with a fixed discretization of the interest space ('discretized_progress'),
* learning progress maximization in forward or inverse predictions, with an adapting discretization of the interest space ('tree').
At each iteration, a goal is selected by the interest model, the sensorimotor model tries to reach that goal, and the distance between the actual reached point and the goal serves to compute the competence on that goal.
See this notebook for a comparison of 'random', 'discretized_progress' and 'tree' interest models.
End of explanation
available_configurations('discretized_progress')
Explanation: We can get the default parameters of one of the algorithms with available_configuration:
End of explanation
from explauto import InterestModel
im_model = InterestModel.from_configuration(environment.conf, environment.conf.s_dims, 'random')
Explanation: 'random'
The random interest model just draws random goals in the interest space.
End of explanation
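A few goals drawn by this interest model; sample() is the common sampling method of Explauto's interest models (treat this as a sketch).
for _ in range(3):
    # each call returns a random point of the sensory (interest) space
    print(im_model.sample())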
from explauto.interest_model.discrete_progress import DiscretizedProgress, competence_dist
im_model = DiscretizedProgress(environment.conf, environment.conf.s_dims, **{'x_card': 1000,
'win_size': 10,
'measure': competence_dist})
Explanation: 'discretized_progress'
The 'discretized_progress' interest model is based on the computation of the interest as the absolute derivative of the competence in each region of a fixed discretization of the interest space. 'x_card' is the total number of cells in the discretization. 'win_size' is the window size of the interest computation which is based on the last 'win_size' points.
End of explanation
from explauto.interest_model.tree import InterestTree, competence_exp
im_model = InterestTree(environment.conf, environment.conf.s_dims, **{'max_points_per_region': 100,
'max_depth': 20,
'split_mode': 'best_interest_diff',
'competence_measure': lambda target,reached : competence_exp(target, reached, 0., 10.),
'progress_win_size': 50,
'progress_measure': 'abs_deriv_smooth',
'sampling_mode': {'mode':'softmax',
'param':0.2,
'multiscale':False,
'volume':True}})
Explanation: 'tree'
See Baranes2012 for a presentation of the SAGG-RIAC algorithm. We re-implemented the algorithm here in python, with several implementation options.
The main idea is to adapt the discretization to the dataset distribution. At each iteration, if there are too many points in a region, that region is split into 2 subregions (along the next axis in a kdtree-like way), choosing the value of the split in order to best discriminate the interest of the 2 subregions.
Here are the options:
max_points_per_region : int:
Maximum number of points per region. A given region is split when this number is exceeded.
max_depth : int:
Maximum depth of the tree
split_mode : string:
Mode to split a region:
'random': random value between first and last points,
'median': median of the points in the region on the split dimension,
'middle': middle of the region on the split dimension,
'best_interest_diff':
value that maximizes the difference of progress in the 2 sub-regions
(described in Baranes2012: Active Learning of Inverse Models
with Intrinsically Motivated Goal Exploration in Robots)
progress_win_size : int:
Number of last points taken into account for progress computation (should be < max_points_per_region)
progress_measure : string:
How to compute progress:
'abs_deriv_cov': approach from explauto's discrete progress interest model
'abs_deriv': absolute difference between first and last points in the window,
'abs_deriv_smooth', absolute difference between first and last half of the window
sampling_mode : list:
How to sample a point in the tree:
dict(multiscale=bool,
volume=bool,
mode='greedy'|'random'|'epsilon_greedy'|'softmax',
param=float)
multiscale: if we choose between all the nodes of the tree to sample a goal, leading to a multi-scale resolution
(described in Baranes2012: Active Learning of Inverse Models
with Intrinsically Motivated Goal Exploration in Robots)
volume: if we weight the progress of nodes with their volume to choose between them
(new approach)
mode: sampling mode
param: a parameter of the sampling mode: eps for eps_greedy, temperature for softmax.
End of explanation
from explauto.interest_model.gmm_progress import GmmInterest, competence_exp
im_model = GmmInterest(environment.conf, environment.conf.s_dims, **{'measure': competence_exp,
'n_samples': 40,
'n_components': 6})
Explanation: 'gmm_progress_beta'
This model computes a gaussian mixture model that represents at the same time the space of interest, the competence, and time (thus a mixture in $S\times C \times T$ space). To sample in an interesting region of S, the algorithm weights the gaussian components based on their covariance between $C$ and $T$, giving positive weight to a component if the competence increases with time in that region of $S$.
See Moulin-Frier2013, page 9, for illustrations of this process.
End of explanation |
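To tie the two parts together, here is a schematic goal-babbling loop combining an interest model and a sensorimotor model. The method names follow the Explauto tutorials; treat this as a sketch rather than a drop-in recipe.
for _ in range(100):
    s_g = im_model.sample()                      # goal chosen in the sensory space
    m = sm_model.inverse_prediction(s_g)         # motor command proposed by the sensorimotor model
    s = environment.compute_sensori_effect(m)    # actual sensory consequence
    sm_model.update(m, s)                        # learn from this experiment
    im_model.update(np.hstack((m, s_g)), np.hstack((m, s)))  # update the competence/interest estimate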
788 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interactions and ANOVA
Note
Step1: Take a look at the data
Step2: Fit a linear model
Step3: Have a look at the created design matrix
Step4: Or since we initially passed in a DataFrame, we have a DataFrame available in
Step5: We keep a reference to the original untouched data in
Step6: Influence statistics
Step7: or get a dataframe
Step8: Now plot the residuals within the groups separately
Step9: Now we will test some interactions using anova or f_test
Step10: Do an ANOVA check
Step11: The design matrix as a DataFrame
Step12: The design matrix as an ndarray
Step13: Looks like one observation is an outlier.
Step14: Replot the residuals
Step15: Plot the fitted values
Step16: From our first look at the data, the difference between Master's and PhD in the management group is different than in the non-management group. This is an interaction between the two qualitative variables management,M and education,E. We can visualize this by first removing the effect of experience, then plotting the means within each of the 6 groups using interaction.plot.
Step17: Minority Employment Data
Step18: One-way ANOVA
Step19: Two-way ANOVA
Step20: Explore the dataset
Step21: Balanced panel
Step22: You have things available in the calling namespace available in the formula evaluation namespace
Step23: Sum of squares
Illustrates the use of different types of sums of squares (I, II, III)
and how the Sum contrast can be used to produce the same output between
the 3.
Types I and II are equivalent under a balanced design.
Do not use Type III with non-orthogonal contrast - ie., Treatment | Python Code:
%matplotlib inline
from urllib.request import urlopen
import numpy as np
np.set_printoptions(precision=4, suppress=True)
import pandas as pd
pd.set_option("display.width", 100)
import matplotlib.pyplot as plt
from statsmodels.formula.api import ols
from statsmodels.graphics.api import interaction_plot, abline_plot
from statsmodels.stats.anova import anova_lm
try:
salary_table = pd.read_csv("salary.table")
except: # recent pandas can read URL without urlopen
url = "http://stats191.stanford.edu/data/salary.table"
fh = urlopen(url)
salary_table = pd.read_table(fh)
salary_table.to_csv("salary.table")
E = salary_table.E
M = salary_table.M
X = salary_table.X
S = salary_table.S
Explanation: Interactions and ANOVA
Note: This script is based heavily on Jonathan Taylor's class notes https://web.stanford.edu/class/stats191/notebooks/Interactions.html
Download and format data:
End of explanation
plt.figure(figsize=(6, 6))
symbols = ["D", "^"]
colors = ["r", "g", "blue"]
factor_groups = salary_table.groupby(["E", "M"])
for values, group in factor_groups:
i, j = values
plt.scatter(group["X"], group["S"], marker=symbols[j], color=colors[i - 1], s=144)
plt.xlabel("Experience")
plt.ylabel("Salary")
Explanation: Take a look at the data:
End of explanation
formula = "S ~ C(E) + C(M) + X"
lm = ols(formula, salary_table).fit()
print(lm.summary())
Explanation: Fit a linear model:
End of explanation
lm.model.exog[:5]
Explanation: Have a look at the created design matrix:
End of explanation
lm.model.data.orig_exog[:5]
Explanation: Or since we initially passed in a DataFrame, we have a DataFrame available in
End of explanation
lm.model.data.frame[:5]
Explanation: We keep a reference to the original untouched data in
End of explanation
infl = lm.get_influence()
print(infl.summary_table())
Explanation: Influence statistics
End of explanation
df_infl = infl.summary_frame()
df_infl[:5]
Explanation: or get a dataframe
End of explanation
resid = lm.resid
plt.figure(figsize=(6, 6))
for values, group in factor_groups:
i, j = values
group_num = i * 2 + j - 1 # for plotting purposes
x = [group_num] * len(group)
plt.scatter(
x,
resid[group.index],
marker=symbols[j],
color=colors[i - 1],
s=144,
edgecolors="black",
)
plt.xlabel("Group")
plt.ylabel("Residuals")
Explanation: Now plot the residuals within the groups separately:
End of explanation
interX_lm = ols("S ~ C(E) * X + C(M)", salary_table).fit()
print(interX_lm.summary())
Explanation: Now we will test some interactions using anova or f_test
End of explanation
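The same interaction hypothesis can also be tested directly with f_test. The term names below are a guess based on patsy's default naming; inspect interX_lm.model.exog_names if they differ.
# F-test that both E-by-X interaction coefficients are zero
print(interX_lm.f_test("C(E)[T.2]:X = 0, C(E)[T.3]:X = 0"))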
from statsmodels.stats.api import anova_lm
table1 = anova_lm(lm, interX_lm)
print(table1)
interM_lm = ols("S ~ X + C(E)*C(M)", data=salary_table).fit()
print(interM_lm.summary())
table2 = anova_lm(lm, interM_lm)
print(table2)
Explanation: Do an ANOVA check
End of explanation
interM_lm.model.data.orig_exog[:5]
Explanation: The design matrix as a DataFrame
End of explanation
interM_lm.model.exog
interM_lm.model.exog_names
infl = interM_lm.get_influence()
resid = infl.resid_studentized_internal
plt.figure(figsize=(6, 6))
for values, group in factor_groups:
i, j = values
idx = group.index
plt.scatter(
X[idx],
resid[idx],
marker=symbols[j],
color=colors[i - 1],
s=144,
edgecolors="black",
)
plt.xlabel("X")
plt.ylabel("standardized resids")
Explanation: The design matrix as an ndarray
End of explanation
drop_idx = abs(resid).argmax()
print(drop_idx) # zero-based index
idx = salary_table.index.drop(drop_idx)
lm32 = ols("S ~ C(E) + X + C(M)", data=salary_table, subset=idx).fit()
print(lm32.summary())
print("\n")
interX_lm32 = ols("S ~ C(E) * X + C(M)", data=salary_table, subset=idx).fit()
print(interX_lm32.summary())
print("\n")
table3 = anova_lm(lm32, interX_lm32)
print(table3)
print("\n")
interM_lm32 = ols("S ~ X + C(E) * C(M)", data=salary_table, subset=idx).fit()
table4 = anova_lm(lm32, interM_lm32)
print(table4)
print("\n")
Explanation: Looks like one observation is an outlier.
End of explanation
resid = interM_lm32.get_influence().summary_frame()["standard_resid"]
plt.figure(figsize=(6, 6))
resid = resid.reindex(X.index)
for values, group in factor_groups:
i, j = values
idx = group.index
plt.scatter(
X.loc[idx],
resid.loc[idx],
marker=symbols[j],
color=colors[i - 1],
s=144,
edgecolors="black",
)
plt.xlabel("X[~[32]]")
plt.ylabel("standardized resids")
Explanation: Replot the residuals
End of explanation
lm_final = ols("S ~ X + C(E)*C(M)", data=salary_table.drop([drop_idx])).fit()
mf = lm_final.model.data.orig_exog
lstyle = ["-", "--"]
plt.figure(figsize=(6, 6))
for values, group in factor_groups:
i, j = values
idx = group.index
plt.scatter(
X[idx],
S[idx],
marker=symbols[j],
color=colors[i - 1],
s=144,
edgecolors="black",
)
# drop NA because there is no idx 32 in the final model
fv = lm_final.fittedvalues.reindex(idx).dropna()
x = mf.X.reindex(idx).dropna()
plt.plot(x, fv, ls=lstyle[j], color=colors[i - 1])
plt.xlabel("Experience")
plt.ylabel("Salary")
Explanation: Plot the fitted values
End of explanation
U = S - X * interX_lm32.params["X"]
plt.figure(figsize=(6, 6))
interaction_plot(
E, M, U, colors=["red", "blue"], markers=["^", "D"], markersize=10, ax=plt.gca()
)
Explanation: From our first look at the data, the difference between Master's and PhD in the management group is different than in the non-management group. This is an interaction between the two qualitative variables management,M and education,E. We can visualize this by first removing the effect of experience, then plotting the means within each of the 6 groups using interaction.plot.
End of explanation
try:
jobtest_table = pd.read_table("jobtest.table")
except: # do not have data already
url = "http://stats191.stanford.edu/data/jobtest.table"
jobtest_table = pd.read_table(url)
factor_group = jobtest_table.groupby(["MINORITY"])
fig, ax = plt.subplots(figsize=(6, 6))
colors = ["purple", "green"]
markers = ["o", "v"]
for factor, group in factor_group:
ax.scatter(
group["TEST"],
group["JPERF"],
color=colors[factor],
marker=markers[factor],
s=12 ** 2,
)
ax.set_xlabel("TEST")
ax.set_ylabel("JPERF")
min_lm = ols("JPERF ~ TEST", data=jobtest_table).fit()
print(min_lm.summary())
fig, ax = plt.subplots(figsize=(6, 6))
for factor, group in factor_group:
ax.scatter(
group["TEST"],
group["JPERF"],
color=colors[factor],
marker=markers[factor],
s=12 ** 2,
)
ax.set_xlabel("TEST")
ax.set_ylabel("JPERF")
fig = abline_plot(model_results=min_lm, ax=ax)
min_lm2 = ols("JPERF ~ TEST + TEST:MINORITY", data=jobtest_table).fit()
print(min_lm2.summary())
fig, ax = plt.subplots(figsize=(6, 6))
for factor, group in factor_group:
ax.scatter(
group["TEST"],
group["JPERF"],
color=colors[factor],
marker=markers[factor],
s=12 ** 2,
)
fig = abline_plot(
intercept=min_lm2.params["Intercept"],
slope=min_lm2.params["TEST"],
ax=ax,
color="purple",
)
fig = abline_plot(
intercept=min_lm2.params["Intercept"],
slope=min_lm2.params["TEST"] + min_lm2.params["TEST:MINORITY"],
ax=ax,
color="green",
)
min_lm3 = ols("JPERF ~ TEST + MINORITY", data=jobtest_table).fit()
print(min_lm3.summary())
fig, ax = plt.subplots(figsize=(6, 6))
for factor, group in factor_group:
ax.scatter(
group["TEST"],
group["JPERF"],
color=colors[factor],
marker=markers[factor],
s=12 ** 2,
)
fig = abline_plot(
intercept=min_lm3.params["Intercept"],
slope=min_lm3.params["TEST"],
ax=ax,
color="purple",
)
fig = abline_plot(
intercept=min_lm3.params["Intercept"] + min_lm3.params["MINORITY"],
slope=min_lm3.params["TEST"],
ax=ax,
color="green",
)
min_lm4 = ols("JPERF ~ TEST * MINORITY", data=jobtest_table).fit()
print(min_lm4.summary())
fig, ax = plt.subplots(figsize=(8, 6))
for factor, group in factor_group:
ax.scatter(
group["TEST"],
group["JPERF"],
color=colors[factor],
marker=markers[factor],
s=12 ** 2,
)
fig = abline_plot(
intercept=min_lm4.params["Intercept"],
slope=min_lm4.params["TEST"],
ax=ax,
color="purple",
)
fig = abline_plot(
intercept=min_lm4.params["Intercept"] + min_lm4.params["MINORITY"],
slope=min_lm4.params["TEST"] + min_lm4.params["TEST:MINORITY"],
ax=ax,
color="green",
)
# is there any effect of MINORITY on slope or intercept?
table5 = anova_lm(min_lm, min_lm4)
print(table5)
# is there any effect of MINORITY on intercept
table6 = anova_lm(min_lm, min_lm3)
print(table6)
# is there any effect of MINORITY on slope
table7 = anova_lm(min_lm, min_lm2)
print(table7)
# is it just the slope or both?
table8 = anova_lm(min_lm2, min_lm4)
print(table8)
Explanation: Minority Employment Data
End of explanation
try:
rehab_table = pd.read_csv("rehab.table")
except:
url = "http://stats191.stanford.edu/data/rehab.csv"
rehab_table = pd.read_table(url, delimiter=",")
rehab_table.to_csv("rehab.table")
fig, ax = plt.subplots(figsize=(8, 6))
fig = rehab_table.boxplot("Time", "Fitness", ax=ax, grid=False)
rehab_lm = ols("Time ~ C(Fitness)", data=rehab_table).fit()
table9 = anova_lm(rehab_lm)
print(table9)
print(rehab_lm.model.data.orig_exog)
print(rehab_lm.summary())
Explanation: One-way ANOVA
End of explanation
try:
kidney_table = pd.read_table("./kidney.table")
except:
url = "http://stats191.stanford.edu/data/kidney.table"
kidney_table = pd.read_csv(url, delim_whitespace=True)
Explanation: Two-way ANOVA
End of explanation
kidney_table.head(10)
Explanation: Explore the dataset
End of explanation
kt = kidney_table
plt.figure(figsize=(8, 6))
fig = interaction_plot(
kt["Weight"],
kt["Duration"],
np.log(kt["Days"] + 1),
colors=["red", "blue"],
markers=["D", "^"],
ms=10,
ax=plt.gca(),
)
Explanation: Balanced panel
End of explanation
kidney_lm = ols("np.log(Days+1) ~ C(Duration) * C(Weight)", data=kt).fit()
table10 = anova_lm(kidney_lm)
print(
anova_lm(ols("np.log(Days+1) ~ C(Duration) + C(Weight)", data=kt).fit(), kidney_lm)
)
print(
anova_lm(
ols("np.log(Days+1) ~ C(Duration)", data=kt).fit(),
ols("np.log(Days+1) ~ C(Duration) + C(Weight, Sum)", data=kt).fit(),
)
)
print(
anova_lm(
ols("np.log(Days+1) ~ C(Weight)", data=kt).fit(),
ols("np.log(Days+1) ~ C(Duration) + C(Weight, Sum)", data=kt).fit(),
)
)
Explanation: Anything available in the calling namespace is also available in the formula evaluation namespace
End of explanation
sum_lm = ols("np.log(Days+1) ~ C(Duration, Sum) * C(Weight, Sum)", data=kt).fit()
print(anova_lm(sum_lm))
print(anova_lm(sum_lm, typ=2))
print(anova_lm(sum_lm, typ=3))
nosum_lm = ols(
"np.log(Days+1) ~ C(Duration, Treatment) * C(Weight, Treatment)", data=kt
).fit()
print(anova_lm(nosum_lm))
print(anova_lm(nosum_lm, typ=2))
print(anova_lm(nosum_lm, typ=3))
Explanation: Sum of squares
Illustrates the use of different types of sums of squares (I, II, III)
and how the Sum contrast can be used to produce the same output between
the 3.
Types I and II are equivalent under a balanced design.
Do not use Type III with non-orthogonal contrast - ie., Treatment
End of explanation |
789 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Tensorboard in DeepChem
DeepChem Neural Networks models are built on top of tensorflow. Tensorboard is a powerful visualization tool in tensorflow for viewing your model architecture and performance.
In this tutorial we will show how to turn on tensorboard logging for our models, and go show the network architecture for some of our more popular models.
The first thing we have to do is load a dataset that we will monitor model performance over.
Step1: Now we will create our model with tensorboard on. All we have to do to turn tensorboard on is pass the tensorboard=True flag to the constructor of our model
Step2: Viewing the Tensorboard output
When tensorboard is turned on we log all the files needed for tensorboard in model.model_dir. To launch the tensorboard webserver we have to call in a terminal
bash
tensorboard --logdir models/ --port 6006
This will launch the tensorboard web server on your local computer on port 6006. Go to http
Step3: If you click "GRAPHS" at the top you can see a visual layout of the model. Here is what our GraphConvModel Model looks like | Python Code:
from IPython.display import Image, display
import deepchem as dc
from deepchem.molnet import load_tox21
from deepchem.models.graph_models import GraphConvModel
# Load Tox21 dataset
tox21_tasks, tox21_datasets, transformers = load_tox21(featurizer='GraphConv')
train_dataset, valid_dataset, test_dataset = tox21_datasets
Explanation: Using Tensorboard in DeepChem
DeepChem Neural Networks models are built on top of tensorflow. Tensorboard is a powerful visualization tool in tensorflow for viewing your model architecture and performance.
In this tutorial we will show how to turn on tensorboard logging for our models, and go show the network architecture for some of our more popular models.
The first thing we have to do is load a dataset that we will monitor model performance over.
End of explanation
# Construct the model with tensorboard on
model = GraphConvModel(len(tox21_tasks), mode='classification', tensorboard=True, model_dir='models')
# Fit the model
model.fit(train_dataset, nb_epoch=10)
Explanation: Now we will create our model with tensorboard on. All we have to do to turn tensorboard on is pass the tensorboard=True flag to the constructor of our model
End of explanation
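Before turning to tensorboard it can be useful to check the fitted model's accuracy. The snippet below uses DeepChem's standard metric API (mean ROC-AUC over the Tox21 tasks) and is included only as an optional sanity check.
import numpy as np
# Mean ROC-AUC across the 12 Tox21 classification tasks
metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean)
print("Validation scores:", model.evaluate(valid_dataset, [metric], transformers))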
display(Image(filename='assets/tensorboard_landing.png'))
Explanation: Viewing the Tensorboard output
When tensorboard is turned on we log all the files needed for tensorboard in model.model_dir. To launch the tensorboard web server we have to run the following in a terminal
bash
tensorboard --logdir models/ --port 6006
This will launch the tensorboard web server on your local computer on port 6006. Go to http://localhost:6006 in your web browser to look through tensorboard's UI.
The first thing you will see is a graph of the loss vs mini-batches. You can use this data to determine if your model is still improving its loss function over time, or to find out if your gradients are exploding!
End of explanation
display(Image(filename='assets/GraphConvArch.png'))
Explanation: If you click "GRAPHS" at the top you can see a visual layout of the model. Here is what our GraphConvModel Model looks like
End of explanation |
790 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python
(via xkcd)
What is Python?
Python is a modern, open source, object-oriented programming language, created by a Dutch programmer, Guido van Rossum. Officially, it is an interpreted scripting language (meaning that it is not compiled until it is run) for the C programming language; in fact, Python itself is coded in C. Frequently, it is compared to languages like Perl and Ruby. It offers the power and flexibility of lower level (i.e. compiled) languages, without the steep learning curve, and without most of the associated debugging pitfalls. The language is very clean and readable, and it is available for almost every modern computing platform.
Why use Python for scientific programming?
Python offers a number of advantages to scientists, both for experienced and novice programmers alike
Step1: Notice that, rather than using parentheses or brackets to enclose units of code (such as loops or conditional statements), python simply uses indentation. This relieves the programmer from worrying about a stray bracket causing her program to crash. Also, it forces programmers to code in neat blocks, making programs easier to read. So, for the following snippet of code
Step2: The first line initializes a variable to hold the sum, and the second initiates a loop, where each element in the data list is given the name x, and is used in the code that is indented below. The first line of subsequent code that is not indented signifies the end of the loop. It takes some getting used to, but works rather well.
Now lets call the function
Step3: Our specification of mean and var are by no means the most efficient implementations. Python provides some syntax and built-in functions to make things easier, and sometimes faster
Step4: In the new implementation of mean, we use the built-in function sum to reduce the function to a single line. Similarly, var employs a list comprehension syntax to make a more compact and efficient loop.
An alternative looping construct involves the map function. Suppose that we had a number of datasets, for each which we want to calculate the mean
Step5: This can be done using a classical loop
Step6: Or, more succinctly using map
Step7: Similarly we did not have to code these functions to get means and variances; the numpy package that we imported at the beginning of the module has similar methods
Step9: Data Types and Data Structures
In the introduction above, you have already seen some of the important Python data structures, including integers, floating-point numbers, lists and tuples. It is worthwhile, however, to quickly introduce all of the built-in data structures relevant to everyday Python programming.
Literals
The simplest data structure are literals, which appear directly in programs, and include most simple strings and numbers
Step10: There are a handful of constants that exist in the built-in-namespace. Importantly, there are boolean values True and False
Step11: Either of these can be negated using not.
Step12: In addition, there is a None type that represents the absence of a value.
Step13: All the arithmetic operators are available in Python
Step14: Compatibility Corner
Step15: There are several Python data structures that are used to encapsulate several elements in a set or sequence.
Tuples
The first sequence data structure is the tuple, which simply an immutable, ordered sequence of elements. These elements may be of arbitrary and mixed types. The tuple is specified by a comma-separated sequence of items, enclosed by parentheses
Step16: Individual elements in a tuple can be accessed by indexing. This amounts to specifying the appropriate element index enclosed in square brackets following the tuple name
Step17: Notice that the index is zero-based, meaning that the first index is zero, rather than one (in contrast to R). So above, 5 retrieves the sixth item, not the fifth.
Two or more sequential elements can be indexed by slicing
Step18: This retrieves the third, fourth and fifth (but not the sixth!) elements -- i.e., up to, but not including, the final index. One may also slice or index starting from the end of a sequence, by using negative indices
Step19: As you can see, this returns all elements except the final two.
You can add an optional third element to the slice, which specifies a step value. For example, the following returns every other element of foo, starting with the second element of the tuple.
Step20: The elements of a tuple, as defined above, are immutable. Therefore, Python takes offense if you try to change them
Step21: The TypeError is called an exception, which in this case indicates that you have tried to perform an action on a type that does not support it. We will learn about handling exceptions further along.
Finally, the tuple() function can create a tuple from any sequence
Step22: Why does this happen? Because in Python, strings are considered a sequence of characters.
Lists
Lists complement tuples in that they are a mutable, ordered sequence of elements. To distinguish them from tuples, they are enclosed by square brackets
Step23: Elements of a list can be arbitrarily substituted by assigning new values to the associated index
Step24: Operations on lists are somewhat unusual. For example, multiplying a list by an integer does not multiply each element by that integer, as you might expect, but rather
Step25: Which is simply three copies of the list, concatenated together. This is useful for generating lists with identical elements
Step26: (incidentally, this works with tuples as well)
Step27: Since lists are mutable, they retain several methods, some of which mutate the list. For example
Step28: Some methods, however, do not change the list
Step29: Dictionaries
One of the more flexible built-in data structures is the dictionary. A dictionary maps a collection of values to a set of associated keys. These mappings are mutable, and unlike lists or tuples, are unordered. Hence, rather than using the sequence index to return elements of the collection, the corresponding key must be used. Dictionaries are specified by a comma-separated sequence of keys and values, which are separated in turn by colons. The dictionary is enclosed by curly braces.
For example
Step30: Notice that a indexes an integer, b a tuple, and foo a string (now you know what foo means). Hence, a dictionary is a sort of associative array. Some languages refer to such a structure as a hash or key-value store.
As with lists, being mutable, dictionaries have a variety of methods and functions that take dictionary arguments. For example, some dictionary functions include
Step31: Some useful dictionary methods are
Step32: When we try to index a value that does not exist, it raises a KeyError.
Step33: If we would rather not get the error, we can use the get method, which returns None if the value is not present.
Step34: Custom return values can be specified with a second argument.
Step35: It is easy to remove items from a dictionary.
Step36: Sets
If we don't require labels for our unordered collection of values, we can use a set. Sets store unique collections of values.
Step37: We can also use the set constructor.
Step38: As we would expect, we can perform set operations.
Step39: The set function is useful for returning the unique elements of a data structure. For example, recall bar | Python Code:
# Import modules you might use
import numpy as np
# Some data, in a list
my_data = [12, 5, 17, 8, 9, 11, 21]
# Function for calulating the mean of some data
def mean(data):
# Initialize sum to zero
sum_x = 0.0
# Loop over data
for x in data:
# Add to sum
sum_x += x
# Divide by number of elements in list, and return
return sum_x / len(data)
Explanation: Introduction to Python
(via xkcd)
What is Python?
Python is a modern, open source, object-oriented programming language, created by a Dutch programmer, Guido van Rossum. Officially, it is an interpreted scripting language (meaning that it is not compiled until it is run) for the C programming language; in fact, Python itself is coded in C. Frequently, it is compared to languages like Perl and Ruby. It offers the power and flexibility of lower level (i.e. compiled) languages, without the steep learning curve, and without most of the associated debugging pitfalls. The language is very clean and readable, and it is available for almost every modern computing platform.
Why use Python for scientific programming?
Python offers a number of advantages to scientists, both for experienced and novice programmers alike:
Powerful and easy to use
Python is simultaneously powerful, flexible and easy to learn and use (in general, these qualities are traded off for a given programming language). Anything that can be coded in C, FORTRAN, or Java can be done in Python, almost always in fewer lines of code, and with fewer debugging headaches. Its standard library is extremely rich, including modules for string manipulation, regular expressions, file compression, mathematics, profiling and debugging (to name only a few). Unnecessary language constructs, such as END statements and brackets are absent, making the code terse, efficient, and easy to read. Finally, Python is object-oriented, which is an important programming paradigm particularly well-suited to scientific programming, which allows data structures to be abstracted in a natural way.
Interactive
Python may be run interactively on the command line, in much the same way as Octave or S-Plus/R. Rather than compiling and running a particular program, commands may entered serially followed by the Return key. This is often useful for mathematical programming and debugging.
Extensible
Python is often referred to as a “glue” language, meaning that it is a useful in a mixed-language environment. Frequently, programmers must interact with colleagues that operate in other programming languages, or use significant quantities of legacy code that would be problematic or expensive to re-code. Python was designed to interact with other programming languages, and in many cases C or FORTRAN code can be compiled directly into Python programs (using utilities such as f2py or weave). Additionally, since Python is an interpreted language, it can sometimes be slow relative to its compiled cousins. In many cases this performance deficit is due to a short loop of code that runs thousands or millions of times. Such bottlenecks may be removed by coding a function in FORTRAN, C or Cython, and compiling it into a Python module.
Third-party modules
There is a vast body of Python modules created outside the auspices of the Python Software Foundation. These include utilities for database connectivity, mathematics, statistics, and charting/plotting. Some notables include:
NumPy: Numerical Python (NumPy) is a set of extensions that provides the ability to specify and manipulate array data structures. It provides array manipulation and computational capabilities similar to those found in Matlab or Octave.
SciPy: An open source library of scientific tools for Python, SciPy supplements the NumPy module. SciPy gathers a variety of high-level science and engineering modules together as a single package. SciPy includes modules for graphics and plotting, optimization, integration, special functions, signal and image processing, genetic algorithms, ODE solvers, and others.
Matplotlib: Matplotlib is a python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Its syntax is very similar to Matlab.
Pandas: A module that provides high-performance, easy-to-use data structures and data analysis tools. In particular, the DataFrame class is useful for spreadsheet-like representation and mannipulation of data. Also includes high-level plotting functionality.
IPython: An enhanced Python shell, designed to increase the efficiency and usability of coding, testing and debugging Python. It includes both a Qt-based console and an interactive HTML notebook interface, both of which feature multiline editing, interactive plotting and syntax highlighting.
Free and open
Python is released on all platforms under the GNU public license, meaning that the language and its source is freely distributable. Not only does this keep costs down for scientists and universities operating under a limited budget, but it also frees programmers from licensing concerns for any software they may develop. There is little reason to buy expensive licenses for software such as Matlab or Maple, when Python can provide the same functionality for free!
Sample code: mean and standard deviation
Here is a quick example of a Python program. We will call it stats.py, because Python programs typically end with the .py suffix. This code consists of some fake data, and two functions mean and var which calculate mean and variance, respectively. Python can be internally documented by adding lines beginning with the # symbol, or with simple strings enclosed in quotation marks. Here is the code:
End of explanation
sum_x = 0
# Loop over data
for x in my_data:
# Add to sum
sum_x += x
print(sum_x)
Explanation: Notice that, rather than using parentheses or brackets to enclose units of code (such as loops or conditional statements), python simply uses indentation. This relieves the programmer from worrying about a stray bracket causing her program to crash. Also, it forces programmers to code in neat blocks, making programs easier to read. So, for the following snippet of code:
End of explanation
mean(my_data)
Explanation: The first line initializes a variable to hold the sum, and the second initiates a loop, where each element in the data list is given the name x, and is used in the code that is indented below. The first line of subsequent code that is not indented signifies the end of the loop. It takes some getting used to, but works rather well.
Now lets call the function:
End of explanation
# Function for calulating the mean of some data
def mean(data):
    # Call sum, then divide by the number of elements
return sum(data)/len(data)
# Function for calculating variance of data
def var(data):
# Get mean of data from function above
x_bar = mean(data)
# Do sum of squares in one line
sum_squares = sum([(x - x_bar)**2 for x in data])
# Divide by n-1 and return
return sum_squares/(len(data)-1)
Explanation: Our specification of mean and var are by no means the most efficient implementations. Python provides some syntax and built-in functions to make things easier, and sometimes faster:
End of explanation
x = (45, 95, 100, 47, 92, 43)
y = (65, 73, 10, 82, 6, 23)
z = (56, 33, 110, 56, 86, 88)
datasets = (x,y,z)
datasets
Explanation: In the new implementation of mean, we use the built-in function sum to reduce the function to a single line. Similarly, var employs a list comprehension syntax to make a more compact and efficient loop.
An alternative looping construct involves the map function. Suppose that we had a number of datasets, for each which we want to calculate the mean:
End of explanation
means = []
for d in datasets:
means.append(mean(d))
means
Explanation: This can be done using a classical loop:
End of explanation
list(map(mean, datasets))
Explanation: Or, more succinctly using map:
End of explanation
np.mean(datasets, axis=1)
Explanation: Similarly we did not have to code these functions to get means and variances; the numpy package that we imported at the beginning of the module has similar methods:
End of explanation
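numpy's var works the same way; ddof=1 gives the sample variance computed by our var function above.
np.var(datasets, axis=1, ddof=1)  # sample variance (divides by n-1)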
42 # Integer
0.002243 # Floating-point
5.0J # Imaginary
'foo'
"bar" # Several string types
s = """Multi-line
string"""
Explanation: Data Types and Data Structures
In the introduction above, you have already seen some of the important Python data structures, including integers, floating-point numbers, lists and tuples. It is worthwhile, however, to quickly introduce all of the built-in data structures relevant to everyday Python programming.
Literals
The simplest data structures are literals, which appear directly in programs, and include most simple strings and numbers:
End of explanation
type(True)
Explanation: There are a handful of constants that exist in the built-in namespace. Importantly, there are the boolean values True and False
End of explanation
not False
Explanation: Either of these can be negated using not.
End of explanation
x = None
print(x)
Explanation: In addition, there is a None type that represents the absence of a value.
End of explanation
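The idiomatic way to test for None is the identity operator is:
x is None  # identity check, preferred over == for None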
15/4
Explanation: All the arithmetic operators are available in Python:
End of explanation
(14 - 5) * 4
Explanation: Compatibility Corner: Note that when using Python 2, you would get a different answer! Dividing an integer by an integer will yield another integer. Though this is "correct", it is not intuitive, and hence was changed in Python 3.
Operator precedence can be enforced using parentheses:
End of explanation
(34,90,56) # Tuple with three elements
(15,) # Tuple with one element
(12, 'foobar') # Mixed tuple
Explanation: There are several Python data structures that are used to encapsulate several elements in a set or sequence.
Tuples
The first sequence data structure is the tuple, which is simply an immutable, ordered sequence of elements. These elements may be of arbitrary and mixed types. The tuple is specified by a comma-separated sequence of items, enclosed by parentheses:
End of explanation
foo = (5,7,2,8,2,-1,0,4)
foo[0]
Explanation: Individual elements in a tuple can be accessed by indexing. This amounts to specifying the appropriate element index enclosed in square brackets following the tuple name:
End of explanation
foo[2:5]
Explanation: Notice that the index is zero-based, meaning that the first index is zero, rather than one (in contrast to R). So, for example, an index of 5 would retrieve the sixth item, not the fifth.
Two or more sequential elements can be indexed by slicing:
End of explanation
foo[:-2]
Explanation: This retrieves the third, fourth and fifth (but not the sixth!) elements -- i.e., up to, but not including, the final index. One may also slice or index starting from the end of a sequence, by using negative indices:
End of explanation
foo[1::2]
Explanation: As you can see, this returns all elements except the final two.
You can add an optional third element to the slice, which specifies a step value. For example, the following returns every other element of foo, starting with the second element of the tuple.
End of explanation
a = (1,2,3)
a[0] = 6
Explanation: The elements of a tuple, as defined above, are immutable. Therefore, Python takes offense if you try to change them:
End of explanation
tuple('foobar')
Explanation: The TypeError is called an exception, which in this case indicates that you have tried to perform an action on a type that does not support it. We will learn about handling exceptions further along.
Finally, the tuple() function can create a tuple from any sequence:
End of explanation
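As a quick preview of exception handling (covered in more detail later), the TypeError above can be caught with a try/except block:
try:
    a[0] = 6          # tuples are immutable, so this raises TypeError
except TypeError as err:
    print('Caught:', err)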
# List with five elements
[90, 43.7, 56, 1, -4]
# List with one element
[100]
# Empty list
[]
Explanation: Why does this happen? Because in Python, strings are considered a sequence of characters.
Lists
Lists complement tuples in that they are a mutable, ordered sequence of elements. To distinguish them from tuples, they are enclosed by square brackets:
End of explanation
bar = [5,8,4,2,7,9,4,1]
bar[3] = -5
bar
Explanation: Elements of a list can be arbitrarily substituted by assigning new values to the associated index:
End of explanation
bar * 3
Explanation: Operations on lists are somewhat unusual. For example, multiplying a list by an integer does not multiply each element by that integer, as you might expect, but rather:
End of explanation
[0]*10
Explanation: Which is simply three copies of the list, concatenated together. This is useful for generating lists with identical elements:
End of explanation
(3,)*10
Explanation: (incidentally, this works with tuples as well)
End of explanation
bar.extend(foo) # Adds foo to the end of bar (in-place)
bar
bar.append(5) # Appends 5 to the end of bar
bar
bar.insert(0, 4) # Inserts 4 at index 0
bar
bar.remove(7) # Removes the first occurrence of 7
bar
bar.remove(100) # Oops! Doesn’t exist
bar.pop(4) # Removes and returns indexed item
bar.reverse() # Reverses bar in place
bar
bar.sort() # Sorts bar in place
bar
Explanation: Since lists are mutable, they retain several methods, some of which mutate the list. For example:
End of explanation
bar.count(7) # Counts occurrences of 7 in bar
bar.index(7) # Returns index of first 7 in bar
Explanation: Some methods, however, do not change the list:
End of explanation
my_dict = {'a':16, 'b':(4,5), 'foo':'''(noun) a term used as a universal substitute
for something real, especially when discussing technological ideas and
problems'''}
my_dict
my_dict['b']
Explanation: Dictionaries
One of the more flexible built-in data structures is the dictionary. A dictionary maps a collection of values to a set of associated keys. These mappings are mutable, and unlike lists or tuples, are unordered. Hence, rather than using the sequence index to return elements of the collection, the corresponding key must be used. Dictionaries are specified by a comma-separated sequence of keys and values, which are separated in turn by colons. The dictionary is enclosed by curly braces.
For example:
End of explanation
len(my_dict)
# Checks to see if ‘a’ is in my_dict
'a' in my_dict
Explanation: Notice that a indexes an integer, b a tuple, and foo a string (now you know what foo means). Hence, a dictionary is a sort of associative array. Some languages refer to such a structure as a hash or key-value store.
As with lists, being mutable, dictionaries have a variety of methods and functions that take dictionary arguments. For example, some dictionary functions include:
End of explanation
# Returns a copy of the dictionary
my_dict.copy()
# Returns key/value pairs as a view
my_dict.items()
# Returns a view of the keys
my_dict.keys()
# Returns a view of the values
my_dict.values()
Explanation: Some useful dictionary methods are:
End of explanation
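Two other commonly used methods are update and setdefault; here they are applied to a copy so that my_dict itself is left unchanged for the examples below.
d = my_dict.copy()
d.update({'new_key': 42})    # merges another mapping in place
d.setdefault('missing', [])  # inserts a default only if the key is absent
d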
my_dict['c']
Explanation: When we try to index a value that does not exist, it raises a KeyError.
End of explanation
my_dict.get('c')
Explanation: If we would rather not get the error, we can use the get method, which returns None if the value is not present.
End of explanation
my_dict.get('c', -1)
Explanation: Custom return values can be specified with a second argument.
End of explanation
my_dict.popitem()
# Empties dictionary
my_dict.clear()
my_dict
Explanation: It is easy to remove items from a dictionary.
End of explanation
my_set = {4, 5, 5, 7, 8}
my_set
Explanation: Sets
If we don't require labels for our unordered collection of values, we can use a set. Sets store unique collections of values.
End of explanation
empty_set = set()
empty_set
empty_set.add(-5)
another_set = empty_set
another_set
Explanation: We can also use the set constructor.
End of explanation
my_set | another_set
my_set & another_set
my_set - {4}
Explanation: As we would expect, we can perform set operations.
End of explanation
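The same set operations are also available as methods:
my_set.union(another_set)
my_set.intersection(another_set)
my_set.difference({4})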
bar
set(bar)
Explanation: The set function is useful for returning the unique elements of a data structure. For example, recall bar:
End of explanation |
791 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step5: Encapsulation, part 2
Step6: 2. Instantiating objects
We can now instantiate a music-maker object. We do this by calling the music-maker's initializer, to which we pass counts, denominators and pitches
Step7: Finally pass in time signatures and ask our music-maker to make a staff
Step8: 3. Making musical texture with multiple instances of a single class
Because we can create multiple, variously initialized instances of the same class, it's possible to create both minimal and varied polyphonic textures with just a single class definition. First we initialize four different makers
Step9: Let's use these four music-makers to create a duo. We can set up a score with two staves and generate the music according to a single set of time signatures
Step10: Next, we loop through four makers, appending each maker's music to our staves as we go. We'll generate music for the top and bottom staff independently
Step11: 4. Making the score
Now we can make our final score and add some formatting | Python Code:
import abjad
from abjadext import rmakers  # assumed import path for the rmakers extension used below


class MusicMaker:
def __init__(
self,
counts,
denominator,
pitches,
clef,
):
self.counts = counts
self.denominator = denominator
self.pitches = pitches
self.clef = clef
def make_notes_and_rests(self, counts, denominator, time_signatures, clef):
        """Makes notes and rests."""
durations = [_.duration for _ in time_signatures]
total_duration = sum(durations)
talea = rmakers.Talea(counts, denominator)
talea_index = 0
leaves = []
current_duration = abjad.Duration(0)
while current_duration < total_duration:
leaf_duration = talea[talea_index]
if 0 < leaf_duration:
pitch = abjad.NamedPitch("c'")
else:
pitch = None
leaf_duration = abs(leaf_duration)
if total_duration < (leaf_duration + current_duration):
leaf_duration = total_duration - current_duration
leaves_ = abjad.LeafMaker()([pitch], [leaf_duration])
leaves.extend(leaves_)
current_duration += leaf_duration
talea_index += 1
staff = abjad.Staff(leaves)
clef = abjad.Clef(clef)
abjad.attach(clef, staff[0])
return staff
    def impose_time_signatures(self, staff, time_signatures):
        """Imposes time signatures."""
selections = abjad.mutate.split(staff[:], time_signatures, cyclic=True)
for time_signature, selection in zip(time_signatures, selections):
abjad.attach(time_signature, selection[0])
measure_selections = abjad.select(staff).leaves().group_by_measure()
for time_signature, measure_selection in zip(time_signatures, measure_selections):
abjad.Meter.rewrite_meter(measure_selection, time_signature)
def pitch_notes(self, staff, pitches):
        """Pitches notes."""
pitches = abjad.CyclicTuple(pitches)
plts = abjad.select(staff).logical_ties(pitched=True)
for i, plt in enumerate(plts):
pitch = pitches[i]
for note in plt:
note.written_pitch = pitch
def attach_indicators(self, staff):
        """Attaches indicators to runs."""
for selection in abjad.select(staff).runs():
articulation = abjad.Articulation("accent")
abjad.attach(articulation, selection[0])
if 3 <= len(selection):
abjad.hairpin("p < f", selection)
else:
dynamic = abjad.Dynamic("ppp")
abjad.attach(dynamic, selection[0])
abjad.override(staff).dynamic_line_spanner.staff_padding = 4
def make_staff(self, time_signatures):
        """Makes staff."""
staff = self.make_notes_and_rests(
self.counts,
self.denominator,
time_signatures,
self.clef
)
self.impose_time_signatures(staff, time_signatures)
self.pitch_notes(staff, self.pitches)
self.attach_indicators(staff)
return staff
Explanation: Encapsulation, part 2: classes
In the previous notebook we encapsulated our code in functions. Functions model programming tasks as a collection of verbs (actions): data flows into and out of a series of functions until the desired result has been achieved. Classes, on the other hand, model programming tasks as a collection of nouns (objects). Objects have data (attributes) and implement methods to modify the data they contain. In this notebook we'll encapsulate our music-generating functions in a class.
1. The class definition
The code below defines a class. An object-oriented class is like a template that tells a programming language how to construct instances of itself. ("Class instance" and "object" mean the same thing in an object-oriented context.) This means that after Python reads our music-maker class definition, we can instantiate as many music-maker objects as we want. (More on this below.) The four functions we defined in the previous notebook correspond to the four methods defined here. Functions and methods are both introduced with Python's def keyword. The primary difference between functions and methods is that functions can be defined at the top level of a module while methods are always defined within (and "bound to") a class. Classes provide an even higher level of encapsulation than functions, because classes encapsulate methods.
End of explanation
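A minimal, Abjad-free sketch of the function-versus-method distinction described above (all names here are illustrative only):
def transpose(pitch, interval):
    # a module-level function: a free-standing verb
    return pitch + interval

class Transposer:
    # the same behavior encapsulated as a method bound to a class
    def __init__(self, interval):
        self.interval = interval

    def transpose(self, pitch):
        return pitch + self.interval

transpose(60, 7)             # 67
Transposer(7).transpose(60)  # also 67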
pairs = [(3, 4), (5, 16), (3, 8), (4, 4)]
time_signatures = [abjad.TimeSignature(_) for _ in pairs]
counts = [1, 2, -3, 4]
denominator = 16
string = "d' fs' a' d'' g' ef'"
pitches = abjad.CyclicTuple(string.split())
clef = "treble"
maker = MusicMaker(counts, denominator, pitches, clef)
Explanation: 2. Instantiating objects
We can now instantiate a music-maker object. We do this by calling the music-maker's initializer, to which we pass counts, denominators and pitches:
End of explanation
staff = maker.make_staff(time_signatures)
abjad.show(staff)
Explanation: Finally pass in time signatures and ask our music-maker to make a staff:
End of explanation
fast_music_maker = MusicMaker(
counts=[1, 1, 1, 1, 1, -1],
denominator=16,
pitches=[0, 1],
clef="treble"
)
slow_music_maker = MusicMaker(
counts=[3, 4, 5, -1],
denominator=4,
pitches=["b,", "bf,", "gf,"],
clef="bass",
)
stuttering_music_maker = MusicMaker(
counts=[1, 1, -7],
denominator=16,
pitches=[23],
clef="treble"
)
sparkling_music_maker = MusicMaker(
counts=[1, -5, 1, -9, 1, -5],
denominator=16,
pitches=[38, 39, 40],
clef="treble^8",
)
Explanation: 3. Making musical texture with multiple instances of a single class
Because we can create multiple, variously initialized instances of the same class, it's possible to create both minimal and varied polyphonic textures with just a single class definition. First we initialize four different makers:
End of explanation
upper_staff = abjad.Staff()
lower_staff = abjad.Staff()
pairs = [(3, 4), (5, 16), (3, 8), (4, 4)]
time_signatures = [abjad.TimeSignature(_) for _ in pairs]
Explanation: Let's use these four music-makers to create a duo. We can set up a score with two staves and generate the music according to a single set of time signatures:
End of explanation
makers = (
fast_music_maker,
slow_music_maker,
stuttering_music_maker,
sparkling_music_maker,
)
for maker in makers:
staff = maker.make_staff(time_signatures)
selection = staff[:]
staff[:] = []
upper_staff.extend(selection)
makers = (
slow_music_maker,
slow_music_maker,
stuttering_music_maker,
fast_music_maker,
)
for maker in makers:
staff = maker.make_staff(time_signatures)
selection = staff[:]
staff[:] = []
lower_staff.extend(selection)
Explanation: Next, we loop through four makers, appending each maker's music to our staves as we go. We'll generate music for the top and bottom staff independently:
End of explanation
piano_staff = abjad.StaffGroup(
[upper_staff, lower_staff],
lilypond_type="PianoStaff",
)
abjad.override(upper_staff).dynamic_line_spanner.staff_padding = 4
abjad.override(lower_staff).dynamic_line_spanner.staff_padding = 4
score = abjad.Score([piano_staff])
bar_line = abjad.BarLine("|.")
last_leaf = abjad.select(lower_staff).leaf(-1)
abjad.attach(bar_line, last_leaf)
lilypond_file = abjad.LilyPondFile.new(score)
lilypond_file.header_block.composer = "Abjad Summer Course"
string = r"\markup \fontsize #3 \bold ENCAPSULATION"
title_markup = abjad.Markup(string, literal=True)
lilypond_file.header_block.title = title_markup
lilypond_file.header_block.subtitle = "working with classes"
abjad.show(lilypond_file)
Explanation: 4. Making the score
Now we can make our final score and add some formatting:
End of explanation |
792 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Get the list of motifsets that are available
Step1: We have two urls for getting a motifset | Python Code:
output = requests.get(server_url + '/motifdb/list_motifsets')
motifset_list = output.json()
print(motifset_list)
Explanation: Get the list of motifsets that are available
End of explanation
url = server_url + '/motifdb/initialise_api'
client = requests.session()
token = client.get(url).json()['token']
url = server_url + '/motifdb/get_motifset/'
data = {'csrfmiddlewaretoken': token}
data['motifset_id_list'] = (motifset_list['massbank_binned_005'],motifset_list['gnps_binned_005'])
print(data['motifset_id_list'])
data['filter'] = "True"
# data['filter_threshold'] = 0.95 # Default value - not required
output = client.post(url,data = data).json()
print(len(output['motifs']),len(output['metadata']))
Explanation: We have two urls for getting a motifset:
/motifdb/get_motifset/<ID>
for just getting the motifs for one
/motifdb/get_metadata/<ID>
for just getting the metadata for one
/motifdb/get_motifset
for POST requests where you can get multiple and do the filtering (see below). For this one, you also need to obtain a valid csrf token from the server
End of explanation |
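A minimal sketch of calling the two GET endpoints listed above for a single motifset, assuming each one simply takes the numeric ID appended to the URL (server_url and motifset_list come from the earlier cells):
one_id = motifset_list['massbank_binned_005']
motifs = requests.get(server_url + '/motifdb/get_motifset/' + str(one_id)).json()
metadata = requests.get(server_url + '/motifdb/get_metadata/' + str(one_id)).json()
print(len(motifs), len(metadata))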
793 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experimental
Step1: 1. Write Eager code that is fast and scalable
TF.Eager gives you more flexibility while coding, but at the cost of losing the benefits of TensorFlow graphs. For example, Eager does not currently support distributed training, exporting models, and a variety of memory and computation optimizations.
AutoGraph gives you the best of both worlds
Step2: ... into a TF graph-building function
Step3: You can then use the converted function as you would any regular TF op -- you can pass Tensor arguments and it will return Tensors
Step4: 2. Case study
Step5: Try replacing the continue in the above code with break -- Autograph supports that as well!
The Python code above is much more readable than the matching graph code. Autograph takes care of tediously converting every piece of Python code into the matching TensorFlow graph version for you, so that you can quickly write maintainable code, but still benefit from the optimizations and deployment benefits of graphs.
Let's try some other useful Python constructs, like print and assert. We automatically convert Python assert statements into the equivalent tf.Assert code.
Step6: You can also use print functions in-graph
Step7: Appending to lists also works, with a few modifications
Step9: And all of these functionalities, and more, can be composed into more complicated code
Step10: 3. Case study
Step11: First, we'll define a small three-layer neural network using the Keras API
Step12: Let's connect the model definition (here abbreviated as m) to a loss function, so that we can train our model.
Step13: Now the final piece of the problem specification (before loading data, and clicking everything together) is backpropagating the loss through the model, and optimizing the weights using the gradient.
Step14: These are some utility functions to download data and generate batches for training
Step15: This function specifies the main training loop. We instantiate the model (using the code above), instantiate an optimizer (here we'll use SGD with momentum, nothing too fancy), and we'll instantiate some lists to keep track of training and test loss and accuracy over time.
In the loop inside this function, we'll grab a batch of data, apply an update to the weights of our model to improve its performance, and then record its current training loss and accuracy. Every so often, we'll log some information about training as well.
Step16: Everything is ready to go, let's train the model and plot its performance!
Step20: 4. Case study
Step23: Next, we set up the RNNColorbot model, which is very similar to the one we used in the main exercise.
Autograph doesn't fully support classes yet (but it will soon!), so we'll write the model using simple functions.
Step24: The train and test functions are also similar to the ones used in the Eager notebook. Since the network requires a fixed batch size, we'll train in a single shot, rather than by epoch.
Step25: Finally, we add code to run inference on a single input, which we'll read from the input.
Note the do_not_convert annotation that lets us disable conversion for certain functions and run them as a py_func instead, so you can still call them from compiled code.
Step27: Finally, we put everything together.
Note that the entire training and testing code is all compiled into a single op (tf_train_model) that you only execute once! We also still use a sess.run loop for the inference part, because that requires keyboard input. | Python Code:
# Install TensorFlow; note that Colab notebooks run remotely, on virtual
# instances provided by Google.
!pip install -U -q tf-nightly
import os
import time
import tensorflow as tf
from tensorflow.contrib import autograph
import matplotlib.pyplot as plt
import numpy as np
import six
from google.colab import widgets
Explanation: Experimental: TF AutoGraph
TensorFlow Dev Summit, 2018.
This interactive notebook demonstrates AutoGraph, an experimental source-code transformation library to automatically convert Python, TensorFlow and NumPy code to TensorFlow graphs.
Note: this is pre-alpha software! The notebook works best with Python 2, for now.
Table of Contents
Write Eager code that is fast and scalable.
Case study: complex control flow.
Case study: training MNIST with Keras.
Case study: building an RNN.
End of explanation
def g(x):
if x > 0:
x = x * x
else:
x = 0
return x
Explanation: 1. Write Eager code that is fast and scalable
TF.Eager gives you more flexibility while coding, but at the cost of losing the benefits of TensorFlow graphs. For example, Eager does not currently support distributed training, exporting models, and a variety of memory and computation optimizations.
AutoGraph gives you the best of both worlds: you can write your code in an Eager style, and we will automatically transform it into the equivalent TF graph code. The graph code can be executed eagerly (as a single op), included as part of a larger graph, or exported.
For example, AutoGraph can convert a function like this:
End of explanation
print(autograph.to_code(g))
Explanation: ... into a TF graph-building function:
End of explanation
tf_g = autograph.to_graph(g)
with tf.Graph().as_default():
g_ops = tf_g(tf.constant(9))
with tf.Session() as sess:
tf_g_result = sess.run(g_ops)
print('g(9) = %s' % g(9))
print('tf_g(9) = %s' % tf_g_result)
Explanation: You can then use the converted function as you would any regular TF op -- you can pass Tensor arguments and it will return Tensors:
End of explanation
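Because the converted function returns ordinary Tensors, it can also be composed with other ops as part of a larger graph; a quick sketch:
with tf.Graph().as_default():
  doubled = 2 * tf_g(tf.constant(9))
  with tf.Session() as sess:
    print(sess.run(doubled))  # 2 * g(9) = 162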
def sum_even(numbers):
s = 0
for n in numbers:
if n % 2 > 0:
continue
s += n
return s
tf_sum_even = autograph.to_graph(sum_even)
with tf.Graph().as_default():
with tf.Session() as sess:
result = sess.run(tf_sum_even(tf.constant([10, 12, 15, 20])))
print('Sum of even numbers: %s' % result)
# Uncomment the line below to print the generated graph code
# print(autograph.to_code(sum_even))
Explanation: 2. Case study: complex control flow
Autograph can convert a large subset of the Python language into graph-equivalent code, and we're adding new supported language features all the time. In this section, we'll give you a taste of some of the functionality in AutoGraph.
AutoGraph will automatically convert most Python control flow statements into their graph equivalent.
We support common statements like while, for, if, break, return and more. You can even nest them as much as you like. Imagine trying to write the graph version of this code by hand:
End of explanation
def f(x):
assert x != 0, 'Do not pass zero!'
return x * x
tf_f = autograph.to_graph(f)
with tf.Graph().as_default():
with tf.Session() as sess:
try:
print(sess.run(tf_f(tf.constant(0))))
except tf.errors.InvalidArgumentError as e:
print('Got error message: %s' % e.message)
# Uncomment the line below to print the generated graph code
# print(autograph.to_code(f))
Explanation: Try replacing the continue in the above code with break -- Autograph supports that as well!
The Python code above is much more readable than the matching graph code. Autograph takes care of tediously converting every piece of Python code into the matching TensorFlow graph version for you, so that you can quickly write maintainable code, but still benefit from the optimizations and deployment benefits of graphs.
Let's try some other useful Python constructs, like print and assert. We automatically convert Python assert statements into the equivalent tf.Assert code.
End of explanation
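Following the suggestion above, a quick sketch of the same loop with break instead of continue, converted the same way (it sums the even prefix and stops at the first odd number):
def sum_even_prefix(numbers):
  s = 0
  for n in numbers:
    if n % 2 > 0:
      break
    s += n
  return s

tf_sum_even_prefix = autograph.to_graph(sum_even_prefix)

with tf.Graph().as_default():
  with tf.Session() as sess:
    print(sess.run(tf_sum_even_prefix(tf.constant([10, 12, 15, 20]))))  # expect 22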
def print_sign(n):
if n >= 0:
print(n, 'is positive!')
else:
print(n, 'is negative!')
return n
tf_print_sign = autograph.to_graph(print_sign)
with tf.Graph().as_default():
with tf.Session() as sess:
sess.run(tf_print_sign(tf.constant(1)))
# Uncomment the line below to print the generated graph code
# print(autograph.to_code(print_sign))
Explanation: You can also use print functions in-graph:
End of explanation
def f(n):
numbers = []
# We ask you to tell us about the element dtype.
autograph.set_element_type(numbers, tf.int32)
for i in range(n):
numbers.append(i)
return autograph.stack(numbers) # Stack the list so that it can be used as a Tensor
tf_f = autograph.to_graph(f)
with tf.Graph().as_default():
with tf.Session() as sess:
print(sess.run(tf_f(tf.constant(5))))
# Uncomment the line below to print the generated graph code
# print(autograph.to_code(f))
Explanation: Appending to lists also works, with a few modifications:
End of explanation
def print_primes(n):
  """Returns all the prime numbers less than n."""
assert n > 0
primes = []
autograph.set_element_type(primes, tf.int32)
for i in range(2, n):
is_prime = True
for k in range(2, i):
if i % k == 0:
is_prime = False
break
if not is_prime:
continue
primes.append(i)
all_primes = autograph.stack(primes)
print('The prime numbers less than', n, 'are:')
print(all_primes)
return tf.no_op()
tf_print_primes = autograph.to_graph(print_primes)
with tf.Graph().as_default():
with tf.Session() as sess:
n = tf.constant(50)
sess.run(tf_print_primes(n))
# Uncomment the line below to print the generated graph code
# print(autograph.to_code(print_primes))
Explanation: And all of these functionalities, and more, can be composed into more complicated code:
End of explanation
import gzip
import shutil
from six.moves import urllib
def download(directory, filename):
filepath = os.path.join(directory, filename)
if tf.gfile.Exists(filepath):
return filepath
if not tf.gfile.Exists(directory):
tf.gfile.MakeDirs(directory)
url = 'https://storage.googleapis.com/cvdf-datasets/mnist/' + filename + '.gz'
zipped_filepath = filepath + '.gz'
print('Downloading %s to %s' % (url, zipped_filepath))
urllib.request.urlretrieve(url, zipped_filepath)
with gzip.open(zipped_filepath, 'rb') as f_in, open(filepath, 'wb') as f_out:
shutil.copyfileobj(f_in, f_out)
os.remove(zipped_filepath)
return filepath
def dataset(directory, images_file, labels_file):
images_file = download(directory, images_file)
labels_file = download(directory, labels_file)
def decode_image(image):
# Normalize from [0, 255] to [0.0, 1.0]
image = tf.decode_raw(image, tf.uint8)
image = tf.cast(image, tf.float32)
image = tf.reshape(image, [784])
return image / 255.0
def decode_label(label):
label = tf.decode_raw(label, tf.uint8)
label = tf.reshape(label, [])
return tf.to_int32(label)
images = tf.data.FixedLengthRecordDataset(
images_file, 28 * 28, header_bytes=16).map(decode_image)
labels = tf.data.FixedLengthRecordDataset(
labels_file, 1, header_bytes=8).map(decode_label)
return tf.data.Dataset.zip((images, labels))
def mnist_train(directory):
return dataset(directory, 'train-images-idx3-ubyte',
'train-labels-idx1-ubyte')
def mnist_test(directory):
return dataset(directory, 't10k-images-idx3-ubyte', 't10k-labels-idx1-ubyte')
Explanation: 3. Case study: training MNIST with Keras
As we've seen, writing control flow in AutoGraph is easy. So running a training loop in graph should be easy as well!
Here, we show an example of such a training loop for a simple Keras model that trains on MNIST.
End of explanation
def mlp_model(input_shape):
model = tf.keras.Sequential((
tf.keras.layers.Dense(100, activation='relu', input_shape=input_shape),
tf.keras.layers.Dense(100, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax'),
))
model.build()
return model
Explanation: First, we'll define a small three-layer neural network using the Keras API
End of explanation
def predict(m, x, y):
y_p = m(x)
losses = tf.keras.losses.categorical_crossentropy(y, y_p)
l = tf.reduce_mean(losses)
accuracies = tf.keras.metrics.categorical_accuracy(y, y_p)
accuracy = tf.reduce_mean(accuracies)
return l, accuracy
Explanation: Let's connect the model definition (here abbreviated as m) to a loss function, so that we can train our model.
End of explanation
def fit(m, x, y, opt):
l, accuracy = predict(m, x, y)
opt.minimize(l)
return l, accuracy
Explanation: Now the final piece of the problem specification (before loading data, and clicking everything together) is backpropagating the loss through the model, and optimizing the weights using the gradient.
End of explanation
def setup_mnist_data(is_training, hp, batch_size):
if is_training:
ds = mnist_train('/tmp/autograph_mnist_data')
ds = ds.shuffle(batch_size * 10)
else:
ds = mnist_test('/tmp/autograph_mnist_data')
ds = ds.repeat()
ds = ds.batch(batch_size)
return ds
def get_next_batch(ds):
itr = ds.make_one_shot_iterator()
image, label = itr.get_next()
x = tf.to_float(tf.reshape(image, (-1, 28 * 28)))
y = tf.one_hot(tf.squeeze(label), 10)
return x, y
Explanation: These are some utility functions to download data and generate batches for training
End of explanation
def train(train_ds, test_ds, hp):
m = mlp_model((28 * 28,))
opt = tf.train.MomentumOptimizer(hp.learning_rate, 0.9)
train_losses = []
autograph.set_element_type(train_losses, tf.float32)
test_losses = []
autograph.set_element_type(test_losses, tf.float32)
train_accuracies = []
autograph.set_element_type(train_accuracies, tf.float32)
test_accuracies = []
autograph.set_element_type(test_accuracies, tf.float32)
i = 0
while i < hp.max_steps:
train_x, train_y = get_next_batch(train_ds)
test_x, test_y = get_next_batch(test_ds)
step_train_loss, step_train_accuracy = fit(m, train_x, train_y, opt)
step_test_loss, step_test_accuracy = predict(m, test_x, test_y)
if i % (hp.max_steps // 10) == 0:
print('Step', i, 'train loss:', step_train_loss, 'test loss:',
step_test_loss, 'train accuracy:', step_train_accuracy,
'test accuracy:', step_test_accuracy)
train_losses.append(step_train_loss)
test_losses.append(step_test_loss)
train_accuracies.append(step_train_accuracy)
test_accuracies.append(step_test_accuracy)
i += 1
return (autograph.stack(train_losses), autograph.stack(test_losses),
autograph.stack(train_accuracies),
autograph.stack(test_accuracies))
Explanation: This function specifies the main training loop. We instantiate the model (using the code above), instantiate an optimizer (here we'll use SGD with momentum, nothing too fancy), and we'll instantiate some lists to keep track of training and test loss and accuracy over time.
In the loop inside this function, we'll grab a batch of data, apply an update to the weights of our model to improve its performance, and then record its current training loss and accuracy. Every so often, we'll log some information about training as well.
End of explanation
def plot(train, test, label):
plt.title('MNIST model %s' % label)
plt.plot(train, label='train %s' % label)
plt.plot(test, label='test %s' % label)
plt.legend()
plt.xlabel('Training step')
plt.ylabel(label.capitalize())
plt.show()
with tf.Graph().as_default():
hp = tf.contrib.training.HParams(
learning_rate=0.05,
max_steps=tf.constant(500),
)
train_ds = setup_mnist_data(True, hp, 50)
test_ds = setup_mnist_data(False, hp, 1000)
tf_train = autograph.to_graph(train)
all_losses = tf_train(train_ds, test_ds, hp)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
(train_losses, test_losses, train_accuracies,
test_accuracies) = sess.run(all_losses)
plot(train_losses, test_losses, 'loss')
plot(train_accuracies, test_accuracies, 'accuracy')
Explanation: Everything is ready to go, let's train the model and plot its performance!
End of explanation
def parse(line):
  """Parses a line from the colors dataset.

  Args:
    line: A comma-separated string containing four items:
      color_name, red, green, and blue, representing the name and
      respectively the RGB value of the color, as an integer
      between 0 and 255.

  Returns:
    A tuple of three tensors (rgb, chars, length), of shapes: (batch_size, 3),
    (batch_size, max_sequence_length, 256) and respectively (batch_size).
  """
items = tf.string_split(tf.expand_dims(line, 0), ",").values
rgb = tf.string_to_number(items[1:], out_type=tf.float32) / 255.0
color_name = items[0]
chars = tf.one_hot(tf.decode_raw(color_name, tf.uint8), depth=256)
length = tf.cast(tf.shape(chars)[0], dtype=tf.int64)
return rgb, chars, length
def maybe_download(filename, work_directory, source_url):
  """Downloads the data from source url."""
if not tf.gfile.Exists(work_directory):
tf.gfile.MakeDirs(work_directory)
filepath = os.path.join(work_directory, filename)
if not tf.gfile.Exists(filepath):
temp_file_name, _ = six.moves.urllib.request.urlretrieve(source_url)
tf.gfile.Copy(temp_file_name, filepath)
with tf.gfile.GFile(filepath) as f:
size = f.size()
print('Successfully downloaded', filename, size, 'bytes.')
return filepath
def load_dataset(data_dir, url, batch_size, training=True):
  """Loads the colors data at path into a tf.PaddedDataset."""
path = maybe_download(os.path.basename(url), data_dir, url)
dataset = tf.data.TextLineDataset(path)
dataset = dataset.skip(1)
dataset = dataset.map(parse)
dataset = dataset.cache()
dataset = dataset.repeat()
if training:
dataset = dataset.shuffle(buffer_size=3000)
dataset = dataset.padded_batch(batch_size, padded_shapes=((None,), (None, None), ()))
return dataset
train_url = "https://raw.githubusercontent.com/random-forests/tensorflow-workshop/master/extras/colorbot/data/train.csv"
test_url = "https://raw.githubusercontent.com/random-forests/tensorflow-workshop/master/extras/colorbot/data/test.csv"
data_dir = "tmp/rnn/data"
Explanation: 4. Case study: building an RNN
In this exercise we build and train a model similar to the RNNColorbot model that was used in the main Eager notebook. The model is adapted for converting and training in graph mode.
To get started, we load the colorbot dataset. The code is identical to that used in the other exercise and its details are unimportant.
End of explanation
def model_components():
lower_cell = tf.contrib.rnn.LSTMBlockCell(256)
lower_cell.build(tf.TensorShape((None, 256)))
upper_cell = tf.contrib.rnn.LSTMBlockCell(128)
upper_cell.build(tf.TensorShape((None, 256)))
relu_layer = tf.layers.Dense(3, activation=tf.nn.relu)
relu_layer.build(tf.TensorShape((None, 128)))
return lower_cell, upper_cell, relu_layer
def rnn_layer(chars, cell, batch_size, training):
  """A simple RNN layer.

  Args:
    chars: A Tensor of shape (max_sequence_length, batch_size, input_size)
    cell: An object of type tf.contrib.rnn.LSTMBlockCell
    batch_size: Int, the batch size to use
    training: Boolean, whether the layer is used for training

  Returns:
    A Tensor of shape (max_sequence_length, batch_size, output_size).
  """
hidden_outputs = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
state, output = cell.zero_state(batch_size, tf.float32)
initial_state_shape = state.shape
initial_output_shape = output.shape
n = tf.shape(chars)[0]
i = 0
while i < n:
ch = chars[i]
cell_output, (state, output) = cell.call(ch, (state, output))
hidden_outputs.append(cell_output)
i += 1
hidden_outputs = autograph.stack(hidden_outputs)
if training:
hidden_outputs = tf.nn.dropout(hidden_outputs, 0.5)
return hidden_outputs
def model(inputs, lower_cell, upper_cell, relu_layer, batch_size, training):
  """RNNColorbot model.

  The model consists of two RNN layers (made by lower_cell and upper_cell),
  followed by a fully connected layer with ReLU activation.

  Args:
    inputs: A tuple (chars, length)
    lower_cell: An object of type tf.contrib.rnn.LSTMBlockCell
    upper_cell: An object of type tf.contrib.rnn.LSTMBlockCell
    relu_layer: An object of type tf.layers.Dense
    batch_size: Int, the batch size to use
    training: Boolean, whether the layer is used for training

  Returns:
    A Tensor of shape (batch_size, 3) - the model predictions.
  """
(chars, length) = inputs
chars_time_major = tf.transpose(chars, (1, 0, 2))
chars_time_major.set_shape((None, batch_size, 256))
hidden_outputs = rnn_layer(chars_time_major, lower_cell, batch_size, training)
final_outputs = rnn_layer(hidden_outputs, upper_cell, batch_size, training)
# Grab just the end-of-sequence from each output.
indices = tf.stack((length - 1, range(batch_size)), axis=1)
sequence_ends = tf.gather_nd(final_outputs, indices)
sequence_ends.set_shape((batch_size, 128))
return relu_layer(sequence_ends)
def loss_fn(labels, predictions):
return tf.reduce_mean((predictions - labels) ** 2)
Explanation: Next, we set up the RNNColorbot model, which is very similar to the one we used in the main exercise.
Autograph doesn't fully support classes yet (but it will soon!), so we'll write the model using simple functions.
End of explanation
def train(optimizer, train_data, lower_cell, upper_cell, relu_layer, batch_size, num_steps):
iterator = train_data.make_one_shot_iterator()
step = 0
while step < num_steps:
labels, chars, sequence_length = iterator.get_next()
predictions = model((chars, sequence_length), lower_cell, upper_cell, relu_layer, batch_size, training=True)
loss = loss_fn(labels, predictions)
optimizer.minimize(loss)
if step % (num_steps // 10) == 0:
print('Step', step, 'train loss', loss)
step += 1
return step
def test(eval_data, lower_cell, upper_cell, relu_layer, batch_size, num_steps):
total_loss = 0.0
iterator = eval_data.make_one_shot_iterator()
step = 0
while step < num_steps:
labels, chars, sequence_length = iterator.get_next()
predictions = model((chars, sequence_length), lower_cell, upper_cell, relu_layer, batch_size, training=False)
total_loss += loss_fn(labels, predictions)
step += 1
print('Test loss', total_loss)
return total_loss
def train_model(train_data, eval_data, batch_size, lower_cell, upper_cell, relu_layer, train_steps):
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train(optimizer, train_data, lower_cell, upper_cell, relu_layer, batch_size, num_steps=tf.constant(train_steps))
test(eval_data, lower_cell, upper_cell, relu_layer, 50, num_steps=tf.constant(2))
print('Colorbot is ready to generate colors!\n\n')
# In graph mode, every op needs to be a dependent of another op.
# Here, we create a no_op that will drive the execution of all other code in
# this function. Autograph will add the necessary control dependencies.
return tf.no_op()
Explanation: The train and test functions are also similar to the ones used in the Eager notebook. Since the network requires a fixed batch size, we'll train in a single shot, rather than by epoch.
End of explanation
@autograph.do_not_convert(run_as=autograph.RunMode.PY_FUNC)
def draw_prediction(color_name, pred):
pred = pred * 255
pred = pred.astype(np.uint8)
plt.axis('off')
plt.imshow(pred)
plt.title(color_name)
plt.show()
def inference(color_name, lower_cell, upper_cell, relu_layer):
_, chars, sequence_length = parse(color_name)
chars = tf.expand_dims(chars, 0)
sequence_length = tf.expand_dims(sequence_length, 0)
pred = model((chars, sequence_length), lower_cell, upper_cell, relu_layer, 1, training=False)
pred = tf.minimum(pred, 1.0)
pred = tf.expand_dims(pred, 0)
draw_prediction(color_name, pred)
# Create an op that will drive the entire function.
return tf.no_op()
Explanation: Finally, we add code to run inference on a single input, which we'll read from the input.
Note the do_not_convert annotation that lets us disable conversion for certain functions and run them as a py_func instead, so you can still call them from compiled code.
End of explanation
def run_input_loop(sess, inference_ops, color_name_placeholder):
  """Helper function that reads from input and calls the inference ops in a loop."""
tb = widgets.TabBar(["RNN Colorbot"])
while True:
with tb.output_to(0):
try:
color_name = six.moves.input("Give me a color name (or press 'enter' to exit): ")
except (EOFError, KeyboardInterrupt):
break
if not color_name:
break
with tb.output_to(0):
tb.clear_tab()
sess.run(inference_ops, {color_name_placeholder: color_name})
plt.show()
with tf.Graph().as_default():
# Read the data.
batch_size = 64
train_data = load_dataset(data_dir, train_url, batch_size)
eval_data = load_dataset(data_dir, test_url, 50, training=False)
# Create the model components.
lower_cell, upper_cell, relu_layer = model_components()
# Create the helper placeholder for inference.
color_name_placeholder = tf.placeholder(tf.string, shape=())
# Compile the train / test code.
tf_train_model = autograph.to_graph(train_model)
train_model_ops = tf_train_model(
train_data, eval_data, batch_size, lower_cell, upper_cell, relu_layer, train_steps=100)
# Compile the inference code.
tf_inference = autograph.to_graph(inference)
inference_ops = tf_inference(color_name_placeholder, lower_cell, upper_cell, relu_layer)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Run training and testing.
sess.run(train_model_ops)
# Run the inference loop.
run_input_loop(sess, inference_ops, color_name_placeholder)
Explanation: Finally, we put everything together.
Note that the entire training and testing code is all compiled into a single op (tf_train_model) that you only execute once! We also still use a sess.run loop for the inference part, because that requires keyboard input.
End of explanation |
794 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is to get myself familiar with pybedtools
import the pybedtools module
Step1: get the working directory and you can change to the directory you want by os.chdir(path)
list all the files in the directory os.listdir(path)
Step2: a is a BedTool object, one can access the object by index
Step3: see what type of the interval is by Interval.file_type
All features, no matter what the file type, have chrom, start, stop, name, score, and strand attributes. Note that start and stop are long integers, while everything else (including score) is a string.
Step4: interval can also be accessed by index or like a dictionary
Step5: slicing returns an itertools object that can be iterated
Step6: for each interval, one can access the chrom, start, and stop attributes by
Step7: Let's do some intersection for 2 bed files
Step8: use the Bedtools.intersect() method
Step9: one can add flags to the intersect call just as the command line intersectbed
-wa Write the original entry in A for each overlap. may have duplicated entries from A
-u Write original A entry once if any overlaps found in B. In other words, just report the fact at least one overlap was found in B.
The following toy example returns the same result for -u and -wa flag.
a.intersect(b, wa=True).head()
Step10: saving files
save the Bedtool object to a file, you can add a trackline.
Step11: one can chain the methods of pybedtools just like the pipe in the command line.
The following intersects a with b first and saves the intersection to a file.
Because the intersect() method returns a BedTool object, it can be chained with the .merge()
method, and finally the merged bed file is saved
Step12: demonstrate the filter method
grep out only the intervals with length bigger than 100
Step13: Let's use filter to extract the intervals with length >100
Step14: Or, use a more generic function
Step15: Then call this function inside the filter method
Step16: we got the same results as using the lambda. Using the len_filter function is more flexible, as you can supply any length that you want to filter on.
demonstrate the each() method
each() method can apply a function for each interval in the BedTool object.
It is similar to the apply functions in R
Let's add counts of how many hits in b intersect a
Step17: Normalize the counts by dividing by the length of the interval. Use a scalar of 0.001 to normalize the result to
counts per 1kb | Python Code:
import pybedtools
import sys
import os
Explanation: This notebook is to get myself familiar with pybedtools
import the pybedtools module
End of explanation
os.getcwd()
# use a pre-shipped bed file as an example
a = pybedtools.example_bedtool('a.bed')
Explanation: get the working directory and you can change to the directory you want by os.chdir(path)
list all the files in the directory os.listdir(path)
End of explanation
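A small sketch of the two calls mentioned above (the target directory is an illustrative assumption, so the chdir line is left commented out):
# os.chdir('/tmp')   # change the working directory; path is an assumption
os.listdir('.')      # list the files in the current working directory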
print a[0]
print a[1]
feature = a[1]
Explanation: a is a BedTool object, one can access the object by index
End of explanation
print feature.file_type
print feature
print feature.chrom
print feature.start
print feature.stop
print feature.name
print feature.score
print feature.strand
print feature.fields
Explanation: see what type of the interval is by Interval.file_type
All features, no matter what the file type, have chrom, start, stop, name, score, and strand attributes. Note that start and stop are long integers, while everything else (including score) is a string.
End of explanation
print feature[0]
print feature["chrom"]
print feature[1]
print feature["start"]
print a[1:4]
Explanation: interval can also be accessed by index or like a dictionary
End of explanation
for interval in a[1:4]:
print interval
Explanation: slicing returns an itertools object that can be iterated
End of explanation
for interval in a[1:4]:
print interval.chrom
Explanation: for each interval, one can access the chrom, start, and stop attributes by
End of explanation
a = pybedtools.example_bedtool('a.bed')
b = pybedtools.example_bedtool('b.bed')
print a.head() # print out only the first 10 lines if you have big bed file
print b.head()
Explanation: Let's do some intersection for 2 bed files
End of explanation
a_and_b = a.intersect(b)
a_and_b.head()
Explanation: use the Bedtools.intersect() method
End of explanation
a.intersect(b, u=True).head()
Explanation: one can add flags to the intersect call just as the command line intersectbed
-wa Write the original entry in A for each overlap. may have duplicated entries from A
-u Write original A entry once if any overlaps found in B. In other words, just report the fact at least one overlap was found in B.
The following toy example returns the same result for -u and -wa flag.
a.intersect(b, wa=True).head()
End of explanation
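The -wa variant mentioned in the text above, as a runnable cell:
a.intersect(b, wa=True).head()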
c = a_and_b.saveas('intersection-of-a-and-b.bed', trackline='track name="a and b"')
os.listdir(".")
print c
Explanation: saving files
save the Bedtool object to a file, you can add a trackline.
End of explanation
x4 = a\
.intersect(b, u=True)\
.saveas('a-with-b.bed')\
.merge()\
.saveas('a-with-b-merged.bed')
Explanation: one can chain the methods of pybedtools just like the pipe in the command line.
The following intersects a with b first and saves the intersection to a file.
Because the intersect() method returns a BedTool object, it can be chained with the .merge()
method, and finally the merged bed file is saved
End of explanation
print a
for interval in a:
print len(interval)
Explanation: demonstrate the filter method
grep out only the intervals with length bigger than 100
End of explanation
print a.filter(lambda x: len(x) >100)
Explanation: Let's use filter to extract the intervals with length >100
End of explanation
def len_filter(feature, L):
return len(feature) > L
Explanation: Or, use a more generic function
End of explanation
print a.filter(len_filter, 100)
Explanation: Then call this function inside the filter method:
End of explanation
with_count = a.intersect(b, c=True)
print with_count
Explanation: we got the same results as using the lambda. Using the len_filter function is more flexible, as you can supply any length that you want to filter on.
demonstrate the each() method
each() method can apply a function for each interval in the BedTool object.
It is similar to the apply functions in R
Let's add counts of how many hits in b intersect a
End of explanation
def normalize_count(feature, scalar=0.001):
count = float(feature[-1])
normalized_count = count/len(feature) * scalar
## write the score back, need to turn it to a string first
feature.score = str(normalized_count)
return feature
print with_count.each(normalize_count)
Explanation: Normalize the counts by dividing by the length of the interval. Use a scalar of 0.001 to normalize the result to
counts per 1kb
End of explanation |
795 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Bonus1
Step4: Bonus2
Step6: Bonus3
Step8: We may use lambda when no key is provided like so
Step10: Unit Tests | Python Code:
multimax([])
def multimax(iterable):
    """Return a list of all maximum values"""
try:
max_item = max(iterable)
except ValueError:
return []
return [
item
for item in iterable
if item == max_item
]
multimax([])
def multimax(iterable):
    """Return a list of all maximum values"""
max_item = max(iterable, default=None) # Using the default keyword-only argument of max prevents exception.
return [
item
for item in iterable
if item == max_item
]
multimax([])
Explanation: Bonus1: multimax function returns an empty list if the given iterable is empty
End of explanation
numbers = [1, 3, 8, 5, 4, 10, 6]
odds = (n for n in numbers if n % 2 == 1)
multimax(odds)
def multimax(iterable):
    """Return a list of all maximum values"""
maximums = []
for item in iterable:
if not maximums or maximums[0] == item:
maximums.append(item)
else:
if item > maximums[0]:
maximums = [item]
return maximums
multimax([])
multimax([1, 4, 2, 4, 3])
numbers = [1, 3, 8, 5, 4, 10, 6]
odds = (n for n in numbers if n % 2 == 1)
multimax(odds)
Explanation: Bonus2: multimax function will work with iterators (lazy iterables) such as files, zip objects, and generators
End of explanation
def multimax(iterable, key=None):
    """Return a list of all maximum values"""
if key is None:
def key(item): return item
maximums = []
key_max = None
for item in iterable:
k = key(item)
if k == key_max:
maximums.append(item)
elif not maximums or k > key_max:
key_max = k
maximums = [item]
return maximums
multimax([1, 2, 4, 3])
multimax([1, 4, 2, 4, 3])
numbers = [1, 3, 8, 5, 4, 10, 6]
odds = (n for n in numbers if n % 2 == 1)
multimax(odds)
multimax([])
words = ["cheese", "shop", "ministry", "of", "silly", "walks", "argument", "clinic"]
multimax(words, key=len)
Explanation: Bonus3: the multimax function accepts a keyword argument called "key" that is a function which will be used to determine the key by which to compare values as maximums
End of explanation
def multimax(iterable, key=lambda x: x):
    """Return a list of all maximum values"""
maximums = []
key_max = None
for item in iterable:
k = key(item)
if k == key_max:
maximums.append(item)
elif not maximums or k > key_max:
key_max = k
maximums = [item]
return maximums
Explanation: We may use lambda when no key is provided like so:
End of explanation
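A couple of quick checks of this lambda-default version (words is the list defined in an earlier cell):
multimax([1, 4, 2, 4, 3])
multimax(words, key=len)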
import unittest
class MultiMaxTests(unittest.TestCase):
    """Tests for multimax."""
def test_single_max(self):
self.assertEqual(multimax([1, 2, 4, 3]), [4])
def test_two_max(self):
self.assertEqual(multimax([1, 4, 2, 4, 3]), [4, 4])
def test_all_max(self):
self.assertEqual(multimax([1, 1, 1, 1, 1]), [1, 1, 1, 1, 1])
def test_lists(self):
inputs = [[0], [1], [], [0, 1], [1]]
expected = [[1], [1]]
self.assertEqual(multimax(inputs), expected)
def test_order_maintained(self):
inputs = [
(3, 2),
(2, 1),
(3, 2),
(2, 0),
(3, 2),
]
expected = [
inputs[0],
inputs[2],
inputs[4],
]
outputs = multimax(inputs)
self.assertEqual(outputs, expected)
self.assertIs(outputs[0], expected[0])
self.assertIs(outputs[1], expected[1])
self.assertIs(outputs[2], expected[2])
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_empty(self):
self.assertEqual(multimax([]), [])
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_iterator(self):
numbers = [1, 4, 2, 4, 3]
squares = (n**2 for n in numbers)
self.assertEqual(multimax(squares), [16, 16])
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_key_function(self):
words = ["alligator", "animal", "apple", "artichoke", "avalanche"]
outputs = ["alligator", "artichoke", "avalanche"]
self.assertEqual(multimax(words, key=len), outputs)
if __name__ == "__main__":
unittest.main(argv=['first-arg-is-ignored'], exit=False)
Explanation: Unit Tests
End of explanation |
796 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Twitter Sentiment analysis with Watson Tone Analyzer and Watson Personality Insights
<img style="max-width
Step1: Install latest pixiedust
Make sure you are running the latest pixiedust version. After upgrading restart the kernel before continuing to the next cells.
Step2: Install the streaming Twitter jar in the notebook from the Github repo
This jar file contains the Spark Streaming application (written in Scala) that connects to Twitter to fetch the tweets and send them to Watson Tone Analyzer for analysis. The resulting scores are then added to the tweets dataframe as separate columns.
Step3: <h3>If PixieDust or the streaming Twitter jar were just installed or upgraded, <span style="color
Step4: Create a tweets dataframe from the data fetched above and transfer it to Python
Notice the __ prefix for each variable which is used to signal PixieDust that the variable needs to be transfered back to Python
Step5: Group the tweets by author and userid
This will be used later to fetch the last 200 tweets for each author
Step6: Set up the Twitter API from python-twitter module
Step7: For each author, fetch the last 200 tweets
use flatMap to return a new RDD that contains a list of tuples composed of userid and tweets text
Step8: Concatenate all the tweets for each user so we have enough words to send to Watson Personality Insights
Use map to create an RDD of key, value pair composed of userId and tweets
Use reduceByKey to group all record with same author and concatenate the tweets
Step9: Call Watson Personality Insights on the text for each author
Watson Personality Insights requires at least 100 words from its lexicon to be available, which may not exist for each user. This is why the getPersonlityInsight helper function guards against exceptions from calling Watson PI. If an exception occurs, then an empty array is returned. Each record with empty array is filtered out of the resulting RDD.
Note also that we use broadcast variables to propagate the userName and password to the cluster
Step10: Convert the RDD back to a DataFrame and call PixieDust display to visualize the results
The schema is automatically created from introspecting a sample payload result from Watson Personality Insights
Step11: Compare Twitter users Personality Insights scores with this year presidential candidates
For a quick look on the difference in Personality Insights scores Spark provides a describe() function that computes stddev and mean values off the dataframe. Compare differences in the scores of twitter users and presidential candidates.
Step12: Calculate Euclidean distance (norm) between each Twitter user and the presidential candidates using the Personality Insights scores
Add the distances into 2 extra columns and display the results
Step13: Optional
Step14: Compute the sentiment distributions for tweets with scores greater than 60% and create matplotlib chart visualization
Step15: Compute the top hashtags used in each tweet
Step16: Compute the aggregate sentiment distribution for all the tweets that contain the top hashtags
Step17: Optional
Step18: The embedded app has generated a DataFrame called __tweets. Let's use it to do some data science | Python Code:
!pip install --user python-twitter
!pip install --user watson-developer-cloud
Explanation: Twitter Sentiment analysis with Watson Tone Analyzer and Watson Personality Insights
<img style="max-width: 800px; padding: 25px 0px;" src="https://ibm-watson-data-lab.github.io/spark.samples/Twitter%20Sentiment%20with%20Watson%20TA%20and%20PI%20architecture%20diagram.png"/>
In this notebook, we perform the following steps:
1. Install python-twitter and watson-developer-cloud modules
2. Install the streaming Twitter jar using PixieDust packageManager
3. Invoke the streaming Twitter app using the PixieDust Scala Bridge to get a DataFrame containing all the tweets enriched with Watson Tone Analyzer scores
4. Create a new RDD that groups the tweets by author and concatenates all the associated tweets into one blob
5. For each author and aggregated text, invoke the Watson Personality Insights to get the scores
6. Visualize results using PixieDust display
Learn more
Watson Tone Analyzer
Watson Personality Insights
python-twitter
watson-developer-cloud
PixieDust
Realtime Sentiment Analysis of Twitter Hashtags with Spark
Install python-twitter and watson-developer-cloud
If you haven't already installed the following modules, run these 2 cells:
End of explanation
!pip install --upgrade --user pixiedust
Explanation: Install latest pixiedust
Make sure you are running the latest pixiedust version. After upgrading restart the kernel before continuing to the next cells.
End of explanation
import pixiedust
jarPath = "https://github.com/ibm-watson-data-lab/spark.samples/raw/master/dist/streaming-twitter-assembly-1.6.jar"
pixiedust.installPackage(jarPath)
print("done")
Explanation: Install the streaming Twitter jar in the notebook from the Github repo
This jar file contains the Spark Streaming application (written in Scala) that connects to Twitter to fetch the tweets and send them to Watson Tone Analyzer for analysis. The resulting scores are then added to the tweets dataframe as separate columns.
End of explanation
import pixiedust
sqlContext=SQLContext(sc)
#Set up the twitter credentials, they will be used both in scala and python cells below
consumerKey = "XXXX"
consumerSecret = "XXXX"
accessToken = "XXXX"
accessTokenSecret = "XXXX"
#Set up the Watson Personality insight credentials
piUserName = "XXXX"
piPassword = "XXXX"
#Set up the Watson Tone Analyzer credentials
taUserName = "XXXX"
taPassword = "XXXX"
%%scala
val demo = com.ibm.cds.spark.samples.StreamingTwitter
demo.setConfig("twitter4j.oauth.consumerKey",consumerKey)
demo.setConfig("twitter4j.oauth.consumerSecret",consumerSecret)
demo.setConfig("twitter4j.oauth.accessToken",accessToken)
demo.setConfig("twitter4j.oauth.accessTokenSecret",accessTokenSecret)
demo.setConfig("watson.tone.url","https://gateway.watsonplatform.net/tone-analyzer/api")
demo.setConfig("watson.tone.password",taPassword)
demo.setConfig("watson.tone.username",taUserName)
import org.apache.spark.streaming._
demo.startTwitterStreaming(sc, Seconds(30)) //Run the application for a limited time
Explanation: <h3>If PixieDust or the streaming Twitter jar were just installed or upgraded, <span style="color: red">restart the kernel</span> before continuing.</h3>
Use Scala Bridge to run the command line version of the app
Insert your credentials for Twitter, Watson Tone Analyzer, and Watson Personality Insights. Then run the following cell.
Read how to provision these services and get credentials.
End of explanation
%%scala
val demo = com.ibm.cds.spark.samples.StreamingTwitter
val (__sqlContext, __df) = demo.createTwitterDataFrames(sc)
Explanation: Create a tweets dataframe from the data fetched above and transfer it to Python
Notice the __ prefix for each variable, which is used to signal PixieDust that the variable needs to be transferred back to Python
End of explanation
import pyspark.sql.functions as F
usersDF = __df.groupby("author", "userid").agg(F.avg("Anger").alias("Anger"), F.avg("Disgust").alias("Disgust"))
usersDF.show()
Explanation: Group the tweets by author and userid
This will be used later to fetch the last 200 tweets for each author
End of explanation
import twitter
api = twitter.Api(consumer_key=consumerKey,
consumer_secret=consumerSecret,
access_token_key=accessToken,
access_token_secret=accessTokenSecret)
#print(api.VerifyCredentials())
Explanation: Set up the Twitter API from python-twitter module
End of explanation
def getTweets(screenName):
statuses = api.GetUserTimeline(screen_name=screenName,
since_id=None,
max_id=None,
count=200,
include_rts=False,
trim_user=False,
exclude_replies=True)
return statuses
usersWithTweetsRDD = usersDF.flatMap(lambda s: [(s.user.screen_name, s.text.encode('ascii', 'ignore')) for s in getTweets(s['userid'])])
print(usersWithTweetsRDD.count())
Explanation: For each author, fetch the last 200 tweets
use flatMap to return a new RDD that contains a list of tuples composed of userid and tweets text: (userid, tweetText)
End of explanation
import re
usersWithTweetsRDD2 = usersWithTweetsRDD.map(lambda s: (s[0], s[1])).reduceByKey(lambda s,t: s + '\n' + t)\
.filter(lambda s: len(re.findall(r'\w+', s[1])) > 100 )
print(usersWithTweetsRDD2.count())
#usersWithTweetsRDD2.take(2)
Explanation: Concatenate all the tweets for each user so we have enough words to send to Watson Personality Insights
Use map to create an RDD of key, value pair composed of userId and tweets
Use reduceByKey to group all record with same author and concatenate the tweets
End of explanation
from pyspark.sql.types import *
from watson_developer_cloud import PersonalityInsightsV3
broadCastPIUsername = sc.broadcast(piUserName)
broadCastPIPassword = sc.broadcast(piPassword)
def getPersonalityInsight(text, schema=False):
personality_insights = PersonalityInsightsV3(
version='2016-10-20',
username=broadCastPIUsername.value,
password=broadCastPIPassword.value)
try:
p = personality_insights.profile(
text, content_type='text/plain',
raw_scores=True, consumption_preferences=True)
if schema:
return \
[StructField(t['name'], FloatType()) for t in p["needs"]] + \
[StructField(t['name'], FloatType()) for t in p["values"]] + \
[StructField(t['name'], FloatType()) for t in p['personality' ]]
else:
return \
[t['raw_score'] for t in p["needs"]] + \
[t['raw_score'] for t in p["values"]] + \
[t['raw_score'] for t in p['personality']]
except:
return []
usersWithPIRDD = usersWithTweetsRDD2.map(lambda s: [s[0]] + getPersonalityInsight(s[1])).filter(lambda s: len(s)>1)
print(usersWithPIRDD.count())
#usersWithPIRDD.take(2)
Explanation: Call Watson Personality Insights on the text for each author
Watson Personality Insights requires at least 100 words from its lexicon to be available, which may not exist for each user. This is why the getPersonalityInsight helper function guards against exceptions from calling Watson PI. If an exception occurs, then an empty array is returned. Each record with an empty array is filtered out of the resulting RDD.
Note also that we use broadcast variables to propagate the userName and password to the cluster
End of explanation
#convert to dataframe
schema = StructType(
[StructField('userid',StringType())] + getPersonalityInsight(usersWithTweetsRDD2.take(1)[0][1], schema=True)
)
usersWithPIDF = sqlContext.createDataFrame(
usersWithPIRDD, schema
)
usersWithPIDF.cache()
display(usersWithPIDF)
Explanation: Convert the RDD back to a DataFrame and call PixieDust display to visualize the results
The schema is automatically created from introspecting a sample payload result from Watson Personality Insights
End of explanation
candidates = "realDonaldTrump HillaryClinton".split(" ")
candidatesRDD = sc.parallelize(candidates)\
.flatMap(lambda s: [(t.user.screen_name, t.text.encode('ascii', 'ignore')) for t in getTweets(s)])\
.map(lambda s: (s[0], s[1]))\
.reduceByKey(lambda s,t: s + '\n' + t)\
.filter(lambda s: len(re.findall(r'\w+', s[1])) > 100 )\
.map(lambda s: [s[0]] + getPersonalityInsight(s[1]))
candidatesPIDF = sqlContext.createDataFrame(
candidatesRDD, schema
)
c = candidatesPIDF.collect()
broadCastTrumpPI = sc.broadcast(c[0][1:])
broadCastHillaryPI = sc.broadcast(c[1][1:])
display(candidatesPIDF)
candidatesPIDF.select('userid','Emotional range','Agreeableness', 'Extraversion','Conscientiousness', 'Openness').show()
usersWithPIDF.describe(['Emotional range']).show()
usersWithPIDF.describe(['Agreeableness']).show()
usersWithPIDF.describe(['Extraversion']).show()
usersWithPIDF.describe(['Conscientiousness']).show()
usersWithPIDF.describe(['Openness']).show()
Explanation: Compare Twitter users' Personality Insights scores with this year's presidential candidates
For a quick look at the differences in Personality Insights scores, Spark provides a describe() function that computes stddev and mean values over the DataFrame. Compare the differences between the scores of the Twitter users and the presidential candidates.
End of explanation
import numpy as np
from pyspark.sql.types import Row
def addEuclideanDistance(s):
dict = s.asDict()
def getEuclideanDistance(a,b):
return np.linalg.norm(np.array(a) - np.array(b)).item()
dict["distDonaldTrump"]=getEuclideanDistance(s[1:], broadCastTrumpPI.value)
dict["distHillary"]=getEuclideanDistance(s[1:], broadCastHillaryPI.value)
dict["closerHillary"] = "Yes" if dict["distHillary"] < dict["distDonaldTrump"] else "No"
return Row(**dict)
#add euclidean distances to Trump and Hillary
euclideanDF = sqlContext.createDataFrame(usersWithPIDF.map(lambda s: addEuclideanDistance(s)))
#Reorder columns to have userid and distances first
cols = euclideanDF.columns
reorderCols = ["userid","distHillary","distDonaldTrump", "closerHillary"]
euclideanDF = euclideanDF.select(reorderCols + [x for x in cols if x not in reorderCols])
#PixieDust display.
#To visualize the distribution, select the bar chart display, use closerHillary as key and value and aggregation=count
display(euclideanDF)
Explanation: Calculate Euclidean distance (norm) between each Twitter user and the presidential candidates using the Personality Insights scores
Add the distances into 2 extra columns and display the results
End of explanation
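To make the distance computation above concrete, here is a tiny self-contained check (illustrative values only) showing that np.linalg.norm of a difference vector is the usual Euclidean distance:
# illustrative check of the Euclidean distance used by addEuclideanDistance
import numpy as np
a = np.array([0.2, 0.5, 0.9])  # made-up score vector
b = np.array([0.1, 0.7, 0.4])  # made-up score vector
dist_norm = np.linalg.norm(a - b)            # what addEuclideanDistance uses
dist_manual = np.sqrt(np.sum((a - b) ** 2))  # explicit Euclidean formula
print(dist_norm, dist_manual)                # both print 0.5477...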
tweets=__df
tweets.count()
display(tweets)
Explanation: Optional: do some extra data science on the tweets
End of explanation
#create an array that will hold the count for each sentiment
sentimentDistribution=[0] * 13
#For each sentiment, run a sql query that counts the number of tweets for which the sentiment score is greater than 60%
#Store the data in the array
for i, sentiment in enumerate(tweets.columns[-13:]):
sentimentDistribution[i]=__sqlContext.sql("SELECT count(*) as sentCount FROM tweets where " + sentiment + " > 60")\
.collect()[0].sentCount
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
ind=np.arange(13)
width = 0.35
bar = plt.bar(ind, sentimentDistribution, width, color='g', label = "distributions")
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*2.5, plSize[1]*2) )
plt.ylabel('Tweet count')
plt.xlabel('Tone')
plt.title('Distribution of tweets by sentiments > 60%')
plt.xticks(ind+width, tweets.columns[-13:])
plt.legend()
plt.show()
Explanation: Compute the sentiment distribution for tweets with scores greater than 60% and create a matplotlib chart visualization
End of explanation
from operator import add
import re
tagsRDD = tweets.flatMap( lambda t: re.split("\s", t.text))\
.filter( lambda word: word.startswith("#") )\
.map( lambda word : (word, 1 ))\
.reduceByKey(add, 10).map(lambda (a,b): (b,a)).sortByKey(False).map(lambda (a,b):(b,a))
top10tags = tagsRDD.take(10)
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*2, plSize[1]*2) )
labels = [i[0] for i in top10tags]
sizes = [int(i[1]) for i in top10tags]
colors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral', "beige", "paleturquoise", "pink", "lightyellow", "coral"]
plt.pie(sizes, labels=labels, colors=colors,autopct='%1.1f%%', shadow=True, startangle=90)
plt.axis('equal')
plt.show()
Explanation: Compute the top hashtags used across all the tweets
End of explanation
cols = tweets.columns[-13:]
def expand( t ):
ret = []
for s in [i[0] for i in top10tags]:
if ( s in t.text ):
for tone in cols:
ret += [s.replace(':','').replace('-','') + u"-" + unicode(tone) + ":" + unicode(getattr(t, tone))]
return ret
def makeList(l):
return l if isinstance(l, list) else [l]
#Create RDD from tweets dataframe
tagsRDD = tweets.map(lambda t: t )
#Filter to only keep the entries that are in top10tags
tagsRDD = tagsRDD.filter( lambda t: any(s in t.text for s in [i[0] for i in top10tags] ) )
#Create a flatMap using the expand function defined above, this will be used to collect all the scores
#for a particular tag with the following format: Tag-Tone-ToneScore
tagsRDD = tagsRDD.flatMap( expand )
#Create a map indexed by Tag-Tone keys
tagsRDD = tagsRDD.map( lambda fullTag : (fullTag.split(":")[0], float( fullTag.split(":")[1]) ))
#Call combineByKey to format the data as follow
#Key=Tag-Tone
#Value=(count, sum_of_all_score_for_this_tone)
tagsRDD = tagsRDD.combineByKey((lambda x: (x,1)),
(lambda x, y: (x[0] + y, x[1] + 1)),
(lambda x, y: (x[0] + y[0], x[1] + y[1])))
#ReIndex the map to have the key be the Tag and value be (Tone, Average_score) tuple
#Key=Tag
#Value=(Tone, average_score)
tagsRDD = tagsRDD.map(lambda (key, ab): (key.split("-")[0], (key.split("-")[1], round(ab[0]/ab[1], 2))))
#Reduce the map on the Tag key, value becomes a list of (Tone,average_score) tuples
tagsRDD = tagsRDD.reduceByKey( lambda x, y : makeList(x) + makeList(y) )
#Sort the (Tone,average_score) tuples alphabetically by Tone
tagsRDD = tagsRDD.mapValues( lambda x : sorted(x) )
#Format the data as expected by the plotting code in the next cell.
#map the Values to a tuple as follow: ([list of tone], [list of average score])
#e.g. #someTag:([u'Agreeableness', u'Analytical', u'Anger', u'Cheerfulness', u'Confident', u'Conscientiousness', u'Negative', u'Openness', u'Tentative'], [1.0, 0.0, 0.0, 1.0, 0.0, 0.48, 0.0, 0.02, 0.0])
tagsRDD = tagsRDD.mapValues( lambda x : ([elt[0] for elt in x],[elt[1] for elt in x]) )
#Use custom sort function to sort the entries by order of appearance in top10tags
def customCompare( key ):
for (k,v) in top10tags:
if k == key:
return v
return 0
tagsRDD = tagsRDD.sortByKey(ascending=False, numPartitions=None, keyfunc = customCompare)
#Take the mean tone scores for the top 10 tags
top10tagsMeanScores = tagsRDD.take(10)
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*3, plSize[1]*2) )
top5tagsMeanScores = top10tagsMeanScores[:5]
width = 0
ind=np.arange(13)
(a,b) = top5tagsMeanScores[0]
labels=b[0]
colors = ["beige", "paleturquoise", "pink", "lightyellow", "coral", "lightgreen", "gainsboro", "aquamarine","c"]
idx=0
for key, value in top5tagsMeanScores:
plt.bar(ind + width, value[1], 0.15, color=colors[idx], label=key)
width += 0.15
idx += 1
plt.xticks(ind+0.3, labels)
plt.ylabel('AVERAGE SCORE')
plt.xlabel('TONES')
plt.title('Breakdown of top hashtags by sentiment tones')
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc='center',ncol=5, mode="expand", borderaxespad=0.)
plt.show()
Explanation: Compute the aggregate sentiment distribution for all the tweets that contain the top hashtags
End of explanation
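The combineByKey step above is the densest part of the pipeline; the following toy example (hypothetical data, reusing the same SparkContext sc as the rest of the notebook) shows the same (count, sum) -> average pattern in isolation:
# toy illustration of the combineByKey averaging pattern used above
pairs = sc.parallelize([("#tagA-Anger", 10.0), ("#tagA-Anger", 20.0), ("#tagB-Joy", 30.0)])
sumCount = pairs.combineByKey((lambda x: (x, 1)),
    (lambda acc, x: (acc[0] + x, acc[1] + 1)),
    (lambda acc1, acc2: (acc1[0] + acc2[0], acc1[1] + acc2[1])))
averages = sumCount.mapValues(lambda sc_pair: round(sc_pair[0] / sc_pair[1], 2))
print(averages.collect())  # e.g. [('#tagA-Anger', 15.0), ('#tagB-Joy', 30.0)] (order may vary)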
%%scala
val demo = com.ibm.cds.spark.samples.PixiedustStreamingTwitter
demo.setConfig("twitter4j.oauth.consumerKey",consumerKey)
demo.setConfig("twitter4j.oauth.consumerSecret",consumerSecret)
demo.setConfig("twitter4j.oauth.accessToken",accessToken)
demo.setConfig("twitter4j.oauth.accessTokenSecret",accessTokenSecret)
demo.setConfig("watson.tone.url","https://gateway.watsonplatform.net/tone-analyzer/api")
demo.setConfig("watson.tone.password",taPassword)
demo.setConfig("watson.tone.username",taUserName)
demo.setConfig("checkpointDir", System.getProperty("user.home") + "/pixiedust/ssc")
!pip install --upgrade --user pixiedust-twitterdemo
from pixiedust_twitterdemo import *
twitterDemo()
Explanation: Optional: Use the embedded Twitter demo app to run the same analysis with a UI
End of explanation
display(__tweets)
from pyspark.sql import Row
from pyspark.sql.types import *
emotions=__tweets.columns[-13:]
distrib = __tweets.flatMap(lambda t: [(x,t[x]) for x in emotions]).filter(lambda t: t[1]>60)\
.toDF(StructType([StructField('emotion',StringType()),StructField('score',DoubleType())]))
display(distrib)
__tweets.registerTempTable("pixiedust_tweets")
#create an array that will hold the count for each sentiment
sentimentDistribution=[0] * 13
#For each sentiment, run a sql query that counts the number of tweets for which the sentiment score is greater than 60%
#Store the data in the array
for i, sentiment in enumerate(__tweets.columns[-13:]):
sentimentDistribution[i]=sqlContext.sql("SELECT count(*) as sentCount FROM pixiedust_tweets where " + sentiment + " > 60")\
.collect()[0].sentCount
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
ind=np.arange(13)
width = 0.35
bar = plt.bar(ind, sentimentDistribution, width, color='g', label = "distributions")
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*2.5, plSize[1]*2) )
plt.ylabel('Tweet count')
plt.xlabel('Tone')
plt.title('Distribution of tweets by sentiments > 60%')
plt.xticks(ind+width, __tweets.columns[-13:])
plt.legend()
plt.show()
from operator import add
import re
tagsRDD = __tweets.flatMap( lambda t: re.split("\s", t.text))\
.filter( lambda word: word.startswith("#") )\
.map( lambda word : (word, 1 ))\
.reduceByKey(add, 10).map(lambda (a,b): (b,a)).sortByKey(False).map(lambda (a,b):(b,a))
top10tags = tagsRDD.take(10)
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*2, plSize[1]*2) )
labels = [i[0] for i in top10tags]
sizes = [int(i[1]) for i in top10tags]
colors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral', "beige", "paleturquoise", "pink", "lightyellow", "coral"]
plt.pie(sizes, labels=labels, colors=colors,autopct='%1.1f%%', shadow=True, startangle=90)
plt.axis('equal')
plt.show()
cols = __tweets.columns[-13:]
def expand( t ):
ret = []
for s in [i[0] for i in top10tags]:
if ( s in t.text ):
for tone in cols:
ret += [s.replace(':','').replace('-','') + u"-" + unicode(tone) + ":" + unicode(getattr(t, tone))]
return ret
def makeList(l):
return l if isinstance(l, list) else [l]
#Create RDD from tweets dataframe
tagsRDD = __tweets.map(lambda t: t )
#Filter to only keep the entries that are in top10tags
tagsRDD = tagsRDD.filter( lambda t: any(s in t.text for s in [i[0] for i in top10tags] ) )
#Create a flatMap using the expand function defined above, this will be used to collect all the scores
#for a particular tag with the following format: Tag-Tone-ToneScore
tagsRDD = tagsRDD.flatMap( expand )
#Create a map indexed by Tag-Tone keys
tagsRDD = tagsRDD.map( lambda fullTag : (fullTag.split(":")[0], float( fullTag.split(":")[1]) ))
#Call combineByKey to format the data as follow
#Key=Tag-Tone
#Value=(count, sum_of_all_score_for_this_tone)
tagsRDD = tagsRDD.combineByKey((lambda x: (x,1)),
(lambda x, y: (x[0] + y, x[1] + 1)),
(lambda x, y: (x[0] + y[0], x[1] + y[1])))
#ReIndex the map to have the key be the Tag and value be (Tone, Average_score) tuple
#Key=Tag
#Value=(Tone, average_score)
tagsRDD = tagsRDD.map(lambda (key, ab): (key.split("-")[0], (key.split("-")[1], round(ab[0]/ab[1], 2))))
#Reduce the map on the Tag key, value becomes a list of (Tone,average_score) tuples
tagsRDD = tagsRDD.reduceByKey( lambda x, y : makeList(x) + makeList(y) )
#Sort the (Tone,average_score) tuples alphabetically by Tone
tagsRDD = tagsRDD.mapValues( lambda x : sorted(x) )
#Format the data as expected by the plotting code in the next cell.
#map the Values to a tuple as follow: ([list of tone], [list of average score])
#e.g. #someTag:([u'Agreeableness', u'Analytical', u'Anger', u'Cheerfulness', u'Confident', u'Conscientiousness', u'Negative', u'Openness', u'Tentative'], [1.0, 0.0, 0.0, 1.0, 0.0, 0.48, 0.0, 0.02, 0.0])
tagsRDD = tagsRDD.mapValues( lambda x : ([elt[0] for elt in x],[elt[1] for elt in x]) )
#Use custom sort function to sort the entries by order of appearance in top10tags
def customCompare( key ):
for (k,v) in top10tags:
if k == key:
return v
return 0
tagsRDD = tagsRDD.sortByKey(ascending=False, numPartitions=None, keyfunc = customCompare)
#Take the mean tone scores for the top 10 tags
top10tagsMeanScores = tagsRDD.take(10)
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*3, plSize[1]*2) )
top5tagsMeanScores = top10tagsMeanScores[:5]
width = 0
ind=np.arange(13)
(a,b) = top5tagsMeanScores[0]
labels=b[0]
colors = ["beige", "paleturquoise", "pink", "lightyellow", "coral", "lightgreen", "gainsboro", "aquamarine","c"]
idx=0
for key, value in top5tagsMeanScores:
plt.bar(ind + width, value[1], 0.15, color=colors[idx], label=key)
width += 0.15
idx += 1
plt.xticks(ind+0.3, labels)
plt.ylabel('AVERAGE SCORE')
plt.xlabel('TONES')
plt.title('Breakdown of top hashtags by sentiment tones')
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc='center',ncol=5, mode="expand", borderaxespad=0.)
plt.show()
Explanation: The embedded app has generated a DataFrame called __tweets. Let's use it to do some data science
End of explanation |
797 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Database
This notebook demonstrates the persistent behavior of the database.
Initialization
Clear the file system for demonstration purposes.
Step1: Load the database server.
Step2: Load the database webserver.
Step3: Import the web interface and initialize it.
Step4: Generate Data
Let's create some dummy data to aid in our demonstration. You will need to import the timeseries package to work with the TimeSeries format.
Note: the database is persistent, so it can store data between sessions, but we will start with an empty database here for demonstration purposes.
Step5: Insert Data
Let's start by loading the data into the database, using the REST API web interface.
Step6: Inspect Data
Let's inspect the data, to make sure that all the previous operations were successful.
Step7: Let's generate an additional time series for similarity searches. We'll store the time series and the results of the similarity searches, so that we can compare against them after reloading the database.
Step8: Finally, let's store our iSAX tree representation.
Step9: Terminate and Reload Database
Now that we know that everything is loaded, let's close the database and re-open it.
Step10: Inspect Data
Let's repeat the previous tests to check whether our persistence architecture worked.
Step11: We have successfully reloaded all of the database components from disk! | Python Code:
# imports used throughout this notebook (assumed to come from an earlier setup cell in the original)
import os
import signal
import subprocess
import time
import numpy as np
from scipy.stats import norm
# database parameters
ts_length = 100
data_dir = '../db_files'
db_name = 'default'
dir_path = data_dir + '/' + db_name + '/'
# clear file system for testing
if not os.path.exists(dir_path):
os.makedirs(dir_path)
filelist = [dir_path + f for f in os.listdir(dir_path)]
for f in filelist:
os.remove(f)
Explanation: Time Series Database
This notebook demonstrates the persistent behavior of the database.
Initialization
Clear the file system for demonstration purposes.
End of explanation
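If preferred, the directory reset above can be written more compactly with shutil; this is an equivalent sketch, not part of the original notebook:
# equivalent sketch: wipe and recreate the database directory in two calls
import shutil
shutil.rmtree(dir_path, ignore_errors=True)  # remove the directory tree if it exists
os.makedirs(dir_path)                        # recreate an empty directory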
# when running from the terminal
# python go_server_persistent.py --ts_length 100 --db_name 'demo'
# here we load the server as a subprocess for demonstration purposes
server = subprocess.Popen(['python', '../go_server_persistent.py',
'--ts_length', str(ts_length), '--data_dir', data_dir, '--db_name', db_name])
time.sleep(5) # make sure it loads completely
Explanation: Load the database server.
End of explanation
# when running from the terminal
# python go_webserver.py
# here we load the server as a subprocess for demonstration purposes
webserver = subprocess.Popen(['python', '../go_webserver.py'])
time.sleep(5) # make sure it loads completely
Explanation: Load the database webserver.
End of explanation
from webserver import *
web_interface = WebInterface()
Explanation: Import the web interface and initialize it.
End of explanation
from timeseries import *
def tsmaker(m, s, j):
'''
Helper function: randomly generates a time series for testing.
Parameters
----------
m : float
Mean value for generating time series data
s : float
Standard deviation value for generating time series data
j : float
Quantifies the "jitter" to add to the time series data
Returns
-------
A time series and associated meta data.
'''
# generate metadata
meta = {}
meta['order'] = int(np.random.choice(
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]))
meta['blarg'] = int(np.random.choice([1, 2]))
# generate time series data
t = np.arange(0.0, 1.0, 0.01)
v = norm.pdf(t, m, s) + j * np.random.randn(ts_length)
# return time series and metadata
return meta, TimeSeries(t, v)
# generate sample time series
num_ts = 50
mus = np.random.uniform(low=0.0, high=1.0, size=num_ts)
sigs = np.random.uniform(low=0.05, high=0.4, size=num_ts)
jits = np.random.uniform(low=0.05, high=0.2, size=num_ts)
# initialize dictionaries for time series and their metadata
primary_keys = []
tsdict = {}
metadict = {}
# fill dictionaries with randomly generated entries for database
for i, m, s, j in zip(range(num_ts), mus, sigs, jits):
meta, tsrs = tsmaker(m, s, j) # generate data
pk = "ts-{}".format(i) # generate primary key
primary_keys.append(pk) # keep track of all primary keys
tsdict[pk] = tsrs # store time series data
metadict[pk] = meta # store metadata
# to assist with later testing
ts_keys = sorted(tsdict.keys())
# randomly choose time series as vantage points
num_vps = 5
vpkeys = list(np.random.choice(ts_keys, size=num_vps, replace=False))
vpdist = ['d_vp_{}'.format(i) for i in vpkeys]
Explanation: Generate Data
Let's create some dummy data to aid in our demonstration. You will need to import the timeseries package to work with the TimeSeries format.
Note: the database is persistent, so can store data between sessions, but we will start with an empty database here for demonstration purposes.
End of explanation
# check that the database is empty
web_interface.select()
# add stats trigger
web_interface.add_trigger('stats', 'insert_ts', ['mean', 'std'], None)
# insert the time series
for k in tsdict:
web_interface.insert_ts(k, tsdict[k])
# upsert the metadata
for k in tsdict:
web_interface.upsert_meta(k, metadict[k])
# add the vantage points
for i in range(num_vps):
web_interface.insert_vp(vpkeys[i])
Explanation: Insert Data
Let's start by loading the data into the database, using the REST API web interface.
End of explanation
# select all database entries; all metadata fields
results = web_interface.select(fields=[])
# we have the right number of database entries
assert len(results) == num_ts
# we have all the right primary keys
assert sorted(results.keys()) == ts_keys
# check that all the time series and metadata matches
for k in tsdict:
results = web_interface.select(fields=['ts'], md={'pk': k})
assert results[k]['ts'] == tsdict[k]
results = web_interface.select(fields=[], md={'pk': k})
for field in metadict[k]:
assert metadict[k][field] == results[k][field]
# check that the vantage points match
print('Vantage points selected:', vpkeys)
print('Vantage points in database:',
web_interface.select(fields=None, md={'vp': True}, additional={'sort_by': '+pk'}).keys())
# check that the vantage point distance fields have been created
print('Vantage point distance fields:', vpdist)
web_interface.select(fields=vpdist, additional={'sort_by': '+pk', 'limit': 1})
# check that the trigger has executed as expected (allowing for rounding errors)
for k in tsdict:
results = web_interface.select(fields=['mean', 'std'], md={'pk': k})
assert np.round(results[k]['mean'], 4) == np.round(tsdict[k].mean(), 4)
assert np.round(results[k]['std'], 4) == np.round(tsdict[k].std(), 4)
Explanation: Inspect Data
Let's inspect the data, to make sure that all the previous operations were successful.
End of explanation
_, query = tsmaker(np.random.uniform(low=0.0, high=1.0),
np.random.uniform(low=0.05, high=0.4),
np.random.uniform(low=0.05, high=0.2))
results_vp = web_interface.vp_similarity_search(query, 1)
results_vp
results_isax = web_interface.isax_similarity_search(query)
results_isax
Explanation: Let's generate an additional time series for similarity searches. We'll store the time series and the results of the similarity searches, so that we can compare against them after reloading the database.
End of explanation
results_tree = web_interface.isax_tree()
print(results_tree)
Explanation: Finally, let's store our iSAX tree representation.
End of explanation
os.kill(server.pid, signal.SIGINT)
time.sleep(5) # give it time to terminate
os.kill(webserver.pid, signal.SIGINT)
time.sleep(5) # give it time to terminate
web_interface = None
server = subprocess.Popen(['python', '../go_server_persistent.py',
'--ts_length', str(ts_length), '--data_dir', data_dir, '--db_name', db_name])
time.sleep(5) # give it time to load fully
webserver = subprocess.Popen(['python', '../go_webserver.py'])
time.sleep(5) # give it time to load fully
web_interface = WebInterface()
Explanation: Terminate and Reload Database
Now that we know that everything is loaded, let's close the database and re-open it.
End of explanation
# select all database entries; all metadata fields
results = web_interface.select(fields=[])
# we have the right number of database entries
assert len(results) == num_ts
# we have all the right primary keys
assert sorted(results.keys()) == ts_keys
# check that all the time series and metadata matches
for k in tsdict:
results = web_interface.select(fields=['ts'], md={'pk': k})
assert results[k]['ts'] == tsdict[k]
results = web_interface.select(fields=[], md={'pk': k})
for field in metadict[k]:
assert metadict[k][field] == results[k][field]
# check that the vantage points match
print('Vantage points selected:', vpkeys)
print('Vantage points in database:',
web_interface.select(fields=None, md={'vp': True}, additional={'sort_by': '+pk'}).keys())
# check that isax tree has fully reloaded
print(web_interface.isax_tree())
# compare vantage point search results
results_vp == web_interface.vp_similarity_search(query, 1)
# compare isax search results
results_isax == web_interface.isax_similarity_search(query)
# check that the trigger is still there by loading new data
# create test time series
_, test = tsmaker(np.random.uniform(low=0.0, high=1.0),
np.random.uniform(low=0.05, high=0.4),
np.random.uniform(low=0.05, high=0.2))
# insert test time series
web_interface.insert_ts('test', test)
# check that mean and standard deviation have been calculated
print(web_interface.select(fields=['mean', 'std'], md={'pk': 'test'}))
# remove test time series
web_interface.delete_ts('test');
Explanation: Inspect Data
Let's repeat the previous tests to check whether our persistence architecture worked.
End of explanation
# terminate processes before exiting
os.kill(server.pid, signal.SIGINT)
time.sleep(5) # give it time to terminate
web_interface = None
webserver.terminate()
Explanation: We have successfully reloaded all of the database components from disk!
End of explanation |
798 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The goal is to produce charts describing the monthly evolution of fuel prices since 1990.
Import of Openfisca-specific modules and of the fuel price data
Step1: Using the date as the index
Step2: Renaming the variables to make them more explicit
Step3: Producing the chart | Python Code:
%matplotlib inline
from ipp_macro_series_parser.agregats_transports.parser_cleaner_prix_carburants import prix_mensuel_carburants_90_15
from openfisca_france_indirect_taxation.examples.utils_example import graph_builder_carburants
Explanation: The goal is to produce charts describing the monthly evolution of fuel prices since 1990.
Import of Openfisca-specific modules and of the fuel price data
End of explanation
prix_mensuel_carburants_90_15[['annee'] + ['mois']] = prix_mensuel_carburants_90_15[['annee'] + ['mois']].astype(str)
prix_mensuel_carburants_90_15['date'] = \
prix_mensuel_carburants_90_15['annee'] + '_' + prix_mensuel_carburants_90_15['mois']
prix_mensuel_carburants_90_15 = prix_mensuel_carburants_90_15.set_index('date')
Explanation: Using the date as the index
End of explanation
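An alternative sketch (not in the original notebook): building a real DatetimeIndex instead of a string index makes time-based slicing and plotting easier. The 'date_dt' column name is hypothetical, and the sketch assumes 'annee' and 'mois' hold year and month values:
# sketch: parse year/month into a proper DatetimeIndex
import pandas as pd
prix_mensuel_carburants_90_15['date_dt'] = pd.to_datetime(
    prix_mensuel_carburants_90_15['annee'].astype(str) + '-' +
    prix_mensuel_carburants_90_15['mois'].astype(str).str.zfill(2) + '-01')
prix_mensuel_carburants_90_15 = prix_mensuel_carburants_90_15.set_index('date_dt')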
prix_mensuel_carburants_90_15.rename(columns = {'diesel_ht': 'prix diesel ht', 'diesel_ttc': 'prix diesel ttc',
'super_95_ht': 'prix SP95 ht', 'super_95_ttc': 'prix SP95 ttc'},
inplace = True)
Explanation: Renaming the variables to make them more explicit
End of explanation
print 'Evolution du prix des carburants entre 1990 et 2015'
graph_builder_carburants(
prix_mensuel_carburants_90_15[['prix SP95 ttc'] + ['prix diesel ttc'] + ['prix SP95 ht'] + ['prix diesel ht']],
'prix carburants', 0.39, 1.025, 'darkgreen', 'darkred', 'lawngreen', 'orangered')
Explanation: Producing the chart
End of explanation |
799 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Network Tour of Data Science
Xavier Bresson, Winter 2016/17
Exercise 4 - Code 2
Step1: Question 1a
Step2: Question 1b
Step3: Question 1c | Python Code:
# Load libraries
# Math
import numpy as np
# Visualization
%matplotlib notebook
import matplotlib.pyplot as plt
plt.rcParams.update({'figure.max_open_warning': 0})
from mpl_toolkits.axes_grid1 import make_axes_locatable
from scipy import ndimage
# Print output of LFR code
import subprocess
# Sparse matrix
import scipy.sparse
import scipy.sparse.linalg
# 3D visualization
import pylab
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import pyplot
# Import data
import scipy.io
# Import functions in lib folder
import sys
sys.path.insert(1, 'lib')
# Import helper functions
%load_ext autoreload
%autoreload 2
from lib.utils import construct_kernel
from lib.utils import compute_kernel_kmeans_EM
from lib.utils import compute_kernel_kmeans_spectral
from lib.utils import compute_purity
# Import distance function
import sklearn.metrics.pairwise
# Remove warnings
import warnings
warnings.filterwarnings("ignore")
# Load MNIST raw data images
mat = scipy.io.loadmat('datasets/mnist_raw_data.mat')
X = mat['Xraw']
n = X.shape[0]
d = X.shape[1]
Cgt = mat['Cgt'] - 1; Cgt = Cgt.squeeze()
nc = len(np.unique(Cgt))
print('Number of data =',n)
print('Data dimensionality =',d);
print('Number of classes =',nc);
Explanation: A Network Tour of Data Science
Xavier Bresson, Winter 2016/17
Exercise 4 - Code 2 : Unsupervised Learning
Unsupervised Clustering with Kernel K-Means
End of explanation
# Your code here
Explanation: Question 1a: What is the clustering accuracy of standard/linear K-Means?<br>
Hint: You may use Ker=construct_kernel(X,'linear') to compute the linear kernel,
[C_kmeans, En_kmeans]=compute_kernel_kmeans_EM(n_classes,Ker,Theta,10) with Theta=np.ones(n) to run the standard K-Means algorithm,
and accuracy = compute_purity(C_computed,C_solution,n_clusters) to obtain the accuracy.
End of explanation
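A possible sketch for Question 1a, based only on the helper signatures given in the hint (exact return values may differ depending on the lib version):
# sketch: linear kernel + EM kernel k-means ~ standard k-means
Theta = np.ones(n)  # unit weights
Ker = construct_kernel(X, 'linear')
C_kmeans, En_kmeans = compute_kernel_kmeans_EM(nc, Ker, Theta, 10)
acc = compute_purity(C_kmeans, Cgt, nc)
print('Linear K-Means accuracy =', acc)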
# Your code here
Explanation: Question 1b: What is the clustering accuracy for the kernel K-Means algorithm with<br>
(1) Gaussian Kernel for the EM approach and the Spectral approach?<br>
(2) Polynomial Kernel for the EM approach and the Spectral approach?<br>
Hint: You may use functions Ker=construct_kernel(X,'gaussian') and Ker=construct_kernel(X,'polynomial',[1,0,2]) to compute the non-linear kernels<br>
Hint: You may use functions C_kmeans,__ = compute_kernel_kmeans_EM(K,Ker,Theta,10) for the EM kernel KMeans algorithm and C_kmeans,__ = compute_kernel_kmeans_spectral(K,Ker,Theta,10) for the Spectral kernel K-Means algorithm.<br>
End of explanation
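A possible sketch for Question 1b along the same lines, with the kernel choices taken directly from the hints (again only a sketch, not a definitive solution):
# sketch: non-linear kernels with EM and spectral kernel k-means
Theta = np.ones(n)
for kernel_name, kernel_args in [('gaussian', None), ('polynomial', [1, 0, 2])]:
    if kernel_args is None:
        Ker = construct_kernel(X, kernel_name)
    else:
        Ker = construct_kernel(X, kernel_name, kernel_args)
    C_em, _ = compute_kernel_kmeans_EM(nc, Ker, Theta, 10)
    C_sp, _ = compute_kernel_kmeans_spectral(nc, Ker, Theta, 10)
    print(kernel_name, 'EM accuracy =', compute_purity(C_em, Cgt, nc))
    print(kernel_name, 'Spectral accuracy =', compute_purity(C_sp, Cgt, nc))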
# Your code here
Explanation: Question 1c: What is the clustering accuracy for the kernel K-Means algorithm with<br>
(1) KNN_Gaussian Kernel for the EM approach and the Spectral approach?<br>
(2) KNN_Cosine_Binary Kernel for the EM approach and the Spectral approach?<br>
You can test for the value KNN_kernel=50.<br>
Hint: You may use functions Ker = construct_kernel(X,'kNN_gaussian',KNN_kernel)
and Ker = construct_kernel(X,'kNN_cosine_binary',KNN_kernel) to compute the
non-linear kernels.
End of explanation |
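And a possible sketch for Question 1c with the kNN kernels from the hint, using KNN_kernel=50 as suggested (illustrative only):
# sketch: kNN-based kernels, KNN_kernel = 50 as suggested in the hint
KNN_kernel = 50
Theta = np.ones(n)
for kernel_name in ['kNN_gaussian', 'kNN_cosine_binary']:
    Ker = construct_kernel(X, kernel_name, KNN_kernel)
    C_em, _ = compute_kernel_kmeans_EM(nc, Ker, Theta, 10)
    C_sp, _ = compute_kernel_kmeans_spectral(nc, Ker, Theta, 10)
    print(kernel_name, 'EM accuracy =', compute_purity(C_em, Cgt, nc))
    print(kernel_name, 'Spectral accuracy =', compute_purity(C_sp, Cgt, nc))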